The purpose of the NextGen Test Bed is to provide an environment in which laboratory testing and real-world demonstrations help to show the benefits of NextGen technologies. The Test Bed also provides access to the systems currently used in the NAS, which allows for testing and evaluating the integration and interoperability of new technologies. In addition, the Test Bed is meant to bring together stakeholders early in the technology development process so participants can understand the benefits of operational improvements, identify potential risks and integration and interoperability issues, and foster partnerships between government and industry. Some test facilities also serve as a forum in which private companies can learn from and partner with each other and eventually enter into technology acquisition agreements with FAA with reduced risk. The NextGen Test Bed comprises three test facilities: (1) the Florida Test Bed at Daytona Beach International Airport, supported by Embry-Riddle Aeronautical University (Embry-Riddle); (2) the Texas Test Bed, a National Aeronautics and Space Administration (NASA) facility near the Dallas-Fort Worth Airport; and (3) the New Jersey Test Bed, located at FAA’s William J. Hughes Technical Center near Atlantic City. (See fig. 1.) According to FAA, while physically in different locations, the facilities are united in their purpose and will eventually be integrated to share capabilities and information. While sharing that common purpose, each facility offers different testing capabilities and brings together participants from different communities, as follows:

The Florida Test Bed is located in a private facility at which companies, including Lockheed Martin and Boeing, come together with academia and FAA to test technologies that fit into the NextGen vision.
Private participants contribute financially to research and demonstration projects and collaborate to test concepts and technologies. These activities are guided by memorandums of understanding among all the participants. Embry-Riddle is currently working on a model agreement to govern the contributions of its private partners that will help delineate which components (hardware, software, and infrastructure) will be provided by the government and which by private participants. The model is meant to provide a cost-sharing method and also help engage participants and provide a means for them to have a vested interest in seeing the development of the technology all the way through to implementation. Currently, FAA pays the operating costs of the Florida Test Bed while Embry-Riddle and participating companies contribute technology and technical staff. Private participants may invest directly in software or hardware support. The facility—which has just undergone an expansion—provides access to the systems currently used in the NAS and to some of the major navigation, surveillance, communications, and weather information programs that are under development. It also has a dedicated area to support demonstrations and a separate space for the participating companies to test integration—where a greater contribution from the private sector is envisioned.

The Texas Test Bed is a collaborative effort between NASA and FAA built on the grounds of FAA’s Fort Worth Air Route Traffic Control Center. It supports NextGen research through field evaluations, shadow testing, the evaluation of simulations, and data collection and analysis. The researchers at the facility have agreements to receive data feeds from the airlines operating at the Dallas-Fort Worth airport, as well as various data feeds from airport and air traffic control facilities.

The New Jersey Test Bed, located at FAA’s national scientific test base, conducts research and development for new NextGen systems.
In June 2010, this facility opened the NextGen Integration and Evaluation Capability area, where scientists use real-time simulation to explore, integrate, and evaluate NextGen concepts, such as area navigation, trajectory-based operations, and unmanned aircraft system operations in the NAS. In addition, in 2008, FAA entered into a lease to build the Next Generation Research and Technology Park (the Park) adjacent to the New Jersey Test Bed. The Park is a partnership intended to engage industry in a broad spectrum of research projects, with access to state-of-the-art federal laboratories. The Park’s establishment is meant to encourage the transfer of scientific and technical information, data, and know-how to and from the private sector and is consistent with FAA’s technology transfer goals. (See table 1 for examples of past and planned activities at NextGen test facilities.)

According to officials from the test facilities, they have made some progress in their plans to link the NextGen test facilities to integrate capabilities and share information. Linking the test facilities to leverage the benefits of each is part of the NextGen Test Bed concept. According to an FAA official, in June 2011, the Florida and New Jersey Test Beds established data integration capabilities when they were connected with FAA’s NextGen Research and Development computer network. During the summer, they used the integrated capabilities to participate in a demonstration of the Oceanic Conflict Advisory Trial (OCAT) system. In addition, the Texas Test Bed is in the final stages of being connected to FAA’s NextGen Research and Development computer network. According to officials at the Texas Test Bed, in the past year, FAA and NASA collaborated on a NextGen Test Bed capabilities analysis and developed an interagency agreement to support NextGen Test Bed collaboration. This increased level of coordination is expected to continue.
In prior work on technology transfer activities, we found that the success of test facilities as a means to leverage private sector resources depends in large part on the extent to which the private sector perceives benefits to its participation. Representatives of firms participating in test facility activities told us that tangible results—that is, the implementation of technologies they helped to develop—were important to maintain the private sector’s interest. However, they said it was not always clear what happened to technologies that were successfully tested at these sites. In some cases, it was not apparent whether the technology being tested had a clear path to implementation, or whether that technology had a clear place in FAA’s NAS Enterprise Architecture Infrastructure Roadmaps. As a result, some successfully tested technologies did not move to implementation in the NAS. We also found that FAA has had difficulty advancing technologies that cut across programs and offices at FAA when there is no clear “home” or “champion” within FAA for the technology. FAA’s expansion of the Test Bed concept—linking together its testing facilities, expanding the Florida Test Bed, and building a Research and Technology Park adjacent to the New Jersey Test Bed to complement the capabilities at Embry-Riddle—is a positive step that should help to address some of these issues, allowing private sector participants to remain more involved throughout the process, with a vested interest in seeing the development of selected technologies through to successful implementation.

In addition, to improve its ability to implement new technologies, FAA has begun to restructure its Air Traffic Organization (ATO), which is responsible for moving air traffic safely and efficiently, as well as for implementing NextGen.
We have previously reported on problems with FAA’s management structure and oversight of NextGen acquisitions and implementation and made recommendations designed to improve FAA’s ability to manage portfolios of capabilities across program offices. To address these issues, FAA made the Deputy Administrator responsible for the NextGen organization and created a new head of program management for NextGen-related programs to ensure improved oversight of NextGen implementation. Furthermore, the ATO is in the process of being divided into two branches: operations and NextGen program management. Operations will focus on the day-to-day management of the NAS, and the program management branch will be responsible for developing and implementing programs while working with operations to ensure proper integration. While a focus on accountability for NextGen implementation is a positive step and can help address issues with respect to finding the right “home” for technologies and creating a clearer path to implementation, it is too early to tell whether this reorganization will produce the desired results.

Collaboration among the NextGen partner agencies also depends, in part, on their perceiving positive outcomes. NASA has historically been FAA’s primary source of long-term air traffic management research and continues to lead research and development activities for many key elements of NextGen. However, past technology transfer efforts between NASA and FAA faced challenges at the transfer point between invention and acquisition, referred to as the “valley of death.” At this point in the process, NASA at times has had limited funding to continue beyond fundamental research, while the technology has not yet matured to a level at which FAA could assume the risks of investing in a technology that had not been demonstrated with a prototype or similar evidence.
FAA and NASA officials are working to address this issue through interagency agreements that commit NASA to maturing its research to a more advanced technological level than it has in the past. Using an interagency agreement, as well as test facility demonstrations, NASA developed and successfully transferred the Traffic Management Advisor—a program that uses graphical displays and alerts to increase situational awareness for air traffic controllers and traffic management coordinators—to FAA. Through the agreement, the two agencies established the necessary data feeds and two-way computer interfaces to support the program. NASA demonstrated the system’s capabilities at the Texas Test Bed, where it also conducted operational evaluations and transferred the program to FAA, which, after reengineering it for operational use, deployed it throughout the United States. FAA has also used research transition teams to coordinate research and transfer technologies from NASA and overcome technology transfer challenges. As we have previously reported, the design of these teams is consistent with several key practices of interagency coordination we have identified. These teams identify common outcomes, establish a joint strategy to achieve those outcomes, and define each agency’s roles and responsibilities, allowing FAA and NASA to overcome differences in agency missions, cultures, and established ways of doing business.

Differences in mission priorities, however, particularly between FAA and the Department of Homeland Security (DHS), and between FAA and the Department of Defense (DOD), pose a challenge to coordination with those agencies. DHS’s diverse set of mission priorities, ranging from aviation security to border protection, affects its level of involvement in NextGen activities.
Agency officials also have stated that although different offices within DHS are involved in related NextGen activities, such as security issues, the fact that NextGen implementation is not a formalized mission in DHS can affect its level of participation in NextGen activities. NextGen stakeholders reported that FAA could more effectively engage partner agencies in long-term planning by aligning implementation activities to agency mission priorities and by obtaining agency buy-in for actions required to transform the NAS. In addition, we have reported that FAA’s mechanisms for collaborating on research and technology development efforts with DOD and DHS do not ensure that resources are fully leveraged. For example, FAA and DOD have yet to fully identify what DOD research, technology, or expertise could support NextGen activities. DOD has not completed an inventory of its research and development portfolio related to NextGen, impeding FAA’s ability to identify and leverage potentially useful research, technology, or expertise from DOD. In addition, DHS’s collaboration with FAA and its NextGen planning unit, the Joint Planning and Development Office, has been limited in certain areas of NextGen research, and the agencies have yet to fully determine what can be leveraged. Lack of coordination between FAA and DOD and between FAA and DHS could result in duplicative research and inefficient use of resources at these agencies. We previously recommended that the agencies develop mechanisms to further clarify NextGen interagency collaborative priorities and enhance technology transfer between the agencies.

Chairman Mica, Ranking Member Rahall, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time. For further information on this testimony, please contact Gerald L. Dillingham, Ph.D., at (202) 512-2834 or [email protected]. 
In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Andrew Von Ah (Assistant Director), Kevin Egan, Elizabeth Eisenstadt, Richard Hung, Bert Japikse, Kieran McCarthy, and Jessica Wintfeld. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony discusses the use of test facilities as a means of leveraging public, private, and academic resources to deliver technologies for the Next Generation Air Transportation System (NextGen). NextGen will affect nearly every aspect of air transportation and will transform the way in which the air transportation system operates today. It is a complex undertaking that requires new technologies--including new integrated ground and aircraft systems--as well as new procedures, processes, and supporting infrastructure. The result will be an air transportation system that relies on satellite-based surveillance and navigation, data communications, and improved collaborative decision making. Transforming the nation's air transportation system affects and involves the activities and missions of several federal agencies, though the Federal Aviation Administration (FAA) is the lead implementer. In addition, NextGen was designed and planned to be developed in collaboration with aviation stakeholders--airlines and other airspace users, air traffic controllers, and avionics, aircraft, and automation systems manufacturers--in order to facilitate coordinated research activities, transfer technologies from FAA and partner agencies to the private sector, and take advantage of research and technology developed by the private sector that could meet NextGen needs, as appropriate. Three NextGen test facilities, collectively referred to as the NextGen Test Bed, are designed to foster the research and development of NextGen-related technologies and to evaluate integrated technologies and procedures for nationwide NextGen deployment. These test facilities provide access to the systems currently used in the national airspace system (NAS) and house various types of hardware, simulators, and other equipment to allow for demonstrations of new technologies. They also provide opportunities for stakeholders--public and private--to collaborate with FAA, academia, and each other. 
This statement discusses (1) the role of the NextGen test facilities in the development of NextGen technologies and how private industry and partner agencies participate in projects at the NextGen test facilities, and (2) our previous findings on NextGen technology transfer and FAA's efforts to improve the transfer and implementation of NextGen-related technologies. This statement is based on our prior NextGen-related reports and testimonies, updated with information we gathered from FAA and test facility officials in October 2011. The GAO reports cited in this statement contain more detailed explanations of the methods used to conduct our work, which we performed in accordance with generally accepted government auditing standards.

The role of the NextGen Test Bed is to demonstrate the benefits of NextGen initiatives and to do so early in the technology development process. While sharing a common purpose, each of the three facilities that collectively make up the NextGen Test Bed offers different testing capabilities and brings together different participants from different communities. Across the test facilities, private and public sector stakeholders contribute personnel, equipment, and funding to develop and integrate technologies. Linking the test facilities to leverage the benefits of each is part of the NextGen Test Bed concept, and officials from the test facilities indicated they have made some progress in doing so. In prior work on technology transfer activities, we found that the success of test facilities as a means to leverage private sector resources depends in large part on the extent to which the private sector perceives benefits to its participation. Similarly, collaboration among the NextGen partner agencies depends in part on their seeing outcomes that further their mission and on identifying a common purpose. 
FAA has taken a number of actions to improve its ability to implement new technologies and increase partner agencies' and private sector participants' involvement in seeing the development of selected technologies through to successful implementation--including restructuring the organization responsible for implementing NextGen, linking the test facilities, and improving their capabilities.
To determine the feasibility and utility of implementing a requirement that each nonimmigrant alien annually report a current address, we reviewed available documents concerning nonimmigrant alien address reporting requirements and interviewed headquarters officials from USCIS and ICE. At USCIS headquarters, we interviewed senior officials who were responsible for alien records management and benefit administration. At ICE headquarters, we interviewed two senior officials who were responsible for ICE compliance enforcement activities related to aliens. We also interviewed 15 ICE Assistant Special-Agents-in-Charge, supervisors, and special agents who are responsible for immigration enforcement activities in their Detroit, Michigan; Houston, Texas; Los Angeles, California; Miami, Florida; and New York, New York field offices. These offices, according to DHS data, are located in geographic regions where almost half of nonimmigrants likely to be subject to an annual address reporting requirement reside. The results of our interviews with agents in these five field offices may not be representative of the views and opinions of those in other field offices nationwide. We also interviewed an official from the Federal Bureau of Investigation’s (FBI) Foreign Terrorist Tracking Task Force (FTTTF). USCIS’s ORS staff provided cost estimates for existing change of address processing costs and for an annual nonimmigrant alien address reporting requirement. We attempted to obtain supporting explanations and documentation to verify these estimates but were not provided supporting information for all of them. On the basis of our efforts to determine the reliability of the estimates for which supporting information was provided, which included verifying calculations and bringing any discrepancies we found to officials’ attention, we believe that those estimates are sufficiently reliable for the purposes of this report. We did not use cost estimates for which supporting information was not provided. 
Through initial registration and change of address notifications, all aliens are to provide their identity and an address where they can be located while in the United States. USCIS receives and maintains alien address information for benefits administration and immigration law enforcement, and can share this information to help other law enforcement agencies identify and locate aliens for national security purposes. Generally, nonimmigrant aliens provide their identity and address information at the time of their entry and during their stay in the United States using 1 of 12 different forms. For example, nonimmigrant aliens arriving in the United States are generally required to complete the two-part Arrival and Departure Record (Form I-94). The first part records nonimmigrant aliens’ arrivals and includes the nonimmigrant alien’s address in the United States. The second part is to be surrendered when nonimmigrant aliens leave the country. DHS is to match the first and second parts of the Form I-94 to identify those nonimmigrant aliens who have left the country. However, as we reported in May 2004 and DOJ’s Inspector General reported in 1997 and 2002, legacy INS lacked many Form I-94 departure records, and as a result, INS could not identify all of the nonimmigrant aliens who had left the country.

Over the years, Congress established various requirements for immigrant and nonimmigrant aliens to report their addresses while residing in the United States. Currently, aliens are generally required to report their change of address to USCIS within 10 days of moving. Failure to report a change of address can result in an alien being taken into custody and placed in removal proceedings before an immigration judge. The alien can be fined, imprisoned for not more than 30 days, or removed. 
Because legacy INS did not inform aliens of change of address notification requirements when they entered the country, in our November 2002 report, we recommended that legacy INS publicize change of address notification requirements nationwide. According to USCIS officials, as of November 2004 this recommendation was not implemented because USCIS was in the process of revising the change of address form used by aliens and did not want to begin publicity efforts until the revised form was finalized. Figure 1 shows the evolution of nonimmigrant alien reporting requirements, beginning with the Alien Registration Act of 1940 and continuing to the present, with the 1981 amendments marking the repeal of annual reporting requirements for nonimmigrant aliens. Although Congress in 1981 eliminated the requirements that all aliens annually report their addresses and that nonimmigrants report their address every 90 days, it reinforced, through various acts, the importance of the government being able to identify the lawful entry of nonimmigrants into the United States. Specifically, prior to the terrorist attacks of September 11, 2001, Congress mandated that INS improve its ability to identify nonimmigrant aliens who arrive and depart the United States and who overstay their visas. The Illegal Immigration Reform and Immigrant Responsibility Act of 1996, for example, authorized the Attorney General to establish an electronic student tracking system to verify and monitor the foreign student and exchange visitor information program and develop an entry and exit control system to collect arrival and departure records for every alien entering and leaving the United States. After September 11, 2001, the USA PATRIOT Act, enacted in October 2001, re-emphasized the speedy implementation of an entry-exit system for U.S. visitors. 
In August 2002, DOJ issued a rule that became effective September 11, 2002, concerning the registration and monitoring of certain nonimmigrants. Under this rule, DOJ imposed special requirements on nonimmigrants from designated countries. For nonimmigrant aliens arriving in the United States, these requirements included being fingerprinted and photographed at the port of entry. The rule also required nonimmigrant aliens to reregister after 30 days and annually. In December 2003, DHS issued an interim rule suspending the 30-day and annual reregistration requirements that were in effect prior to that date. DHS determined that its United States Visitor and Immigrant Status Indicator Technology (US-VISIT) program and other new processes being implemented would meet these national security needs. Consistent with the above registration requirements, US-VISIT is part of the U.S. security measures for all visitors (with limited exemptions) holding nonimmigrant visas, regardless of country of origin. Specifically, US-VISIT’s program objectives include (1) collecting, maintaining, and sharing information (including address data) on aliens who enter and exit the United States; (2) identifying aliens who have violated the terms of their visit; and (3) detecting fraudulent travel documents, verifying traveler identity, and determining traveler admissibility through the use of biometrics.

USCIS officials told us that it would be technically feasible to implement an annual nonimmigrant alien address reporting requirement. The officials said that the current NIIS system for processing alien change of address forms could be upgraded to facilitate the nearly fourfold increase in processing volume that likely could result from implementing an annual nonimmigrant alien address reporting requirement. 
USCIS currently processes about 550,000 nonimmigrant change of address forms each year, and the officials estimated that about 2.6 million nonimmigrants could be required to report under an annual requirement. The officials estimated that an increase of over 2 million address reporting forms would increase USCIS’s current annual change of address form processing costs from about $1.6 million to at least $4.6 million per year. The estimate of the cost increase includes computer operations and maintenance, printing of address reporting forms, and additional data entry staff. USCIS’s estimate does not include the potentially substantial cost of enforcing the address reporting requirement, which would include hiring, training, and compensating additional ICE agents. USCIS is considering incorporating the current NIIS change of address system into the US-VISIT program. On October 4, 2004, officials from USCIS formally requested that DHS move responsibility for all alien change of address registration from USCIS’s ORS to the US-VISIT program. Although US-VISIT officials told us that placement of nonimmigrant change of address information responsibility within US-VISIT might be a viable option, the program is not currently designed to monitor aliens during their stay in the United States. These officials told us that incorporating address change data into the US-VISIT program would require a change in US-VISIT program requirements, including changes in US-VISIT’s budget and technical requirements. As of November 2004, DHS officials had not made a decision whether to integrate address change data into US-VISIT.

Sixteen of the 17 ICE agents we contacted in headquarters and in the field said that implementing an annual nonimmigrant alien address reporting requirement would have limited utility in assisting them in locating nonimmigrant aliens because the annual registration is based on self-reported information. 
ICE agents in Houston, Texas; Los Angeles, California; Miami, Florida; and New York, New York, responsible for immigration enforcement activities said that when conducting investigations, they do not use the NIIS change of address data currently submitted by nonimmigrants to help locate nonimmigrants as part of their investigations. They said that because the change of address information is self-reported data, it is often less reliable than data from other databases. According to the agents, nonimmigrants who intentionally are not in compliance with immigration or other laws or otherwise do not want to be contacted by the government are not likely to accurately self-report their address information to DHS under an annual requirement. However, these nonimmigrants might be found using non-DHS information systems. Nonimmigrant aliens who comply with address reporting requirements or seek DHS benefits might be found using existing DHS systems or other information sources. Still other nonimmigrants who may not be aware of address reporting requirements or forget to file might also be found using other existing systems. ICE agents said that they consider the data found in existing public source database systems such as department of motor vehicle records, credit bureaus, court filings, and Internet search engines that compile address and other information to be more current and reliable than self-reported change of address data housed in NIIS. Typically, nonimmigrants are located through public source databases because they have been involved in financial transactions, have driver’s licenses, and may participate in other activities (e.g., submitting an application to rent an apartment) resulting in information that can be tracked by investigators. For example, a nonimmigrant alien applying for credit from a financial institution is required and has an incentive to provide accurate address information. 
Because of data-sharing arrangements among financial institutions and credit bureaus, the address information provided by the nonimmigrant alien to the financial institution also is available through other public source databases, according to the agents. In some cases, nonimmigrant aliens find it to their advantage to keep DHS apprised of any address changes. According to agents we contacted, nonimmigrant aliens must provide correct and current address information to USCIS to request benefits such as a change in immigration status from visitor to student or from nonimmigrant to permanent resident status. The DHS databases that house address information on nonimmigrants seeking benefits are of some use for finding accurate address data. For example, USCIS uses the Computer-Linked Application Information Management System (CLAIMS) system to process requests for immigration benefits and enters updated address information into CLAIMS. The agents said that, consequently, they rely on CLAIMS as one source of nonimmigrant address information within DHS. However, address information in CLAIMS and NIIS is not linked in a manner such that an address change in one database would update the address in the other. In our November 2002 report, we recommended that legacy INS remedy this type of problem by ensuring that alien address information in all DHS databases is consistent and reliable. As of November 2004, this recommendation had not been implemented. 
Although almost all of the 17 agents we interviewed stated that self-reported nonimmigrant alien addresses would not be helpful in locating nonimmigrants, several agents described some possible benefits associated with an annual nonimmigrant alien address reporting requirement:

According to one Los Angeles ICE agent, implementing an annual nonimmigrant alien address requirement could be useful if biometric data (e.g., fingerprints and digitized photographs) were included with forms during the reporting process so that nonimmigrant alien registration forms could be traced to other DHS forms, such as visas, and also linked to a biometric identifier. The agent stated that linking nonimmigrant address information with biometric data included with forms, rather than with names entered on reporting forms, would assist in ensuring accuracy of the address information. However, current alien address notification plans do not address the potential costs or the feasibility of implementing such a biometric approach or any reengineering required to link any biometric indicators gathered by US-VISIT to alien address systems.

The FBI’s FTTTF official we interviewed stated that an annual nonimmigrant alien address reporting requirement could provide a useful list of nonimmigrants the task force could refer to during investigations of potential terrorists. If nonimmigrant aliens were required to report their current address annually and within a specified time period (for example, between January 1 and 15 of each year), the annual reporting requirement could allow federal investigators to refer to a list of nonimmigrants reported to be within the United States on the date the form was submitted to DHS. Federal investigators would, consequently, be able to use the annual address report as a source of data, supplemental to other sources of address information, according to the official. 
It is important to note that address information entered by nonimmigrants on the I-94 entrance form or US-VISIT information, coupled with compliance with the current change of address notification requirements, would provide this information, making an annual registration requirement redundant, assuming the alien provides accurate address information. Agents in ICE’s Detroit, Michigan, and Houston, Texas, field offices and one ICE headquarters official told us that violation of an annual address registration requirement could be used to allow ICE, in the absence of other charges, to temporarily detain nonimmigrant aliens for questioning regarding other potential crimes. However, as we reported in November 2002, violation of the current address notification requirements by aliens also provides a basis for temporary detention and questioning, but historically legacy INS infrequently enforced address reporting requirements. While implementing an annual registration requirement for nonimmigrants is feasible, the consensus among the USCIS and ICE headquarters officials and ICE field office agents we contacted was that a self-reporting system would be of limited use in locating the group of aliens who are not in compliance with immigration laws or otherwise do not want to be contacted by the government. Nonimmigrant aliens who do not wish to be located are not likely to comply with an annual requirement to self-report address information. Consequently, agents have used other databases to locate this class of alien and have found such databases to be more current and reliable than the existing self-reporting system. Potential benefits cited by law enforcement agents, such as the ability to verify that the nonimmigrant alien is still in the country and to provide a basis for detaining noncompliant aliens, might be available with current systems and law but have seldom been used.
For these reasons, it is questionable whether the usefulness of an annual reporting requirement would outweigh the cost of implementation and enforcement. We requested comments on a draft of this report from the Secretary of Homeland Security. DHS reviewed a draft of this report and had technical comments, which we incorporated as appropriate. We are sending copies of this letter to the Secretary of the Department of Homeland Security and interested congressional committees. We will also make copies available to others upon request. In addition, the letter will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this letter, please contact Darryl W. Dutton at (213) 830-1086 or me at (202) 512-8777. Key contributors to this letter are Samuel Van Wagner, Ben Atwater, Grace Coleman, Nancy Finley, and David Alexander.
Since 1940, Congress has provided a statutory framework that requires aliens entering or residing in the United States to provide address information. By 1981, aliens who remained in the United States for 30 days or more were required to initially register and report their address information and then to report their change of address only if they moved. In the months immediately following the terrorist attacks on September 11, 2001, federal investigators' efforts to locate and interview nearly one-half of the 4,112 nonimmigrant aliens they attempted to contact were impeded by lack of current address information. Nonimmigrant aliens are defined as those who seek temporary entry into the United States for a specific purpose, including those aliens who are in the country as students, international representatives, or temporary workers, or for business or pleasure. Because of growing concern over the government's need to locate aliens, the Enhanced Border Security and Visa Entry Reform Act of 2002 directed GAO to study the feasibility and the utility of a requirement that each nonimmigrant alien in the United States self-report a current address on a yearly basis. Department of Homeland Security (DHS) officials told us that while implementing an annual address reporting requirement for nonimmigrant aliens is technically feasible, such a requirement would increase the number of reporting forms DHS would have to process. In turn, this increase would raise form-processing costs from an estimated $1.6 million to at least an estimated $4.6 million per year, according to DHS, which does not include the cost of enforcing the annual reporting requirement. The consensus of U.S. Immigration and Customs Enforcement agents, who investigate activities that may violate immigration law, was that a self-reporting system would be of limited use in locating aliens who are avoiding contact with the government.
Nonimmigrant aliens who do not wish to be located are not likely to comply with an annual requirement to self-report address information. Consequently, agents use other databases to locate this class of alien as well as nonimmigrant aliens who may not be aware of address reporting requirements. Public and private databases that record information concerning benefits, an alien's department of motor vehicle records, or credit bureau information are examples of information sources that agents have used to locate nonimmigrant aliens. Despite the unreliability of self-reported information, some agents did recognize the possibility of limited enforcement benefits for implementing an annual address reporting requirement, such as verifying that compliant nonimmigrant aliens are still in the country and providing a basis for detaining noncompliant nonimmigrant aliens. However, existing systems are available for compliant nonimmigrant aliens to notify DHS of address changes. Also, DHS already has the authority to detain all aliens not in compliance with current change of address reporting requirements but has seldom used the authority. Consequently, it is questionable whether the usefulness of an annual reporting requirement would outweigh the cost of implementation and enforcement. DHS reviewed a draft of this report and had technical comments, which we incorporated as appropriate.
Title V of the 1992 reauthorization of the Juvenile Justice and Delinquency Prevention Act of 1974 authorizes OJJDP to award incentive grant funds to the states, which in turn are to award subgrants to units of general local government to support local juvenile delinquency prevention projects. Congress appropriated $13 million in fiscal year 1994 and $20 million in fiscal year 1995 for these purposes. Title V grant funds are to serve as stimuli for local governments to mobilize support from community leaders, develop multiyear prevention plans, and pool public and private resources in implementing programs designed to reduce the future incidence of delinquent behavior and youth crime through adoption of effective strategies that address risk factors for delinquency. To be eligible for Title V funds, the grantees are to provide a 50-percent match of the grant amount, including in-kind contributions (e.g., lease of office space or equipment paid by local government or private sources) to fund the activity. OJJDP also administers other programs such as the Title II Formula Grant Program. Title II of the 1974 act provides grants-in-aid to states and local governments to improve juvenile justice systems and to prevent and control delinquency. To receive and remain eligible for funds under Title V, jurisdictions must be in compliance with Title II formula grant program core requirements. The four key core requirements are (1) not detaining status offenders or nonoffenders (e.g., neglected children) in secure detention or correctional facilities, (2) not detaining or confining juveniles in any institution where they have contact with adult detainees, (3) not detaining or confining juveniles in adult jails or lockups, and (4) demonstration of efforts to reduce the disproportionate confinement of minority youth where it exists. 
According to the Department of Justice Delinquency Prevention Program guidelines, approximately 70 percent of the jurisdictions at one time or another have devoted 100 percent of available Title II formula grant funds toward meeting the four core requirements. As a result, many jurisdictions have been limited in the amounts of OJJDP Title II funds that they could devote to delinquency prevention. Since the Title V program started, OJJDP has issued two annual reports to Congress. Its 1994 report (1) highlighted activities and accomplishments during the first year of Title V implementation, (2) described efforts to foster interagency coordination of delinquency prevention activities, and (3) contained recommendations for future Title V activities. The 1995 report (1) described efforts to set the foundation for the success of Title V by capacity building (e.g., providing training and technical assistance) and establishing coordination and collaboration within Justice, between federal agencies, and at state and local levels; (2) identified early indications of success; and (3) provided conclusions on past and future contributions of Title V. According to OJJDP, the Title V Delinquency Prevention Program has been implemented on the basis of local adoption of “risk-focused prevention” strategies such as those identified in the social development prevention model, Communities That Care (CTC), developed by J. David Hawkins and Richard F. Catalano, Jr. of the University of Washington in Seattle. OJJDP guidelines call for jurisdictions and localities to consider this model, or comparable risk-focused prevention approaches, by (1) identifying risk factors known to be associated with delinquent behavior operating within communities, (2) assessing those protective factors that buffer the effect of the identified risk factors, and (3) targeting program interventions to occur at the earliest appropriate stage in a child’s development and within the local community. 
The CTC model defines five categories of risk factors that have been found to be predictive of juvenile delinquency: individual characteristics, such as alienation, rebelliousness, and lack of bonding to society; family influences, such as parental conflict, child abuse, poor family management practices, and history of problem behavior affecting the family (e.g., substance abuse, criminality, teen pregnancy, and dropping out of school); school experiences, such as early academic failure and lack of commitment to school; peer group influences, such as friends who engage in problem behavior (minor criminality, drugs, gangs, and violence); and community and neighborhood factors, such as economic deprivation, high rates of substance abuse and crime, and neighborhood disorganization. According to the CTC model, protective factors must be introduced to counter these risk factors. Protective factors are qualities or conditions that moderate a juvenile’s exposure to risk. Protective factors fall into three basic categories: (1) individual characteristics, such as a resilient temperament and a positive social orientation; (2) bonding with prosocial family members, teachers, adults, and friends; and (3) healthy beliefs and clear standards for behavior. Risk-focused delinquency prevention is intended to provide communities with a conceptual framework for (1) identifying and prioritizing risk factors, (2) assessing how current resources are being used, (3) identifying needed resources, and (4) choosing specific programs and strategies that directly address risk factors through the enhancement of protective factors. According to state and local officials, this approach requires a commitment by, and participation of, the entire community in developing and implementing a comprehensive strategy for preventing delinquency. The Title V program is implemented in two phases.
During phase one, the assessment and planning phase, communities interested in participating in the Title V Program must form a local prevention policy board (PPB) and conduct an assessment to identify and prioritize the risk factors operating within their community. On the basis of the risk factor assessment, the applicant community then must develop a comprehensive 3-year delinquency prevention plan that outlines specific programs and services to be implemented. This plan serves as the substantive basis for the community’s application to the state’s juvenile justice advisory group, or its designated administrative agency, for Title V funding. The programs and services to be implemented must be designed to reduce the impact of identified risk factors on children living in the applicant community. Phase two of the Title V process involves the implementation, monitoring, and evaluation of the programs and services specified during phase one, as well as the ongoing coordination of services within the applicant communities. (See app. II for an example that illustrates this process.) According to OJJDP, 49 of the 50 states, the District of Columbia, and 4 of 6 U.S. territories applied for and received Title V incentive grant funds administered by OJJDP. One state, Wyoming, opted out of the program. According to the Comptroller, Office of Justice Programs within Justice, as of March 29, 1996, OJJDP had awarded $29.6 million in Title V funds to the jurisdictions in fiscal years 1994 and 1995. Juvenile justice officials responding to our survey reported receiving 796 applications for subgrants of Title V funds during 1994 and 1995. As of December 31, 1995, they had awarded subgrants to 332 units of general local government. Table 1 shows the number of local governments applying for Title V subgrants and the number receiving subgrant awards in calendar years 1994 and 1995 for jurisdictions responding to our survey.
Some of the juvenile justice officials reported that 286 of the 796 subgrant applications were rejected, denied, or otherwise turned away in 1994 and 1995, specifically because of a lack of available Title V funds. Another 178 applications were not approved for other reasons. During calendar years 1994 and 1995, 45 jurisdictions reported awarding about $18.9 million of Title V funds to units of general local government. This represents 64 percent of the $29.6 million in Title V funds awarded to jurisdictions for fiscal years 1994 and 1995. These 332 subgrant awards provide partial support for 277 local juvenile delinquency prevention projects. In addition, OJJDP awarded $1 million for six grants under its Safe Futures Program. The Safe Futures Program provided fiscal year 1995 funds from nine program areas, including Title V, to fund comprehensive continuum of care programs in urban, rural, and Native American jurisdictions. About 77 percent (or 213) of the 277 projects that had received subgrant awards through December 31, 1995, were in their first year and 23 percent (or 64) were receiving their second year of Title V funds. Juvenile justice officials reported that 197 of the 277 local prevention projects (about 71 percent) were active and had spent about $3.6 million (or about 19 percent) of the Title V funds awarded in subgrants as of December 31, 1995. (See app. III for additional information on the number and dollar amount of Title V subgrant awards and expenditures, by jurisdiction.) The responding jurisdictions reported that the $18.9 million in federal funds awarded to localities were matched by an estimated $17.2 million in cash and in-kind contributions from jurisdictions, local governments, and nongovernmental sources; these matching funds were considerably more than the minimum of 50 percent of the federal share required by the act. Figure 1 illustrates the amount and relative proportion of matching funds reported by type and source. 
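The funding shares reported above can be checked with a short calculation. This is a minimal sketch: the dollar figures are taken from this report, and the rounding conventions are our own assumptions.

```python
# Title V funding figures reported in this section (in dollars).
federal_awarded_to_jurisdictions = 29_600_000  # awarded by OJJDP, FY 1994-1995
federal_subgranted_to_localities = 18_900_000  # awarded to local governments, CY 1994-1995
matching_funds_reported = 17_200_000           # cash and in-kind contributions

# Share of Title V funds passed through to local governments (~64 percent).
share_subgranted = federal_subgranted_to_localities / federal_awarded_to_jurisdictions
print(f"Share subgranted to localities: {share_subgranted:.0%}")

# The act requires a match of at least 50 percent of the federal share.
required_match = 0.5 * federal_subgranted_to_localities
print(f"Minimum required match: ${required_match:,.0f}")
print(f"Reported match exceeds the minimum: {matching_funds_reported > required_match}")
```

The reported $17.2 million in matching funds is well above the roughly $9.45 million minimum, consistent with the report's observation that matching funds were considerably more than the required 50 percent of the federal share.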
(Figure 1 data: state government in-kind contributions, $472,119, or 3 percent; local government cash, $6,144,137; local government in-kind contributions, $4,926,535.) Total funding for 277 local delinquency prevention projects (federal and matching shares) was $36.2 million over the 2-year period. (See app. IV for additional information on amounts and sources of matching funds, by jurisdiction.) Twenty-nine of the 51 responding jurisdictions administering Title V program activities reported that they retained about $900,000 (3 percent) of the $29.6 million in Title V funds awarded to them by OJJDP in fiscal years 1994 and 1995. As shown in figure 2, jurisdictions reported using these funds for such purposes as supporting program administration and management of Title V activities and providing technical assistance and training to local governments. (See app. V for additional information, by jurisdiction, on uses of Title V funds retained by juvenile justice agencies.) According to our survey, respondents provided the following information regarding the focus of Title V projects. In addition, our visits to six local projects also provided perspectives on the projects. About three-fourths (209) of the 277 projects reportedly emphasized both primary and secondary prevention in addressing multiple sets of risk factors in 3 or more problem areas, such as community, schools, peers, family, and individuals. About 90 percent (250) of the 277 projects reportedly employed 2 or more different “Promising Approaches” strategies advocated by the CTC model. About 74 percent of the projects employed two or more other program approaches or intervention strategies, and 88 percent of the projects used two or more program methodologies. About 58 percent (161) of the 277 projects reportedly focused solely on providing services to clients, such as youth or parents; and 36 percent (100) aimed at both changing organizations, agencies, rules, settings, institutions, or established practices, and providing services to clients (see fig. 3).
(Figure 3 data: 4 percent of projects focused only on systems changes, e.g., changing organizations.) Youth in early to middle adolescence (ages 12 through 16) were the primary target group reportedly addressed in over two-thirds of Title V projects; over half (54 percent) of the projects were reportedly addressed to elementary school age children (ages 5 through 11). Eighty-four percent (231) of the 276 projects for which data were available reportedly addressed delinquency problems in the general community, such as high rates of residential mobility, community social disorganization, low levels of attachment to the neighborhood, norms that favored adoption of delinquent or criminal values, and extreme economic deprivation. About 70 percent (194) of the 276 projects reportedly addressed problems in school settings, such as early and persistent antisocial behavior, academic failure, and lack of commitment to school. Sixty-eight percent (187) of the 276 projects reportedly addressed problems in the family domain such as poor management of interpersonal relationships among family members and others, family conflict, history of problem behavior from generation to generation, and parental attitudes favoring involvement in problem behaviors that can lead to delinquency and possibly a career of crime. About 45 percent (125) of the 276 projects reportedly addressed problems associated with peers and peer groups, such as friends who engage in delinquent behavior. Sixty-five percent (178) of the 276 projects reportedly addressed problems that are exhibited by individuals, such as alienation and rebelliousness, development of favorable attitudes toward misconduct, and early introduction or initiation of behavior problems. The objectives of these 276 projects reportedly addressed various delinquency risk factors in targeting program intervention strategies and methods.
Four of the 17 juvenile delinquency risk factors we identified in our survey were rated as both significant and of high priority to the community by over one-half of the 276 local juvenile delinquency projects. These include (1) family management problems (66 percent of the projects), (2) availability of drugs (58 percent of the projects), (3) academic failure (52 percent of the projects), and (4) friends who engage in problem behaviors (51 percent of the projects). Nine additional risk factors were identified as significant and of high priority by local PPBs in over one-third of local delinquency prevention projects and include, in descending order of frequency: (1) family conflict, (2) high incidence/prevalence of violations of community laws and norms, (3) low neighborhood attachment and high levels of community disorganization, (4) parental attitudes and conduct that favor involvement in delinquent or problem behaviors, (5) early and persistent antisocial behavior, (6) extreme economic deprivation, (7) alienation and rebelliousness, (8) attitudes favorable toward or condoning problem behavior, and (9) a family history of problem behavior. Local delinquency prevention projects were reportedly using a wide variety of program intervention strategies and prevention methods. About 90 percent (or 250) of 277 projects reportedly utilized 2 or more “Promising Approaches” advocated by the CTC model. Community mobilization, parent training, and after-school programs were the 3 most frequently cited of the 16 CTC-advocated strategies, closely followed by community/school policies, family therapy, school behavior management strategies, and mentoring with behavioral management. Two-thirds of the 266 projects reportedly employed community-based outreach services to involve and work with parents, families, and juveniles and stress programs for positive youth development that assist at-risk youth.
About 50 percent of the projects reportedly embraced comprehensive programs that meet the needs of youth through the collaboration of local youth/family service systems. The following methods were used by at least one-third of the projects reporting on their local delinquency prevention programming—parent training, school-based education, drug and alcohol abuse prevention, family and peer counseling, outreach, recreation, and services coordination. During site visits made to six local delinquency projects in three states we found that each project addressed itself to an array of delinquency risk factors, protective factors, and program strategies designed to prevent juvenile delinquency in their communities. A brief description of each of the six projects we visited is provided in appendix VI. Twenty-six jurisdictions that had awarded subgrants reported that 83 of the 332 subgrantees (about 25 percent) receiving Title V funds also received $6.1 million in Title II formula grant money in fiscal years 1994 and 1995. About one-half of the 51 jurisdictions responded that the availability of federal funds under Title V encouraged localities in their jurisdictions to comply with the Title II core requirements to qualify for and receive subgrants under Title V. About three-fourths of the 51 jurisdictions indicated Title V incentive grant activities were supportive to a great or very great extent of the overall goals of the formula grant programs in their jurisdictions. About two-thirds of the 51 jurisdictions reported that the requirement to comply with Title II core requirements to be eligible to receive Title V funding was not a barrier to local government participation in Title V program activities. Thirty percent said that compliance requirements were a barrier to participation and the remaining 4 percent said that they did not know. 
Of the 51 jurisdictions responding to our survey, 7 reported rejecting, denying, or turning away 23 subgrant applications for Title V funding in 1994 and 1995 because the units of general local government applying for the subgrants were not in compliance with Title II core requirements. This number (23) represents about 3 percent of the 796 local governments that applied for local delinquency prevention subgrants under Title V. Officials in 19 jurisdictions reported that $319 million in state funds were devoted to support delinquency prevention activities in 1995, in addition to that allocated and committed as matching funds in support of Title V projects. But the majority of state juvenile justice officials (31) reported they did not know how much state money was devoted to support delinquency prevention in their jurisdictions. Only a few jurisdictions provided information on amounts of other money provided in 1995 by local governments, not-for-profit and charitable organizations, for-profit businesses, or other nongovernmental organizations which was devoted to support juvenile delinquency prevention activities. (See table 2.) A summary of preliminary information OJJDP received from nine other federal agencies indicates that approximately $4.3 billion was spent to support juvenile delinquency related prevention, juvenile justice, or youth-related programs and activities in fiscal year 1995. For example, the National Institutes of Health reported spending approximately $54 million (0.6 percent of its $9.1 billion budget for grants-in-aid) on delinquency prevention activities. OJJDP pointed out that, while some agencies provided detailed information regarding the levels of funds spent directly on youth-related programs and activities, others were not able to break out the amount of funds directly spent for such purposes. 
For example, the Department of Labor indicated that it spent $3.5 billion in youth-related programs (including the Summer Youth Employment Program, School-to-Work Program, and Job Corps), which would account for approximately 40 percent of Labor’s total agency budget in fiscal year 1992. On July 10, 1996, we met with Department of Justice officials, including the Deputy Administrator of OJJDP. The officials agreed with the material in the report. Their comments have been incorporated where appropriate. We are sending copies of this report to the Attorney General; Administrator, OJJDP; Director, Office of Management and Budget; and other interested parties. Copies also will be made available to others upon request. The major contributors to this report are listed in appendix VII. Should you need additional information or have questions about this report, please contact me on (202) 512-8777. The 1992 reauthorization of the Juvenile Justice and Delinquency Prevention Act of 1974 (42 U.S.C. 5781 note) requires us to prepare and submit to Congress a study of the Title V program. On the basis of discussions with your offices, we agreed to provide information on the status of the Title V program, including a description of the types of projects for which incentive grant funds are being used. Specifically, we agreed to determine (1) which states and how many units of local government applied for and received Title V incentive grant funds; (2) how much fiscal years 1994 and 1995 grant money had been awarded and how much had been spent as of December 31, 1995; (3) the sources and amounts of matching funds committed to local delinquency prevention projects; (4) what Title V funds were used for; (5) whether Title II eligibility requirements have affected Title V participation; and (6) what funding, other than Title V, was provided to support local delinquency prevention activities.
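The agency spending shares cited above can be verified with simple arithmetic. This is an illustrative check only; the dollar figures come from this report, and the implied Labor budget is an inference from the stated 40 percent share.

```python
# Federal spending figures cited in this report (in dollars).
nih_delinquency_spending = 54_000_000        # NIH delinquency prevention spending
nih_grants_in_aid_budget = 9_100_000_000     # NIH grants-in-aid budget
labor_youth_spending = 3_500_000_000         # Labor youth-related program spending

# NIH share of its grants-in-aid budget (~0.6 percent, as reported).
nih_share = nih_delinquency_spending / nih_grants_in_aid_budget
print(f"NIH share: {nih_share:.1%}")

# If $3.5 billion is about 40 percent of Labor's total agency budget,
# the implied total budget is roughly $8.75 billion.
implied_labor_budget = labor_youth_spending / 0.40
print(f"Implied Labor budget: ${implied_labor_budget / 1e9:.2f} billion")
```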
To answer these questions, we collected descriptive data and other information using a structured data collection instrument in a nationwide survey of the 50 states, the District of Columbia, and 5 U.S. territories; discussed the status of program implementation with OJJDP officials and selected state and local juvenile justice officials; and conducted site visits with 6 local delinquency prevention projects in 3 states to observe how some Title V funds are being used. We selected the states we visited because they were in the same geographic region as Washington, D.C., and had relatively large amounts of Title V funding as compared to other states. We selected the specific projects to visit on the basis of discussions with state and OJJDP officials. Specifically, we focused on projects that were (1) active, (2) reported by state officials to be representative of Title V projects, and (3) diverse in their goals and objectives. In developing the survey, we discussed the questions with state juvenile justice specialists. Our survey asked state juvenile justice specialists to identify the types of local delinquency prevention projects being supported by subgrant awards. State juvenile justice officials used checklists of information categories that we developed to provide descriptive information for each local delinquency prevention project supported by a subgrant award in their state in calendar years 1994 or 1995. After receiving their responses, we conducted edit checks of key responses for completeness. When necessary, we contacted respondents to resolve any apparent inconsistencies. In addition, we compared the total funding they reported for Title V with that provided by OJJDP to ensure completeness and consistency of survey responses. We did not receive responses from the Commonwealth of Puerto Rico, the U.S. Virgin Islands, Guam, and the Northern Mariana Islands. 
We did our work between October 1995 and June 1996 in accordance with generally accepted government auditing standards. Blair County’s project is a countywide comprehensive juvenile delinquency prevention program using the CTC model. Blair County’s process illustrated how a local community developed a comprehensive plan and implemented a multifaceted delinquency prevention project as a collaborative communitywide effort. Blair County officials contacted the Pennsylvania Commission on Crime and Delinquency to inform state officials of the county’s interest in delinquency prevention. Blair County officials sent a team led by a judge and a county commissioner to the OJJDP-sponsored Key Leader Orientation training session on the Title V Program held in June 1994. Other team members included a county school superintendent, the county human services director, and the county’s chief juvenile probation officer. The county adopted the risk-focused approach using the CTC model. In July 1994, the Board of Blair County’s Family Resource Center accepted responsibility for developing and overseeing what would become the Title V project—the “Blair County Comprehensive Juvenile Delinquency Prevention Program.” A local PPB was formed to further develop and steer the program. The PPB was composed of the staff of the Family Resource Center and has since expanded to 32 members. We developed the following descriptions of six projects from visits to local project sites, interviews with project staff, observations of project activities, discussions with county and state officials, and documentation and comments they provided. We did not verify the information provided.
Blair County, Pennsylvania, adopted a risk-focused approach based on the CTC model in designing and implementing a comprehensive program to address factors that lead or contribute to juvenile delinquency and crime. Risk Factors: Blair County assigned the highest priority and significance to the need to deal with (1) extreme economic and social deprivation, (2) family management problems, (3) family conflict, and (4) early and persistent antisocial behaviors. Resources Assessment: A resources analysis commissioned by the PPB identified an array of services and diverse funding streams, but also identified a lack of existing parenting programs to assist families in handling conflict and managing family problems that can lead to delinquency. The assessment also revealed the need for countywide community/media mobilization efforts to generate action in all segments of the community to address serious and increasingly costly delinquency problems. Goals: The overall goal of the project was to increase family and community prosocial bonding and improve standards of behavior among children while reducing risk factors that lead to adolescent problem behaviors. Objectives: Project objectives were established in each of three areas of concern—community, family, and school—to address four sets of risk factors. To address extreme economic and social deprivation, the project promoted collaborative programs and activities to increase opportunities for job readiness, skill development, and positive social bonding to increase the economic and social stability of children and families. This was intended to increase the likelihood that children and adolescents will find positive alternatives to engaging in delinquent activities. 
Family management problems and family conflict were addressed through increasing the availability and accessibility of parenting programs to improve family members’ abilities to practice effective management techniques, cope with stress, and reduce violent behavior within the family unit and among individual members. Problems associated with early and persistent antisocial behaviors by children were being addressed through a school-based program of training in conflict resolution. This program included the parents and siblings of at-risk elementary school children in order to reduce the incidence of adolescent problem behaviors that can lead to delinquency and crime.

Intervention Strategy: Community/media mobilization and parent training.

Project Description: Project activities included (1) organizing focus groups, with participants such as county agency officials, to discuss and develop strategies for coordinating county job-readiness and training programs for children, adolescents, and adults; (2) conducting media campaigns to heighten awareness of and involvement in program activities and services; (3) developing and implementing parenting education programs; (4) identifying and supporting early education providers in developing positive behaviors among children; and (5) providing school-based programs of conflict resolution and prosocial skills training. An individual was hired to mobilize the community and the media through speaking engagements to promote the program. His activities included providing community orientations, carrying out a media campaign that promoted the availability of resources for children and families, developing a list of all job-readiness training programs, conducting workshops, and promoting positive community values. Incentives and supports for parenting education programs were being developed, and services were being made available to families identified as being in need.
Target Area/Group: Blair County’s program was organized as a countywide comprehensive juvenile delinquency prevention program using the CTC model. Some project activities were targeted at all age groups, while others concentrated on young parents, elementary school children, or adolescents.

Project Period: April 1995 to March 31, 1997.

The Dauphin County Human Services Department, serving the Greater Harrisburg, Pennsylvania, metropolitan area, contracted with a private nonprofit community development organization, the Community Action Commission, to operate its delinquency prevention project. Dauphin County’s Title V project adopted the CTC model but was part of a larger community development effort. Harrisburg was an “Enterprise Community,” which made it eligible for $3 million in federal Social Services Block Grant funds to support economic initiatives.

Risk Factors: Four sets of risk factors were determined to be significant and of highest priority at the onset of this project: (1) low neighborhood attachment and community disorganization, (2) extreme economic and social deprivation, (3) family management problems, and (4) early and persistent antisocial behavior.

Resources Assessment: The resources assessment revealed a gap reflecting a lack of resources devoted to community organization and collaborative planning.

Goals: Three goals were set: (1) economic empowerment—to encourage healthy beliefs by youth regarding their economic futures, (2) family support—to strengthen internal management capacities of families with young children, and (3) mobilization against violence—to create a nonviolent culture among and around youth and their families.

Objectives: Dauphin County established 12 objectives for its Title V project. Five of the 12 objectives were established to reach the first goal of economic empowerment targeting youth ages 10 to 17 and their families.
They were to (1) start 3 neighborhood/family owned businesses; (2) create 24 new jobs, with one-half of them for youth; (3) train 100 youth in business development skills; (4) rehabilitate and occupy 3 vacant commercial properties in the target area; and (5) lower the number of street arrests on blocks occupied by new businesses by 50 percent. To achieve the goal of family support, another five objectives were established targeting at-risk families with preschool children, ages birth to 6 years. They included (1) starting 2 new family centers, (2) instituting monthly home visits to provide parenting education for 200 families, (3) creating 2 new family support networks serving 20 families per network, (4) lowering absenteeism from the family among these 200 families by 60 percent, and (5) achieving less than 10-percent retention rates for first graders from network-involved families. The last 2 of the 12 objectives were established to achieve the goal of mobilizing against violence by targeting children ages 6 to 10 and their families. They included lowering suspension rates for fighting in elementary schools by 70 percent and lowering the number of violent incidents at community and youth centers by 70 percent.

Intervention Strategy: The core strategy was aimed at creating collaborative planning and coordination of specific programs that address juvenile delinquency risk factors. The community collaborative approach was to leverage other programs and resources to address risk factors operating in target neighborhoods. Community mobilization strategies were used to build and support teams of professionals (education, business, human services, law enforcement); parents; residents; and youth to counteract effects of the four risk factors.

Project Description: The project was designed to develop and implement family preservation, violence prevention, and economic training programs.
It also sought to promote revitalization in three depressed neighborhoods, strengthen community organization, provide parenting education through satellite family centers, and promote conflict resolution through nonviolent means. This project focused on changing organizations, agencies, rules, settings, institutions, and established practices. The project also delivered services to clients. Economic development initiatives included fostering successful neighborhood-based family/community-owned businesses and cooperatives to complement ongoing job training and business development projects. Economic development and training components targeted youth as potential employees or owners of local businesses, promoted neighborhood economic growth through business development training, and provided technical assistance to small business owners and potential owners. For example, a series of workshops was provided for business owners and potential owners focusing on (1) identifying the economy of the neighborhood, including the in-flow and out-flow of cash and capital; (2) developing marketable business ideas; (3) developing sound business plans; and (4) marketing business concepts to obtain start-up capital. These efforts were envisioned to increase family and youth economic empowerment and result in two to three new businesses within a year. Emphasis was placed on businesses that were owned by families in the target area, that involved young people in planning, and that employed young people.

Two new family centers were to be established to develop family support networks, sponsor collaborative education workshops, and provide in-home parent education visits. Ineffective family management was to be addressed through development of skills, confidence, support networks, and capacities of at-risk families to enable them to manage their day-to-day lives.
These efforts were seen as helping to create protective factors for youth by (1) stabilizing their home lives; (2) helping parents to promote healthy beliefs and standards; and (3) establishing bonds with others (e.g., parents, prosocial peers, and adults) that reinforce healthy modes of behavior.

Another aspect of the project, mobilization against youth violence, was designed to counteract the risk of early and persistent antisocial behavior, particularly the growth of violent behavior such as fighting among children in elementary school. It aimed to prevent juvenile delinquency through community organization and youth education activities that teach and reinforce nonviolent means of social interaction and conflict resolution. The project also included an intensive year-round, violence prevention campaign that sponsored conflict resolution seminars and organized recreational and social family nights at youth centers. Other program activities included peer mediation, drug and alcohol abuse prevention, parenting training, the establishment of secondary (satellite) family centers within focus areas, and special activities in each neighborhood. Other available services included family social services, comprehensive case management, job readiness training and interviewing techniques, parenting programs on discipline and drug and alcohol awareness, after-school enrichment programs, and family and youth advocacy and outreach. Title V funds supported a full-time community organizer whose efforts were designed to lead to more effective use of agency resources directed toward target neighborhoods and to attract and effectively use additional funds from public and private sources.

Target Area: The target area included Allison Hill, South Allison Hill, and some South Harrisburg neighborhoods.
A multisite strategy was employed to ensure that school-age children in the target area were involved in an intensive, high-quality course in violence prevention before they were 11 years old.

Project Period: July 1995 to June 1997.

Montgomery County, Maryland, adopted a schools-based juvenile delinquency prevention program to address the increase in violent behavior on the part of early adolescent youth in three middle schools. This increase in violence was attributed, in part, to the youths’ lack of self-esteem, leadership capabilities, and involvement with school staff and other students. Decreased school attendance; increasing rates of school suspension; and insufficient school, family, and community resources devoted to the critical period immediately following school hours led Montgomery County to expand the community use of schools as a vehicle for mobilizing community support and involvement in the lives of its young people.

Risk Factors: Montgomery County assigned the highest priority and significance to the need to deal with (1) lack of commitment to school, (2) early and persistent antisocial behavior, and (3) friends who engage in problem behavior.

Resources Assessment: The target community of Wheaton spent 8 years focused on identifying ways to reduce crime, violence, and substance abuse through the Wheaton Neighborhood Network. This was an outgrowth of a Community Partnership Grant from the Federal Center for Substance Abuse Prevention. County officials, school leaders, and heads of justice agencies combined their efforts to attack growing crime and delinquency problems in an area of the county experiencing a rapid influx of immigrant groups and high turnover among families moving into and out of neighborhood schools. Community-oriented policing initiatives under way in Montgomery County combined enforcement and service activities in support of school-based Title V funded activities.
Goals: The overall goal of the project was to reduce disproportionate occurrences of antisocial behavior, violent behavior, and substance abuse among middle school students.

Objectives: Project objectives were established to (1) increase student and parent participation in school, recreational activities, and related skill-building activities; (2) increase student academic success in school; (3) increase positive relationships among youth and between youth and others in the community; and (4) build networks of support for youth through involvement of community members in youth activities such as mentoring programs.

Intervention Strategy: The intervention strategy was to provide more after-school and weekend services for youth and adults; provide leadership training (including conflict resolution, peer mediation, anger management, and parent training); and institute a mentoring program to prevent substance abuse and school disruption and increase school achievement.

Project Description: Montgomery County’s Leadership for Violence Prevention Project provided leadership training, peer mediation, and a variety of after-school activities to increase student commitment to school, provide positive role models and experiences in the world of work, and decrease antisocial and delinquent acts in the school and the community. These included a summer prevocational apprenticeship program with business community partners, residential leadership training resulting in student-inspired and student-created action plans for after-school activities, and implementation of after-school enrichment programming at three middle schools in the Wheaton area of Silver Spring, Maryland. After-school programs initiated at all three schools included interscholastic sports; step clubs (emphasizing group-based, team planning); teen talk (to identify and discuss issues important to students); community services activities; and social skills training through drama and role playing.
Target Area/Group: The target group encompassed 1,000 sixth graders in 3 middle schools (Parkland, Sligo, and Lee) in the Wheaton area of southeastern Montgomery County, Maryland.

Project Period: July 1995 to June 1997.

The Chesterfield County, Virginia, Restitution Through Community Service program was intended to reduce recidivism on the part of youth who come into contact with the juvenile court system.

Risk Factors: Chesterfield County identified a pattern of events warranting the development of delinquency prevention program activities aimed at young people who have come to the attention of the juvenile justice system. Total juvenile violations had increased 51 percent since 1989. Felony assaults were up 121 percent and weapons violations were up 112 percent between 1989 and 1993. Other factors included increases in the number of (1) child abuse cases (e.g., incidents of inadequate parental supervision increased 93 percent from 1990 to 1993); (2) juveniles placed in residential treatment facilities out of the community; (3) juvenile runaways; (4) school failures (e.g., reading failures and drop-outs); (5) teen suicides, pregnancies, and sexually transmitted diseases; and (6) youth crime (particularly assault, substance abuse, and weapons violations) and the number of juvenile cases petitioned to court.

Resources Assessment: Chesterfield County focused on the need to expand the availability of assisted court placement of youthful offenders at work sites throughout the county to perform community service as a condition of their probation.

Goals: The goal was to intervene in the lives of young people at the point of their first arrest for delinquent behavior so that they would not commit delinquent acts in the future.

Objectives: The Chesterfield County project emphasized the establishment of community service locations to increase participation in restitution and court-ordered community services in conjunction with Virginia’s Comprehensive Services Act.
It also supported developing diversion and intervention programs; mobilizing community support for families facing disruptions due to loss and change; engaging the community in providing positive opportunities and role models for delinquent youth; and setting constructive behavioral boundaries for young people on the brink of establishing a pattern of delinquent behavior.

Intervention Strategy: Alternatives to traditional handling of first-time young offenders through use of intermediate sanctions, restitution, and community service. The Title V project aimed to increase the number of community service sites.

Project Description: The thrust of this project was to utilize community service programs as a form of court-ordered restitution for offenders charged with less serious crimes in addressing individual risk characteristics and to develop individual and community resources. Activities included establishing community service agreements with 50 agencies, developing guidelines for use of community service in lieu of traditional adjudicative dispositions, training agency service site supervisors in techniques for working with youth, creating a site service directory listing task descriptions, identifying characteristics of youth who are more likely to be positively influenced by the program, and placing youth with service agencies. The Title V program had provided service to 95 youth who performed 3,912 hours of community service. Only 5 of these 95 youths had committed another crime, and the crimes were considered minor. As a result of the program, the number of sites in which to place the juveniles had increased from 20 at the beginning of the Title V project to 40 at the time of our visit. Some juveniles have obtained jobs as a result of their community service, while some others have returned to the program as volunteers to assist in implementing project activities.
Target Area/Group: This group comprised 250 adjudicated youth 17 years old and younger who lived in Chesterfield County, a large suburban county in the Richmond metropolitan area.

Project Period: July 1995 to June 1997.

Building a Better Bayside was a school-based program of prevention activities intended to reduce peer conflict, strengthen family management, and reduce substance abuse among students and their families at Bayside High School, Bayside Middle School, and adjacent neighborhood communities in Virginia Beach, Virginia. A recent increase in crime, including gun, drug, and juvenile gang-related activity, focused local officials’ attention on the need for delinquency prevention programming. Conflict among youth from adjacent neighborhoods near the intermediate and high schools in the Bayside school district led to the schools’ selection for both prevention and law enforcement activities. Virginia Beach has linked its efforts in community-oriented policing in support of Title V delinquency prevention efforts in these same neighborhoods as part of its multiagency approach to problem-solving planning.

Risk Factors: Virginia Beach has experienced a high rate of teen pregnancy and a dramatic increase in juvenile arrests for serious offenses, including homicide, rape, robbery, aggravated assault, weapons violations, and sex offenses (up 83 percent from 1988 to 1994). Risk assessments conducted under the direction of the PPB identified five sets of risk factors to be addressed: (1) early and persistent antisocial behavior, (2) lack of commitment to school, (3) early initiation of problem behavior, (4) friends who engage in problem behavior, and (5) family management problems.

Resources Assessment: The Virginia Beach prevention project drew upon assessments made by the Juvenile Crime Strategies Task Force, which was made up of nearly all the human services and public safety agencies serving the greater Virginia Beach area.
The City Council established the Youth Services Coordinating Council recommended by the task force. Resources available through the school system and family services agencies were leveraged through the Title V project. Eleven agencies were participating in the project at the time of our visit.

Goals: The project’s goals were to reduce the incidence of juvenile crime and delinquency in the target area by changing attitudes and behavior from violent conflict to those favorable to employing alternative dispute resolution methods, as measured by referrals to peer mediation and peer mentoring programs and demonstration of appropriate goal-setting skills.

Objectives: Better Bayside’s objectives were to (1) decrease the incidence of antisocial behavior (such as fighting, disruptive behavior in school, and peer disputes) by referral to peer mediation for conflict resolution; (2) increase acceptance by school faculty and administration of alternative dispute resolution techniques intended to result in increased referrals of potential problems to peer mediation for conflict resolution (e.g., to reduce verbal and physical fights); (3) increase the number of students using peer mentors and peer mentoring contacts as resources for information and support at school to decrease “in-school suspensions”; and (4) increase use of appropriate goal-setting skills by students. The objectives were intended to lead to a decrease in court referrals for antisocial behavior among students exposed to peer mediation and conflict resolution training.

Intervention Strategy: Peer mediation, peer mentoring, and conflict resolution.
Project Description: Building a Better Bayside was an incentive program with the ultimate aim of reducing peer conflict, strengthening family management, and reducing substance abuse in the target area through five prevention activities: (1) peer mediation and confrontation skills training; (2) peer mentoring—training students to be role models for other students; (3) group substance abuse counseling and training for parents and teenagers; (4) goal setting—“Going for the Goal” (a 10-part program that teaches how to set goals); and (5) a CARE Youth Leadership Camp program to promote volunteerism, community conscientiousness, community responsibility, and productivity. Examples of project activities included a summit on teen pregnancy involving 122 participants and a Leadership Camp involving over 200 student campers aged 6 to 13 and counselors from Bayside High School. The camp focused on building self-esteem and teamwork. Sixteen trained teenage mediators were working with other youth and teaching them ways to resolve conflicts without resorting to violence.

Target Area/Group: The target area encompassed Bayside Intermediate and High Schools and adjoining neighborhoods and involved youth in early and mid-adolescence and their families.

Project Period: April 1995 to March 1997.

The Norfolk, Virginia, Effective Prevention Program focused on elementary school age youth experiencing behavioral difficulties or school misconduct that made them candidates for suspension. The program provided alternatives to traditional 1- or 2-day out-of-school suspensions to students from Norfolk’s public schools. Candidates for the program attended either Saturday School, which emphasized a prevention curriculum, or the Alternatives to Violent Behavior Program (AVBP) operated by the James Barry Robinson Center, a nonprofit agency.
Risk Factors: Norfolk identified four risk factors toward which their prevention project was directed: (1) early and persistent antisocial behavior, (2) academic failure at the elementary school level, (3) alienation and rebelliousness, and (4) early initiation of problem behaviors.

Resources Assessment: Eight community organizations (such as the Norfolk Youth Services Citizen Advisory Board, Norfolk Interagency Consortium of Services to Youth, and the Human Services Council) played a role in the development of the Title V project. These organizations provided for coordination and integration of prevention activities directed at deficiencies in protective factors that result in (1) lack of bonding with positive role models, (2) lack of involvement in positive leisure activities, (3) lack of bonding (attachment) to school, and (4) lack of prosocial opportunities and academic success.

Goals: The goals were to reduce the number of out-of-school suspensions at the elementary school level by 25 percent and, by offering students training on alternatives to violent behavior, to increase safety and security and reduce recidivism for the same offense among Saturday School program participants by 60 percent.

Objectives: Norfolk established five objectives for its Title V project: (1) reduce the number of suspensions at the elementary school level; (2) provide students with coping skills to resolve conflicts in positive ways; (3) increase parental involvement in the academic and disciplinary life of their children; (4) provide students with alternatives to violent behavior; and (5) strengthen the partnership between home and school.

Intervention Strategy: Alternatives to out-of-school suspension included a Saturday School option and provision of transportation to selected students to attend the AVBP.
At the time of our visit, Norfolk was developing programs and services to meet the needs of acting-out youth by establishing mentoring programs to provide positive role models; incorporating conflict resolution, decision-making, and life skills into existing recreational programming; targeting tutoring programs at children failing academically; and expanding recreational opportunities for all youth.

Project Description: The Norfolk Effective Prevention Program was directed at elementary school students who were candidates for suspension from school. The program offered two components: the Saturday School program and the AVBP. Twelve Norfolk schools participated in the Saturday School program, which was available to 36 elementary schools. Parents were required to attend a 1-hour session, which provided them with information about schools and services available in the community and how to access those services, including where they could get additional help. While the parents were in training, the child participated in a 3-hour session that focused on the misbehavior that led to the referral to the Saturday School and assisted the child in identifying ways to eliminate these problems. The effort grew out of a desire to increase the use and availability of school resources, e.g., to keep the schools open on nights and weekends to meet community needs.

The second component of the Norfolk Effective Prevention Program provided transportation for selected students to the AVBP, which helped middle school students who exhibited tendencies toward violent behavior, such as fighting and hitting. The program operated in eight middle schools. Students were to be released from school during the school day and transported to another location, where they would receive intensive training on ways to reduce violent and combative behaviors. The program commenced operation on April 29, 1995.
By December 1995, 65 students had participated in Norfolk’s Effective Prevention Program, 44 of them in the Saturday School program which became available in September 1995, and the remaining 21 students in the AVBP. Virginia state officials informed us that 70 percent of the participants completed the AVBP.

Target Area/Group: Elementary students recommended for suspension from school due to non-law-related violations who had not become constant and consistent discipline problems were candidates for the Saturday program, while students with more serious violations, who had been issued suspensions and who continued to exhibit aggressive behavior, became candidates for transportation assistance to the AVBP. Students were drawn from 36 elementary schools in the city of Norfolk.

Project Period: April 1995 to March 1997.

Ann H. Finley, Senior Attorney
Pursuant to a legislative requirement, GAO provided information on the Juvenile Justice and Delinquency Prevention Amendments Act's Title V incentive grant program for local delinquency prevention, focusing on: (1) the program's status and what types of projects are being funded; (2) the number of states and local governments that applied for Title V funds; (3) the amount of 1994 and 1995 grants that have been awarded as of December 31, 1995; (4) the sources and amounts of matching funds committed to local delinquency prevention projects; (5) whether eligibility requirements have affected Title V participation; and (6) other types of funding that have supported local delinquency prevention activities. GAO found that: (1) as of March 1996, $29.6 million of the $33 million in 1994 and 1995 Title V grants had been awarded to 54 jurisdictions and an additional $1 million was awarded for 6 grants to local jurisdictions under the Safe Futures Program; (2) of the 51 jurisdictions reviewed, 45 awarded $18.9 million in Title V subgrants to local governments to support 277 delinquency prevention projects; (3) these subgrantees spent about $3.6 million of their funds as of December 1995; (4) 44 jurisdictions received $17.2 million in Title V matching funds for 1994 and 1995; (5) 7 jurisdictions did not award subgrants; (6) the 2-year total funding for the 277 local delinquency prevention projects was about $36 million; (7) most of these projects addressed delinquency affecting youth in early or middle adolescence; (8) over 75 percent of the projects emphasized the prevention of delinquent activity, attempted to reduce delinquent behavior and recidivism, and addressed multiple risk factors; (9) most projects used community-based outreach intervention programs and services as well as some sort of parent training in conflict resolution and after-school program; (10) local governments generally reported that the act's core requirements were not a barrier to local government participation in
Title V program activities; (11) while 19 jurisdictions devoted $319 million in funds to support delinquency prevention activities in 1995, 31 jurisdictions did not know how much local or private funding was devoted to these activities; and (12) in 1995, nine other federal agencies reportedly spent $4.3 billion to support juvenile delinquency prevention, juvenile justice, or youth-related programs.
The wastes in Hanford’s 177 underground tanks are a by-product of more than 50 years of nuclear weapons production. (Fig. 1 shows a typical tank farm under construction.) Storing, managing, and cleaning up these wastes pose many challenges. Some tanks, for example, contain flammable gas or potentially combustible organic compounds. Injected into the tanks as liquids, the wastes have assumed a variety of forms as they have settled and recombined over the years. These forms include sludge and a hard “saltcake” that may have to be pulverized before it can be removed from the tanks. Wastes in at least 67 single-shell tanks have leaked or are assumed to have leaked into the ground as the concrete-and-steel structures have deteriorated.

DOE’s program for addressing these wastes, called the Tank Waste Remediation System, calls for a series of actions to chemically characterize the waste in the tanks, remove it from the tanks, and prepare it for permanent disposal; some of the waste will remain dangerously radioactive for several hundred thousand years. This program is expected to cost $36 billion over its life cycle.

Waste characterization is the first major action. Since fiscal year 1989, the first year for which reliable cost data exist, DOE has spent about $260 million on characterization. The purpose of characterization is to provide sufficient information for safe storage of the waste in the tanks while awaiting the development of processes for remediating it, as well as for designing the steps of the remediation process itself. These steps include removing the wastes (retrieval), separating them into low-level and high-level portions (pretreatment), treating them (vitrification), and preparing them for permanent disposal.

DOE began characterizing the waste in Hanford’s tanks in 1985. Since then, its efforts have repeatedly fallen behind schedule. DOE’s March 1994 schedule resulted from the renegotiation of an agreement originally signed in 1989.
This Tri-Party Agreement with Washington State's Department of Ecology (Ecology) and EPA initially called for completing the characterization of single-shell tanks by 1998. DOE subsequently found itself unable to comply with the characterization deadline and renegotiated the agreement. The revised Tri-Party Agreement calls for characterizing all 177 tanks for retrieval, pretreatment, treatment, and disposal by September 1999. In a separate agreement to address concerns raised by the Defense Nuclear Facilities Safety Board, an independent executive-branch oversight body, DOE agreed to another set of characterization requirements related primarily to the safe storage of the waste. Under this agreement, DOE was to characterize the 54 "watchlist" tanks—those with known or suspected safety problems, such as potential flammability—by October 1995 and to sample and assess safety conditions associated with all 177 tanks by October 1996.

DOE and Westinghouse have made limited progress in meeting the agreed-upon deadlines for characterization. Despite some recent improvements in sampling capability, Westinghouse remains behind schedule in taking samples from the tanks. Its responses to reporting requirements have consisted mainly of summarizing previously known information about the tanks' contents. In September 1995, DOE acknowledged that the Tri-Party Agreement and Safety Board characterization deadlines could not be met and proposed a two-phased approach for characterizing tank wastes that would extend characterization activities well beyond 1999. As of the time we completed our work, however, DOE had not formally notified Ecology that it could not meet the Tri-Party Agreement deadlines.

DOE and Westinghouse have made limited progress in meeting the Safety Board and Tri-Party Agreement deadlines (see fig. 2). The Safety Board agreement called for taking and analyzing about 216 of 408 core samples by September 30, 1995. As of that date, Westinghouse had completed 42.
The Safety Board agreement also called for taking core samples from all 54 watchlist tanks by October 1995; by that date, Westinghouse had obtained core samples from 10 watchlist tanks. At Westinghouse's current estimated sampling pace, all 408 core samples will not be done until 2002—more than 5 years after the agreement's October 1996 deadline. Similarly, while the revised Tri-Party Agreement calls for full characterization of tank wastes by September 1999, DOE's recent planning documents show that at the current sampling pace, DOE does not expect to meet this requirement until September 2004.

Westinghouse has shown some improvement in its ability to take core samples. In fiscal year 1994, Westinghouse completed only 3 of 13 planned core samples, but in fiscal year 1995, it completed 39—the same number it had estimated it would be able to take. However, the 39 were concentrated in fewer tanks than Westinghouse had planned. Westinghouse currently estimates that it will be able to take about four core samples per month through March 1996 and five per month thereafter.

During fiscal years 1994 through 1995, Westinghouse also used other types of samples besides core samples to augment its understanding of tank wastes, completing 177 of 210 planned samples. These other methods yield results that are generally considered less comprehensive than core samples because they usually do not involve a top-to-bottom sampling of the waste. However, their results can supplement what is learned from core samples by providing information on tank vapors and liquids that core samples may not capture. Even with these other efforts, no tank has yet been sufficiently characterized either to meet the Safety Board's sampling requirements or to support any of the subsequent steps in the waste treatment process. The Director of DOE's characterization division said that he was unable to estimate when the characterization of any tank would be completed.
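The 2002 projection is consistent with simple arithmetic on the sample counts and sampling pace reported above (a rough check by the reader, not an official DOE calculation):

```latex
% Core samples remaining as of September 30, 1995:
\[ 408 - 42 = 366 \ \text{core samples} \]
% At the estimated sustained pace of about five samples per month:
\[ \frac{366}{5} \approx 73 \ \text{months} \approx 6 \ \text{years} \]
```

Six years from late 1995 puts completion at roughly 2001 to 2002; the slower four-per-month pace planned through March 1996 shifts the result only slightly.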
DOE and Westinghouse officials believe that characterization difficulties are not affecting the safe storage of tank wastes. According to the assistant manager of DOE's tank waste remediation program, Westinghouse has placed controls on tank farm operations to reduce the risk of an unintentional release. The controls include using sparkless equipment and avoiding certain types of drilling procedures. However, these controls have made it more difficult for DOE to maintain its desired sampling rate.

DOE and Westinghouse characterization managers have acknowledged that they will not meet the Safety Board or Tri-Party Agreement deadlines. They have prepared a draft revision of the original implementation plan agreed to with the Safety Board. This draft, which does not specify a completion date, is discussed in more detail in the next section. DOE and Westinghouse also acknowledged the need to propose changes to the Tri-Party Agreement, but at the time we completed our work, they had not yet submitted a formal proposal to Ecology or EPA. However, Ecology has informed DOE and Westinghouse that in the state's view, the inability to obtain adequate samples does not provide sufficient grounds for renegotiating the characterization milestones of the Tri-Party Agreement. Ecology expects DOE to formulate a plan to compensate for past inadequacies and to meet commitments under the current Tri-Party Agreement schedule.

The revised Tri-Party Agreement requires DOE to submit, for Ecology's approval, characterization reports on a certain number of tanks each year through 1999. All 23 of the reports submitted through fiscal year 1994 were based mostly on existing "historical" data about the wastes, supplemented with limited sampling results. The 30 reports submitted on September 30, 1995, were also based primarily on historical information, although they contained some results of samples taken since May 1989. Reaction to the value of these reports is mixed.
DOE accepted the reports, but Ecology, the body that must approve them, has criticized their contents, including Westinghouse's extensive reliance on historical information. Rather than approve the 23 reports submitted in fiscal years 1993 and 1994, Ecology received them on the condition that Westinghouse develop additional characterization data and resubmit the reports for approval within 2 years. In connection with the fiscal year 1995 reports, Ecology's characterization team leader said that 25 of the 30 reports were inadequate because they contained mainly historical information, modeling results that had not been verified, and limited analytical results. He considered the results of the analyses of the most recent samples insufficient because they were limited to determining whether tank wastes were being maintained in a safe condition and did not contribute to any remediation step.

While many factors have contributed to the slow pace of the characterization effort, the primary reasons for slow progress are that (1) DOE and Westinghouse have not yet determined how to successfully draw reliable samples and characterize the waste and (2) managerial weaknesses in the characterization program and other aspects of the tank farms have exacerbated delays and contributed to operational inefficiencies.

DOE and Westinghouse have been unable to develop and implement a characterization approach that meets the requirements of the Tri-Party Agreement or their commitments to the Safety Board. Three different approaches have been attempted since the Tri-Party Agreement was signed in 1989, and in each case, the approach has not been successfully implemented.
As a result, DOE and Westinghouse are still having difficulty answering fundamental methodological questions that have existed since the characterization program began: how to take reliable samples, what types of and how much sampling data to gather, and how to reliably predict the waste constituents on the basis of sampling and other data.

Limited progress in taking samples has led DOE and Westinghouse to search for another characterization approach that would satisfy the requirements of the Tri-Party Agreement as well as their commitments to the Safety Board. The latest approach, proposed in September 1995, aims to evaluate the program on a continuing basis while taking samples and collecting data, rather than following a fixed schedule. DOE now views sampling as an iterative process in which the total number of samples needed depends on the results of the initial samples taken. This viewpoint is in contrast to DOE's earlier strategy, which required a fixed number of samples.

DOE's latest characterization strategy has two phases. Phase one concentrates primarily on ensuring safe storage while demonstrating the approach's overall validity. More specifically, phase one includes:

- reducing the number of core samples from 408 to 109, which would be done by grouping tanks believed to contain similar wastes and taking core samples from the 25 to 30 tanks considered representative of the various groups;
- supplementing data obtained from the reduced number of core samples with historical data, temperature and moisture measurements, and data obtained from other types of samples (such as vapor and auger samples); and
- using computer models to analyze the various data in order to predict the tanks' contents and evaluate the risk of potential combustion.
Westinghouse characterization project officials acknowledged that if assumptions about the validity of this approach are not successfully demonstrated during fiscal years 1996 and 1997, more core samples and other types of samples may be needed in phase one. Phase two of Westinghouse's approach would focus primarily on the characterization of tank wastes to support the treatment and disposal steps. The time frames, funding requirements, and sampling strategies for phase two are currently undefined. After evaluating the results of phase one, Westinghouse plans to formulate and implement phase two beginning in fiscal year 1997.

The September 1995 characterization proposal, while more in line with what DOE can realistically expect to accomplish in the next several years, has generated concerns about whether it will provide sufficient characterization information to proceed with remediation efforts. These concerns have been expressed by regulators, advisory bodies, and other persons involved with the remediation effort. Among the main concerns about the adequacy of the proposed approach are whether historical and sampling data can be reconciled, whether the computer models will reliably predict actual quantities of specific tank wastes, and whether the information being developed will be thorough and accurate enough to proceed with the various steps in retrieving and treating the wastes. Having accurate knowledge of the amounts of waste components is important in choosing pretreatment and treatment technologies and designing facilities.

Westinghouse's new approach would rely to a significant extent on historical data for purposes of characterization. However, as far back as October 1991, DOE recognized that its historical data were incomplete and unreliable.
The Safety Board has expressed concern that historical data are not complete, reliable, or representative because inadequate operational controls have resulted in limited information about (1) the specific types of waste placed in tanks and (2) the chemical processes occurring in the tanks. DOE and Westinghouse have acknowledged that such disparities exist and plan to determine how to resolve them in phase one of their proposed approach.

Computer models are key components in Westinghouse's approach to predict tank waste constituents and evaluate potential safety problems. However, their reliability is largely untested. Westinghouse plans to test the models' reliability during a "demonstration" period in fiscal years 1996 and 1997, when some tank wastes will be characterized and compared to the models' predictions. The Safety Board, among others, has raised concerns about reliability. The Board's technical staff concluded that "Significant portions of the new strategy are based on simplified models and simulants that may not adequately represent tank wastes."

The limited information available to date shows examples of substantial differences between the models' projections and the data obtained through core samples. In September 1995, Westinghouse reported the results of 144 possible comparisons between the models' projections and the core sample results in 12 tanks. These comparisons included waste constituents, such as chromium and phosphate, that are important in determining the volume of glass needed for the vitrification process. The core sample results were at least three times higher or lower than the models' projections in about 25 percent of the comparisons. In addition, for another 20 percent of the comparisons, the sample results showed the presence of such constituents as cesium 137, phosphate, and total organic carbons, while the models' projections indicated that these constituents did not exist in the tanks.
Considerable uncertainty exists about the characterization information needed to design methods and facilities for cleanup. Westinghouse is currently developing criteria known as data quality objectives; these criteria specify what information will be needed for each step in remediating tank wastes. Although Westinghouse's approach is based on obtaining sufficient data from a reduced number of core samples, in 1992 Westinghouse told us that between 2 and 14 core samples may be needed to adequately characterize a single tank.

We asked an independent nuclear engineering consultant to review Westinghouse's approach to determine if it will produce sufficient information about the waste to meet DOE's objectives. His review raised concerns about the reliability of the information that will be developed for most of the tanks under Westinghouse's proposed approach. The consultant concluded that the proposed approach involving limited sampling may yield adequate characterization information for about 31 single-shell tanks and several double-shell tanks believed to contain waste that is relatively homogeneous, but that the approach may be considerably less reliable for approximately 135 other tanks. For those tanks, the consultant concluded, more core samples than originally planned, rather than fewer, may be needed to reconcile disparities between the tanks' waste contents as derived from sample analyses and as deduced from historical data.

The technical complexities associated with characterizing tank wastes highlight the need for an effective management system for detecting and addressing problems. However, such a system has been lacking in the characterization program. Instead, technical and safety problems have gone uncorrected for considerable periods, either because managers were unaware of the problems or because they were slow to take action on problems they knew about.
In an April 1995 letter to the Safety Board, the Secretary of Energy acknowledged that numerous problems affecting the characterization program have been caused by ineffective management. We found instances involving operational and safety-related problems in which Westinghouse or DOE managers were initially unaware of circumstances that caused delays or increased safety risks. For example:

- Before being placed into service, a new rotary-mode sampling truck was inspected in July 1994, 3 months later than planned. Westinghouse inspectors reported that welds on the truck did not meet design code requirements. Consequently, the truck was inoperable for an additional 3 months; its unavailability contributed to Westinghouse's obtaining considerably fewer rotary-core samples in 1994 than originally planned. The report's authors stated that Westinghouse management lacked commitment in identifying and tracking such deficiencies.

- Westinghouse managers were not aware that workers were operating the push-mode core sampler without an operable instrument called a bottom detector to prevent damaging or drilling through the bottom of a tank. When this practice was reported in February 1995, the report stated that the workers involved did not have adequate knowledge of safe sampling procedures. As a result, push-mode sampling was halted for more than 2 weeks while safety procedures were reevaluated and workers received additional training, according to Westinghouse's deputy operations manager for characterization.

We also found instances in which management was aware of problems with characterization—or with tank farm maintenance activities affecting characterization—but was slow to address them. For example, in 1993, virtually all sampling activity was suspended for more than 6 months following a safety violation in which a maintenance worker contaminated himself and others while using unapproved procedures to unclog a blocked drain.
This incident, referred to as the "rock-on-a-rope" occurrence because of the extremely primitive methods used, was the culmination of a series of incidents that indicated deficiencies in operations at the Hanford site, including inadequate procedures and personnel's lack of awareness of important technical procedures. DOE took limited action on these incidents until this substantial event occurred.

In another example, if wind speeds exceed 15 miles per hour—a common occurrence at Hanford—core samples cannot be taken unless sampling equipment and operations are protected from the wind. Although sampling delays associated with Hanford's windy conditions have been apparent for years, no solutions were advanced until January 1995, after the Safety Board had suggested many times for more than a year that a wind barrier be fabricated.

These examples of management's ineffectiveness are supported in several broader studies. In October 1990, for example, the Safety Board issued a statement concluding that management's attention to the characterization effort was inadequate. More than 2 years later, an internal DOE review found that there were "significant weaknesses in the safe control, adequate management, and technical implementation of field, laboratory, and supporting project activities." In January 1995, about 2 years later, DOE acknowledged to the Safety Board that confusion still existed over who, at the management level, was responsible for managing and coordinating various characterization activities. One review cited, for example,

"the loose and ineffective structure of the technical and administrative organizations assigned to characterization of the waste tanks. That has caused numerous delays for relatively trivial reasons that could have been readily overcome by a strong and determined manager with sufficient authority. . . ."

DOE's problems in keeping characterization on schedule affect more than just compliance with various agreements.
Other problems include potential cost increases for the characterization effort, inefficient use of a laboratory, and uncertainty about carrying out other aspects of the remediation program.

Delays in characterizing tank wastes raise the likelihood that DOE's most recent estimates of total characterization costs are understated. Since fiscal year 1989, the earliest date for which reliable cost data are available, DOE has spent about $260 million on characterization, and in August 1995, it estimated that it would need to spend at least $569 million more through fiscal year 1999 to meet the Safety Board and Tri-Party agreements. Characterization work beyond fiscal year 1999 will most likely include sampling the majority of the 177 tanks to obtain information to support retrieval, pretreatment, treatment, and disposal of the wastes. The amount of additional program funds needed after fiscal year 1999 to support these characterization activities has not been estimated. The director of DOE's characterization division said that DOE will not know for several years what these program costs are likely to be.

Delays in sampling the wastes have also affected utilization of the 222-S analytical laboratory, a facility Westinghouse operates at Hanford. This laboratory, which was recently expanded to deal with the expected volume of incoming samples, has a staff of more than 143 full-time-equivalent positions and a budget of nearly $12 million. Westinghouse anticipated that about two-thirds of the laboratory's capacity would be needed to analyze the samples. However, during fiscal year 1995, the lower-than-expected volume of samples required only 25 percent of the total capacity available. Laboratory personnel used an additional 20 percent of capacity during the year in various activities, such as developing internal procedures and process controls. Consequently, more than half of the laboratory's capacity went unutilized.
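The unused-capacity figure follows directly from the percentages above:

```latex
% Actual fiscal year 1995 usage of the 222-S laboratory:
\[ 25\%\ \text{(sample analysis)} + 20\%\ \text{(procedures, process controls)} = 45\% \]
% Leaving idle:
\[ 100\% - 45\% = 55\%\ \text{of capacity unutilized} \]
```

This compares with the roughly two-thirds of capacity Westinghouse had anticipated devoting to sample analysis alone.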
According to the analytical laboratory's performance documents, sample analysis increased to 53 percent of total capacity in the first 2 months of fiscal year 1996 because more samples had recently been obtained.

The most significant effect of the delays may be on the rest of the steps in the remediation process—retrieval, pretreatment, treatment, and disposal. These other steps depend on the adequacy and quality of characterization information. In particular, data on the quantities and chemical properties of such waste components as chromium, phosphate, cesium 137, zirconium, and plutonium are important for determining the most efficient pretreatment technologies and the design of treatment facilities. Insufficient or untimely characterization information could either delay the construction of those facilities or cause construction to proceed on an inadequate basis, increasing the risk of costly errors.

DOE and Westinghouse do not share the view that limited characterization information could jeopardize the success of subsequent steps in the remediation process. Despite slower-than-expected progress on characterization, DOE plans to begin testing equipment for retrieving tank wastes in fiscal year 1996 and to begin designing treatment facilities the following year. According to the assistant manager of DOE's tank waste remediation program, DOE believes that (1) current knowledge is sufficient to proceed with the initial facility design and retrieval of selected wastes and (2) additional characterization information will be available before the design reaches a critical phase. Ecology's position is that sufficient data to begin designing treatment facilities currently exist, but if DOE is unable to characterize tank wastes at its expected rate, the lack of characterization information could ultimately jeopardize the success of the remediation program.
Rather than constructing and operating its own facilities to treat the tank waste, DOE is considering privatization as an alternative approach. Under such an approach, a company or a consortium of companies from the private sector would finance, design, build, and operate pretreatment and treatment facilities and deliver the finished product—in this case, vitrified waste encased in stainless steel containers—to DOE for a fee. DOE expects this approach to save billions of dollars because the potential for innovation in the private marketplace could lead to greater efficiencies and improved performance. Authorization to pursue this approach was obtained from the Secretary of Energy in late September 1995. In November 1995, DOE drafted a request for proposals to be reviewed by interested parties and expects to issue this request in February 1996.

Whether or not DOE moves ahead with privatization, it will be responsible for providing the characterization information necessary to proceed with remediating the tank wastes. DOE's current strategy of proceeding with limited characterization information could increase the risk that facilities may not perform as needed or may need costly modifications to perform safely and efficiently. For example, DOE has conflicting data on the quantities of key waste constituents, such as chromium and phosphate, that affect the quality and durability of the vitrified glass product in which the waste will be immobilized. The independent nuclear engineering consultant we asked to review the program believes that without more reliable information on these and other elements, proceeding to construct the facilities is risky because they (1) may be built with insufficient capacity to process wastes containing greater-than-expected quantities of certain components or (2) may be built with excess capacity, resulting in needless expense.
In our previous work on DOE's waste-processing facilities, we found that DOE had experienced major start-up problems, cost increases, and schedule delays caused in part by a "fast track" approach in which construction began before major technical uncertainties were resolved.

One performance evaluation for the period concluded that the characterization program

"has been unsuccessful in demonstrating tangible results on waste characterization; total program performance is considered a significant deficiency for the evaluation period. Limited progress has been demonstrated on field sampling improvements, technical basis development, or program optimization through process and productivity improvements. Adverse cost and schedule performance during the evaluation period indicate a strong probability to exceed available program funds without corrective action."

DOE and Westinghouse have begun some efforts to bring greater management control to the characterization effort, which had four different DOE managers between August 1994 and July 1995. In February 1995, DOE changed the effort from a "program" to a "project" with a single manager. The Westinghouse project manager reports directly to the Westinghouse vice president for tank waste remediation. DOE and Westinghouse also clarified the lines of reporting accountability within the program and increased the amount of time that managers spend in the field observing characterization activities.

Other accomplishments that DOE and Westinghouse reported over the past 8 months include the completion of a variety of technical documents outlining DOE's new characterization strategy; characterization criteria, called data quality objectives, describing tank waste safety, disposal, and historical data requirements; historical tank content estimate reports for all tanks; and 42 vapor tank sampling and characterization reports to address noxious vapor concerns.
To enhance their capability to sample tank wastes, DOE and Westinghouse deployed three rotary-mode core sampling trucks and placed into operation an X-ray imaging device to provide real-time data on the recovery of core samples. These changes hold some promise for improvement, but it is too early to tell if they fully address the operational deficiencies and management weaknesses that have plagued the characterization program to date. As one assessment noted:

"Information provided by the contractor through the Data Quality Objective (DQO) process has been insufficient to determine when a tank is fully characterized and the need for further sampling is no longer required. . . . The root of this problem is a lack of adequate discipline in the definition of characterization needs and objectives, and the subsequent operations executed to accomplish those needs/objectives. The result has been excessive cost due to inefficient sampling and an inadequacy of data required to meet DQOs."

After more than 10 years and about $260 million invested in trying to characterize the tank wastes at Hanford, little definitive progress has occurred. Disagreement still exists over how much and what kind of characterization data are needed to reliably predict actual quantities of waste constituents and build appropriate treatment facilities. Inadequate management attention has impeded solutions to these problems. DOE's current proposal raises questions about whether enough characterization information will be available to build effective facilities for retrieving the wastes and preparing them for permanent disposal. If the information proves to be inadequate, further technical problems and cost overruns are likely, jeopardizing the success of the overall program and increasing the potential that funds may be used unwisely. All parties, including DOE, Westinghouse, potential private contractors, and the Congress, need further assurance that the characterization program has a sound technical foundation.
Answers are needed to such questions as:

- how much sampling and what kinds of sampling methods are sufficient to reliably characterize a tank's contents;
- how to reconcile disparities between existing data on tank contents and actual waste sample data; and
- how much characterization information is needed before the design and construction of pretreatment and treatment facilities should begin.

Without this information, it will be difficult to reliably predict when the overall program will be done or how much it will cost. Furthermore, these uncertainties could undermine the savings DOE expects to realize by privatizing the tank waste remediation program.

To ensure that Hanford's tank waste characterization program will provide a sound foundation for designing and building waste treatment facilities, we recommend that the Secretary of Energy commission an independent review of the characterization program, using an organization such as the National Academy of Sciences, to resolve questions about the technical adequacy of Hanford's characterization strategy. The review should focus on determining (1) how much and what kind of information is sufficient to reliably characterize the tank wastes and predict the quantities and conditions of the waste constituents and (2) the amount and quality of characterization information needed for DOE to proceed with the design and construction of waste treatment facilities.

To ensure that funds for the overall tank waste remediation program are spent as wisely as possible, we recommend that the Secretary of Energy defer funding the construction of pretreatment and treatment facilities until (1) the technical adequacy of the characterization program has been confirmed or established and (2) sufficient waste characterization information is available to reliably define the requirements of those facilities.
We provided a draft of this report to DOE, the Westinghouse Hanford Company, and the Washington State Department of Ecology for their review and comment. We discussed the report with officials from DOE, Westinghouse, and Ecology, including the assistant manager for DOE's Tank Waste Remediation System, the director of DOE's characterization division, and the director of DOE's safety division, as well as the director of Westinghouse's tank waste characterization project and the ecology coordinator for Westinghouse's Tank Waste Remediation System. Overall, the officials agreed that the report was accurate and factual; however, DOE, Westinghouse, and Ecology disagreed with several aspects of the report, including the tone and substance of our conclusions and recommendations. In addition, DOE, Westinghouse, and Ecology provided annotated comments on technical aspects of the draft. We have incorporated those comments where appropriate.

DOE said that while the report accurately describes past difficulties with the characterization program, it does not adequately recognize the performance improvements accomplished since February 1995. DOE provided a list of accomplishments that included developing program strategy documents, completing tank safety analyses and tank characterization reports, performing laboratory testing of wastes, acquiring new drilling equipment, and increasing the number of samples taken. In determining DOE's progress, we used the criteria in the agreements DOE has signed with Washington State, EPA, and the Safety Board. We also developed and compared data on waste samples planned and accomplished, since sample data are essential for completing characterization. In addition, we documented other activities DOE accomplished that could help in meeting characterization objectives and included many of them in our report.
We believe that we have adequately emphasized that DOE recently has made more progress in taking and analyzing samples and in completing other characterization activities.

DOE and Westinghouse were concerned that our report could be interpreted to mean that because of problems with the characterization program, DOE could not ensure that the tanks are safe. DOE and Westinghouse officials believe that the tanks are safe because of the controls they have put in place over tank farm operations, including sampling activities. These controls are designed to prevent harmful releases due to such conditions as high temperature and/or flammability risks. We did not evaluate the level of safety associated with the tank wastes and did not intend to imply that the tanks are unsafe because of deficiencies in the characterization program. We revised our report to clarify that DOE believes the tank wastes are being safely stored. However, our report does explain that the Safety Board has directed DOE to conduct additional characterization of tank wastes to ensure that they are safely stored. Until those characterization activities are complete and the waste constituents are better understood, DOE has placed controls over the tanks to provide an added level of assurance.

While DOE officials agreed with the value of having an outside technical review of the characterization program, they noted that DOE had recently begun such an effort with one of its contractors, Pacific Northwest National Laboratory. Specifically, DOE is planning to fund a study team to help resolve critical uncertainties related to the safety of the tank wastes. The team will also develop an approach to integrate data needs to support all aspects of the tank waste program, from current operations to treatment and disposal of the wastes. This proposal was drafted in December 1995, after we completed our field work. 
On the basis of the initial documentation DOE provided, it appears the study team will focus on resolving uncertainties related to tank safety rather than evaluate the viability of the characterization program. We continue to believe that a group, independent of ongoing tank waste and related DOE activities, needs to address the technical feasibility of the characterization strategy. DOE and Westinghouse disagreed with our view that difficulties in characterizing the wastes could affect the remaining steps in the disposal program, including the design and construction of facilities to treat the wastes. DOE said that at sampling rates achieved since March 1995, it expects characterization of the wastes to be complete by 2004, which DOE believes is adequate to support the disposal program. In addition, DOE said that enough information exists now to proceed with the design and construction of treatment facilities. DOE expressed concern that any deferral of funding for remediation could make it difficult to keep the project moving forward to accomplish waste treatment and disposal. Ecology officials shared similar concerns. We believe our report accurately describes the potential effect that characterization difficulties could have on the remaining steps in the remediation program. For example, even DOE’s latest schedule for completing characterization could be in jeopardy. First, DOE is unsure if Westinghouse can maintain a core sampling rate of four to five samples per month. Recent additional controls placed on the tanks and other sampling problems may make this sampling rate difficult to achieve. Second, DOE’s projected completion of characterization by 2004 is based on a characterization approach that has not been validated. Third, quantities of certain waste constituents need to be determined to minimize uncertainties that affect the construction of treatment facilities. 
On the basis of our discussions with DOE and Ecology, we agree that facility design activities could proceed as characterization work continues, and we have modified our report accordingly. However, on the basis of this report and our previous work, we continue to believe that construction of treatment facilities should not be funded until the technical adequacy of DOE’s characterization strategy is confirmed or established by independent sources and sufficient waste characterization information is available to reliably define the requirements of those facilities. Most of our work was performed at DOE’s Hanford site in Washington State. To determine DOE’s progress in meeting the tank waste characterization commitments, we reviewed tank waste characterization milestones that DOE committed to with the Washington State Department of Ecology and the Environmental Protection Agency in the Tri-Party Agreement and with the Defense Nuclear Facilities Safety Board. We compared these commitments with DOE’s actual sampling results through September 1995, the latest month for which data were available. To identify impediments to progress and determine what impact these impediments could have, we reviewed tank characterization reports, engineering studies, characterization technical basis documents, characterization project strategy documents, and other materials. We reviewed Ecology and Safety Board reports and correspondence with DOE on concerns associated with the characterization program, and we reviewed program documents detailing program improvements in the management of the characterization project and in Westinghouse’s sampling capability. We also reviewed DOE’s and Westinghouse’s cost estimates of the characterization program and the Tank Waste Remediation System. 
We supplemented our reviews of reports and other documentation by interviewing DOE and Westinghouse officials, including the assistant manager for DOE’s Tank Waste Remediation System, the director of DOE’s characterization division, Westinghouse’s vice president for tank waste remediation, and various others with program responsibilities. We also interviewed officials from oversight agencies, including Ecology’s Tank Waste Remediation System coordinator and characterization team leader, and Defense Nuclear Facilities Safety Board members and their staff. We conducted our work from May 1995 through January 1996 in accordance with generally accepted government auditing standards.

As you know, 31 U.S.C. 720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight not later than 60 days after the date of this letter and to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this letter. We are sending copies of this report to appropriate congressional committees and other interested parties. We will also make copies available to others on request. Please call me at (202) 512-3841 if you or your staff have any questions. Major contributors to this report are listed in appendix I.

William R. Swick, Core Group Manager
Thomas C. Perry, Evaluator-in-Charge
Robert J. Bresky, Staff Evaluator
Drummond E. Kahn, Staff Evaluator
Stanley G. Stenersen, Senior Evaluator
GAO reviewed the Department of Energy's (DOE) progress in characterizing its tank waste at its Hanford site, focusing on: (1) whether it has met its characterization deadlines; (2) the impediments to meeting characterization deadlines; and (3) the impact that continued delays have on the characterization process. GAO found that: (1) over the past 10 years, DOE has spent over $260 million and made little definitive progress in characterizing the tank wastes at Hanford; (2) DOE has not been able to meet characterization deadlines for the 54 tanks with known safety problems; (3) the DOE contractor has been unable to characterize any of the 177 tanks as ready for remediation; (4) DOE and its contractor have had problems performing reliable top-to-bottom samples, gathering sampling data, reconciling tank contents, and developing an effective tank characterization management system; (5) disagreement exists as to what kind and how much information is needed to reliably predict actual waste quantities and build appropriate treatment facilities; (6) Congress, DOE, and private contractors need better sampling and characterization information to reliably predict total program costs; and (7) these uncertainties could undermine the savings DOE expects to realize by privatizing the tank waste remediation program.
Nearly a decade ago, the Department of Energy (DOE) embarked on a mission to deal with the environmental legacy of the Cold War. This DOE mission, which is expected to continue for many years, involves a number of activities, with the most ambitious and far-ranging being the cleanup of the hazardous and radioactive contaminants that resulted from the production of nuclear weapons at DOE facilities. The challenges of this task are technical, institutional, and economic. For example, thousands of tons of radioactive waste must be treated and put into permanent storage; contaminated soil must be stabilized; contaminated water must be treated; and nuclear reactors and materials-processing facilities must be decontaminated, decommissioned, and demolished. In some cases, no safe and effective technology is currently available to address the more complex contamination problems. In June 1998, DOE estimated that cleaning up the legacy of the Cold War will cost about $150 billion. However, DOE has also stated that the cost of cleaning up its contaminated facilities and sites can be significantly reduced through the use of innovative cleanup technologies. It supports the development of these technologies through its Office of Science and Technology (OST), within the Office of Environmental Management (EM). In 1989, DOE established EM to clean up and restore its contaminated facilities and sites in compliance with federal and state environmental laws and regulations. 
The Congress also directed the Secretary of Energy to establish “a program of research for the development of technologies useful for (1) the reduction of environmental hazards and contamination resulting from defense waste, and (2) environmental restoration of inactive defense waste disposal sites.” In response, DOE established the Office of Technology Development within EM to develop innovative technologies to support the waste cleanup and restoration efforts of EM’s program offices—the Offices of Waste Management, Environmental Restoration, and Nuclear Material and Facility Stabilization. The Office of Technology Development was renamed the Office of Science and Technology in 1994, when basic science research for waste cleanup was added to its responsibilities. OST’s projects are intended to produce technologies that could accelerate cleanups, reduce costs, enable cleanup activities for which there are no existing cost-effective technologies, or reduce risks to cleanup workers. From fiscal year 1990 through fiscal year 1998, the Congress appropriated approximately $2.5 billion for OST’s development of innovative waste cleanup technologies, and OST has initiated over 700 projects. OST’s budget for technology development activities in fiscal year 1998 is about $220 million. OST requested a total of $180.5 million for technology development activities for fiscal year 1999. OST develops technology at DOE’s national laboratories, private companies under contract to OST, and universities. Although OST is responsible for technology development, DOE waste sites are responsible for selecting the technologies they will use, with the review and approval of the Environmental Protection Agency (EPA) and state agencies that regulate DOE’s cleanups, and with input from the public involved with the site. To serve sites’ needs for cleanup technology, OST is organized into five major remediation and waste management problem areas (termed “focus areas”). 
OST first established focus areas in 1994 in order to better serve the cleanup sites by concentrating technology resources on each of the major cleanup problems DOE faces. OST currently has the following five focus areas:

- Mixed Waste Characterization, Treatment, and Disposal. Known as “mixed waste,” this focus area addresses the large inventory of mixed, low-level, and transuranic waste.
- Radioactive Tank Waste Remediation. Known as “tanks,” this focus area addresses the hundreds of large storage tanks containing over 100 million gallons of radioactive waste.
- Subsurface Contaminants. This focus area addresses hazardous and radioactive contaminants in soil and groundwater and the remediation challenges posed by numerous DOE landfills.
- Deactivation and Decommissioning. This focus area addresses the deactivation, decommissioning, and disposal of aging and contaminated DOE weapons complex facilities.
- Plutonium Stabilization and Disposition. This focus area addresses the over 20 tons of excess plutonium that must be stabilized.

OST has established a lead field office to manage each focus area. For example, the Savannah River site manages the Subsurface Contaminants Focus Area. EM has also established site technology coordination groups in each of its field offices to identify sites’ technology needs, provide information to OST and its focus areas, and communicate information about OST’s technology development projects to the cleanup sites. In 1994, the Assistant Secretary for EM established the Environmental Management Advisory Board (EMAB) to provide the Assistant Secretary with information, advice, and recommendations on issues confronting the EM program, including advice on the development and deployment of innovative technology for waste cleanup. EMAB has about 25 members from industry; academia; and private, federal, tribal, state, and local environmental groups. EMAB has been very active in studying OST and recommending improvements in its operations. 
DOE believes it will be very costly and take many years to clean up its waste sites if only conventional technology is used. For example, using the conventional method of removing contaminants from groundwater can involve pumping and treating the water for 30 years or more. In addition, no technology exists to address some cleanup problems. For example, no technology exists for some aspects of removing and treating the radioactive waste now in large tanks at several major DOE facilities. Furthermore, some cleanup activities could be dangerous or impossible for cleanup workers unless innovative technologies, such as remote robotic devices to clean inside radioactive waste tanks, are used. Those in the Congress and in DOE who led the effort to establish OST believed that the use of innovative technology would reduce the cost of waste cleanup. For example, in 1995, DOE estimated that it would cost between $200 billion and $350 billion and take another 75 years to complete the cleanup. However, DOE also estimated that the use of new technologies could reduce cleanup costs by a minimum of $9 billion to as much as $80 billion, depending on the cleanup scenario. More recently, in 1997, the Army Corps of Engineers reviewed cost savings estimates developed by OST for 37 of its technology projects and concluded that these 37 projects could potentially save about $20 billion over the use of conventional technology. DOE believes that cleanup costs could significantly exceed current estimates if innovative technology is not used. We have issued a number of reports and testified on the operation and management of EM’s technology program. Among other things, we have identified obstacles to the deployment of innovative technology at DOE’s cleanup sites. In 1992, we reported that EM had not established key management tools, such as cost estimates and schedules, and decision points for evaluating technology projects. 
In January 1993, EM implemented a management plan for the technology program that incorporated our recommendations. The program established cost estimates and schedules for projects. EM also developed decision points (called gates) and related requirements for evaluating projects and making “go/no-go” decisions. In 1994, we reported that officials at DOE cleanup sites might not be familiar with innovative technologies and might fear that using them could lead DOE to miss cleanup deadlines if a technology fails to perform as expected. In response to our report, OST took several actions, including establishing the site technology coordination groups discussed earlier, to improve communication on sites’ technology needs and the capabilities of newly developed technologies. In addition, to help ensure that development activities were concentrated on the most pressing cleanup needs, EM restructured its technology development program into the focus areas. In 1996, we reported that EM had not coordinated technology development to prevent duplication of effort, particularly between OST and the Office of Waste Management, which together had 60 projects to develop equipment to melt and immobilize waste. A key reason for the duplication was EM’s lack of a comprehensive list of technology development projects. EM subsequently developed a list of its technology projects. We also found that more technology projects were being started at the sites where the focus areas were physically headquartered. In following up on this situation in 1997, we found that this concentration had decreased. In 1997, we testified before the Subcommittee on Oversight and Investigations, House Committee on Commerce, that OST appeared to have made some improvements in its project management, but we had continuing concern about the extent of use of OST-developed technologies at DOE’s waste cleanup sites and the validity of OST data on deployments and expected cost savings. 
OST had also proposed a new initiative, Accelerated Site Technology Deployment (ASTD), to facilitate the use of its technologies. We expressed several concerns about the likely effectiveness of this initiative, which provides funding to DOE sites for the first use of an innovative technology. OST provided a total of approximately $26 million to 14 ASTD projects in fiscal year 1998. The Chairman and Ranking Minority Member of the House Committee on Commerce and the Chairman and Ranking Minority Member of its Subcommittee on Oversight and Investigations asked us to review EM’s Office of Science and Technology. Specifically, we were asked to determine (1) to what extent innovative technologies developed by OST have been deployed (used) at DOE sites and how this rate of deployment compares with the rates of other government organizations that develop environmental technologies; (2) what obstacles exist to deploying innovative technologies at DOE sites; and (3) what EM is doing to overcome obstacles to deploying innovative technologies. To determine the extent to which OST-developed technologies have been deployed at DOE sites, we obtained deployment information on OST’s projects from an OST management information system. This information provided project names and numerical identifiers, research stage, deployment sites (if any), and other project information, as of January 1998. We also obtained information about OST’s use and definition of the term “deployment” and OST’s procedures for entering and updating the information in this system. In order to assess the accuracy of OST’s deployment data, we used a random sample of the projects that OST listed as deployed, and we verified the claimed deployments with site operations officials. Upon finding a significant error rate, we used our sample results to estimate a range for the actual number of OST project deployments. The methodology for our verification is described in appendix I. 
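The sample-based estimate described above can be illustrated with a short sketch. This is a generic sample-to-population projection with a normal-approximation confidence interval, not GAO's actual estimator (which is described in appendix I), and the verified count of 21 of 30 sampled projects is purely hypothetical.

```python
import math

def estimate_deployments(n_sampled, n_verified, n_claimed, z=1.96):
    """Project the share of verified deployments in a random sample onto the
    full set of claimed deployments, with a normal-approximation 95-percent
    confidence interval and a finite-population correction, since the sample
    is drawn without replacement from a small population."""
    p_hat = n_verified / n_sampled  # observed rate of confirmable deployments
    fpc = math.sqrt((n_claimed - n_sampled) / (n_claimed - 1))
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n_sampled) * fpc
    low = n_claimed * max(p_hat - margin, 0.0)
    high = n_claimed * min(p_hat + margin, 1.0)
    return low, high

# Hypothetical inputs: 30 projects sampled from the 152 that OST claimed
# as deployed; suppose 21 of the 30 could be confirmed with site officials.
low, high = estimate_deployments(30, 21, 152)
print(f"estimated actual deployments: {low:.0f} to {high:.0f}")
```

Because the hypothetical inputs and the estimator are only illustrative, the resulting bounds differ from the 88-to-130 range in the report; the sketch shows only the general mechanics of projecting a verified sample onto the full claimed population.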
To compare the rate of deployment for OST’s technologies with the rates of other government organizations that develop environmental technologies, we used database searches and contacts with federal agency officials to identify federal government programs that develop environmental technologies. We contacted the eight government programs whose research and development work was most comparable to OST’s in mission and scope, and two of these programs—EPA’s Superfund Innovative Technology Evaluation Program and the Department of Defense’s Environmental Security Technology Certification Program—were able to provide deployment data for comparison with OST’s data. We also contacted two private sector organizations that develop environmental technologies but found that they did not maintain deployment data. Because the two federal programs providing deployment data conduct technology demonstrations but not earlier phases of research and development, we identified OST projects that had reached a similar stage of maturity to provide an equitable group for comparison. (See app. II for a detailed discussion of these two programs.) To identify obstacles that exist to deploying innovative technologies at DOE sites, we first reviewed past reports on this subject by GAO, DOE, and advisory groups to DOE. In order to obtain more current and specific information about obstacles to deployment and EM’s progress in overcoming them, we visited five DOE sites: Hanford (Washington State), Savannah River (South Carolina), Oak Ridge (Tennessee), Fernald (Ohio), and Lawrence Livermore National Laboratory (California). We selected these sites to provide varied perspectives: The first three sites are among the largest DOE cleanup sites; Fernald is far along in its cleanup efforts and represents a medium-sized cleanup effort; and Lawrence Livermore has a smaller cleanup effort and budget. 
For the site visits, we identified OST-developed technologies that were either selected for use at the site, considered for use but not selected, or potentially applicable to the site’s cleanup problems. We identified technologies to discuss with site officials from our meetings with managers of OST’s focus areas, records maintained by EM’s Office of Environmental Restoration, and discussions with headquarters EM officials. The technologies were judgmentally selected to provide coverage of (1) EM’s various cleanup challenges and the related OST focus areas and (2) innovative technologies selected or not selected for use. We discussed with EM field personnel and contractor staff the obstacles they faced to using particular technologies, the ways they addressed and overcame these obstacles (for deployed technologies), and the reasons they did not select the technologies. We discussed a total of 30 OST-developed technologies with one or more of the five sites and obtained documentation on related selection decisions. We analyzed this information to identify (1) commonly cited obstacles to deployment and (2) the means by which sites overcame these obstacles in those cases in which OST-developed technologies were selected or in use. In order to identify EM’s actions to overcome deployment obstacles, we reviewed a memo from the Assistant Secretary for EM that directed a number of actions to increase deployment, and we obtained information about the status and results of these actions. We also interviewed OST managers to identify additional actions under way within OST and obtained related documentation. To assess the adequacy of these actions, we compared the EM and OST actions to the obstacles to deployment that we had identified. We provided a draft of this report to DOE for its review and comment, and a draft of chapter 2 and appendix II to the Department of Defense and EPA for their review and comment. 
DOE’s and the Department of Defense’s written comments and our responses are included in appendixes III and IV. We performed our review from August 1997 through September 1998 in accordance with generally accepted government auditing standards. While OST has initiated 713 technology development projects, we estimate that EM has deployed between 88 and 130 of these projects, for an overall deployment rate of 12 to 18 percent. In contrast, OST has reported that 152 projects have been deployed one or more times, for an overall deployment rate of 21 percent. OST’s overstated deployment information is the result of several factors, including its rapid compilation of deployment data in response to congressional requests and the lack of a formal definition of what constitutes a deployment. Most organizations we contacted, including some private technology developers, did not track deployment data comparable to OST’s. We contacted eight government programs and two private sector programs engaged in environmental technology research and found only two that could provide data on deployment. In comparing data from these two organizations and OST, we found that OST’s deployment rate was close to that of the 2-year-old Environmental Security Technology Certification Program (ESTCP) in the Department of Defense and somewhat lower than the rate for the 12-year-old Superfund Innovative Technology Evaluation (SITE) program in EPA. However, it is important to recognize that the value of deployment rate comparisons with other organizations is limited. To assess the overall performance of a research and development (R&D) program like OST, other measures in addition to deployment would be relevant. OST developed deployment data in response to a November 1996 request from the Subcommittee on Oversight and Investigations, House Committee on Commerce. As of January 1998, OST’s database showed that EM had initiated 713 technology development projects since OST’s inception. 
On the basis of our verification and analysis of these data, using a 95-percent confidence interval, we estimate that EM actually deployed from 88 to 130 projects, to achieve an overall deployment rate of 12 to 18 percent for the 713 projects. In contrast, OST has reported that 152 of the 713 technology projects initiated since its inception have been deployed. Thus, according to OST’s data, about 1 in 5 OST technologies have been deployed one or more times, for an overall deployment rate of 21 percent. We also found that OST had overstated the number of deployment instances reported for each technology project. OST’s database listed a total of 283 deployment instances for the 152 projects claimed as deployed. We estimate that of the 283 deployment instances claimed by OST, only 137 to 216 have actually occurred. Table 2.1 lists OST’s data, the error rates we found, and our estimates of actual deployments based on the error rate found in our sample of 30 projects. OST’s inaccurate deployment data resulted from several factors. Specifically, OST compiled deployment data quickly, in response to a congressional request that came 7 years after the program’s inception, because it had not previously maintained comprehensive data. In addition, the lack of a formal definition for deployment led to differing understandings among the focus area personnel responsible for compiling the data. Finally, OST has begun only recently to establish procedures for entering and updating project data. If such procedures had been in place early on, they would have uncovered the need to formalize the definition of deployment. Some inaccuracy in OST’s deployment data may have been due to the fact that the data were compiled quickly. OST prepared deployment data in response to a November 1996 request from the Chairman of the House Committee on Commerce. Previously, OST had not maintained comprehensive deployment data on its projects. 
Instead, OST tended to focus its performance measures on completed demonstrations. For the November 1996 request, OST gathered deployment data for its projects over a period of several months and provided the information to the Chairman in April 1997. At that time, OST reported that 150 projects had been deployed and an additional 41 projects had been selected for use in the future (for a total of 191 past and future deployments). Another reason for OST’s inaccurate data has been the lack of a formal definition of deployment, leading to different understandings among the focus area personnel who collected the deployment data about what should be counted as a deployment. According to OST managers, while gathering data to respond to the Committee’s request, OST headquarters officials told focus area personnel to refer to an earlier definition of implementation for the meaning of deployment but did not distribute new written guidance. This definition, which OST had formalized and distributed in April 1996, defined implementation to mean that the technology was used or selected for use to meet specified user performance measures (e.g., completion of an assessment or treatment of waste for disposal). However, officials of the Subsurface Contaminants Focus Area provided us with a definition of deployment that they received along with the instructions for responding to the Committee’s data request. This definition stated that the number of deployments means the number of “hot” demonstrations (that is, demonstrations in radioactive environments) and that deployment site means the location of a hot demonstration. We found that OST focus area personnel entering the data frequently regarded demonstrations as deployments. For example, OST counted as a deployment the use of a characterization technology called Laser Ablation/Mass Spectroscopy at the Pacific Northwest National Laboratory in Washington State. 
In response to our questions, site contractor officials stated the technology’s use at the laboratory had been a demonstration in which data derived from the laser technology were compared with data derived from a conventional technology. At this time, the site cannot rely upon the laser technology to accomplish its goals for characterization. While OST has issued a definition of deployment and is taking other steps to improve data quality, written procedures for data verification have not yet been developed. In August 1998, the Acting Deputy Assistant Secretary for OST issued a memo that formally defined deployment. The definition appropriately emphasized that deployments must accomplish site objectives, such as the completion of assessments, cleanups, or the treatment and disposal of wastes. The memo stated that this definition is to be used for performance measurement. OST has also completed a data verification effort for those projects considered deployed during fiscal year 1997. It used verification by site personnel and other data sources to improve the accuracy of this portion of the data. According to OST officials, the office intends to continue similar verification efforts in the future. However, these data verification plans are not reflected in OST’s draft procedures for its database, which do not specify a method of data verification. The procedures, drafted in January 1998, identify OST’s focus areas as responsible for entering data and ensuring their quality and completeness. These procedures also require that the data on ongoing projects be updated at least once per quarter (every 3 months). However, the draft procedures do not identify any means by which the data are to be verified or spot-checked for accuracy. While site technology coordination groups can comment on the deployments listed in the database, the procedures do not state any requirement for data review and concurrence by these groups or other site officials. 
The Acting Deputy Assistant Secretary for OST told us that OST plans to obtain further advice about verification methods and then develop written procedures. OST has not yet determined whether, or to what extent, to verify data from the years prior to 1997 because of the time and resources involved. According to the Acting Deputy Assistant Secretary for OST, the office is seeking clarification from the House Committee on Commerce on the degree of accuracy or certainty needed. We compared the deployment rate for OST’s technologies with the deployment rates for technologies sponsored by EPA’s SITE program and the Department of Defense’s ESTCP. The SITE program is engaged solely in the environmental technology demonstration and implementation stages of R&D. Similarly, ESTCP demonstrates and validates technologies and funds environmental technologies that have progressed to the stage at which field demonstrations are warranted. Taking into account the limitations of this comparison, OST’s deployment rate for projects at comparable stages of development falls between the rates of the two organizations that provided data, as shown in table 2.2. (App. II discusses in detail how we developed each comparison.) Comparisons of OST’s deployment rate with the rates of other organizations must be viewed with caution when assessing how well EM is doing in deploying OST-developed technologies. We found few organizations that engage in the range of environmental research OST performs, and no organization we contacted routinely tracked deployment data on its projects. Data provided by the two organizations differed widely in source and composition. Finally, many individuals we contacted question whether a deployment rate is a sufficient benchmark for successful R&D. Most organizations we contacted, including some private technology developers, did not track deployment data comparable to OST’s. 
Of the eight government programs and two private sector programs engaged in environmental technology research we contacted, only the SITE program and ESTCP could provide data on deployment. Even these two programs needed to compile their information so that it could be expressed as deployment rates. Table 2.3 shows the entities that we contacted. Furthermore, we found that only one of the other government programs listed in table 2.3 engaged in nearly the full range of environmental R&D that OST performs. OST’s R&D includes basic science research, applied research and engineering development, field testing and demonstration, and implementation by the end user (commercialization). Most of the governmental organizations we contacted performed either the early stages of R&D or the later stages, but not both. Technology development efforts undertaken at the early stages have more unknowns and are likely to involve a greater risk of failure than efforts at the later stages. Since we would expect performance results to differ for each stage, meaningful comparisons can only be made among projects or programs that are at similar stages of R&D maturity. Two organizations provided us with very different types of data. EPA’s SITE program had accumulated survey data on the number of contracts its technology vendors had obtained over about 8 years. We agreed that a contract for use could be considered deployment of the technology. As might be expected, the survey response rate was less than 100 percent, unlike the OST and ESTCP data, which include all of these agencies’ technology projects. Therefore, the data from EPA’s SITE program are incomplete, and the deployment rate for SITE could actually be higher. The Department of Defense’s ESTCP provided a description of the transition (deployment) status for all of its projects from the program’s first 2 years of existence.
Since ESTCP is a relatively new program, its deployment data are based on a limited number of projects and may be less representative of the program’s future performance. We did not verify the accuracy of these organizations’ deployment data, but we reviewed their available project summaries and believe the organizations’ approaches were reasonable responses to our request. Nevertheless, differences in how the programs defined deployment, and whether they counted incomplete projects, will affect computed rates. As we have previously reported, measuring the performance of R&D programs is difficult. Performance measures used in other federal R&D programs include the scientific peer review of projects, numbers of patents issued, and studies of publications. Recent R&D management literature suggests that certain measures, such as the number of patents issued, are best suited to earlier stages of research, while outcome measures, such as deliverables and customer satisfaction, are more relevant for later-stage research. In this context, a deployment rate measure would be most useful when applied to more mature projects. At the same time, program managers need to assess how successful the program has been at selecting early-stage projects with high potential for future payoff. Officials in a number of programs we contacted told us that deployment has only recently been raised as a possible performance measure. Furthermore, programs performing earlier stages of R&D were less likely to have any deployment data. Developers of later-stage technologies believed that the deployment rate is an incomplete performance measure, and that cost savings or some measure of dollar impact should also be used to evaluate program success. EM is considering developing a performance measure that would assess cost savings from the use of innovative technologies. EM and OST recognize that deployment is not the only relevant measure of success in technology development. 
We reviewed performance measures established for OST for fiscal years 1994 through 1997 and found that the main measures used were the number of completed technology demonstrations and the number of technologies made available for use—that is, the number that had completed development. In fiscal year 1998, OST’s performance measures are (1) demonstrate 35 new technologies, (2) make 40 alternative technologies available for use with cost and engineering data, and (3) perform 49 deployments of new technologies. As described in more detail in chapter 4, performance measures for deploying innovative technologies are also being applied to EM’s field operations offices in fiscal year 1998, and OST is considering developing additional performance measures for its focus areas that address technologies in various stages of development. As EM’s cleanup program has matured, several of the obstacles to using innovative technologies reported previously by us, EM, and others have been addressed. For instance, DOE sites and their regulators have improved their working relationships, and, in cases where innovative technologies were selected, DOE sites have found ways to address regulator concerns about whether these technologies will achieve required objectives. However, some obstacles, internal to EM and OST program operations, continue to slow the deployment of innovative technologies, and, in some cases, have led OST to spend millions of dollars for technologies that the cleanup sites do not want. The most significant and continuing of these internal obstacles has been EM’s and OST’s failure to involve users sufficiently in the design and development of technology targeted for use at the cleanup sites. As a result, OST has developed generic technologies that do not meet site-specific needs or that require modification to make them usable by the site.
However, EM has not clearly defined responsibilities and funding sources for modifying technologies among OST and potential technology users. Furthermore, OST still has no clearly defined role in helping sites select the appropriate technology and infrequently provided technical assistance in the cases we reviewed. Several factors contribute to these problems. First, prior to 1996, OST had not comprehensively assessed users’ technology needs and linked these needs with technology development efforts. Second, OST has not fully implemented its system for monitoring and, if necessary, modifying or terminating ongoing technology development projects—a system that would require interaction with technology users. DOE’s field and contractor staff face a number of challenges when attempting to use an innovative cleanup technology. Past reports by us, EM, and advisory groups have catalogued the challenges: the perceived risks of exceeding projected costs or failing to meet time schedules; the need to convince regulators and stakeholders of the advantages of innovative technology; and technical problems, including the need to modify a technology to make it fit a specific situation. However, as the EM technology program has matured and site personnel, regulators, and stakeholders have become more aware of the benefits of using some innovative technologies, some obstacles have diminished in importance. Furthermore, when the use of a new technology is clearly and significantly advantageous, cleanup sites make a strong effort to overcome any obstacles to its use. Specifically, when regulators and stakeholders are concerned about a technology’s effectiveness, sites have provided additional data or testing, occasionally modified a technology to satisfy a concern, or implemented an innovative technology in phases to obtain performance data.
For example, according to Hanford officials, using a new technology to encapsulate certain carbon-based waste would be much less costly than incinerating it. State regulations, however, called for incinerating such waste. Nonetheless, Hanford persisted and obtained a waiver from the state to encapsulate the waste. At Oak Ridge, DOE and its contractors wanted to use a frozen soil barrier to contain a relatively small pool of water that had been contaminated with reactor waste. However, regulators and stakeholders were skeptical that this innovative technology would work and be cost effective. Oak Ridge demonstrated the technology to obtain cost and performance data and provided this information to regulators and stakeholders. The technology has since gained wide acceptance by these groups. Some technology may have to be modified to satisfy regulatory concerns. For example, Hanford officials wanted to test an innovative technology for cleaning up contaminated soil, which they believed was better than current methods. However, regulators were concerned about the possible expulsion of carbon tetrachloride contaminants into the air. Hanford officials convinced the regulators to allow them to experiment with the new technology by offering to add a filter to the equipment to catch any contaminants. The modification was a low-cost and easy addition to the equipment. In some cases, sites implement a technology in phases to obtain performance data and to assure themselves and convince regulators and stakeholders of the technology’s viability. For example, OST funded the development of a robotics device, called Houdini, that could help clean up waste in tanks. Oak Ridge, with the help of the manufacturer, adapted Houdini to help clean up radioactive waste stored at the bottom of the site’s large tanks. However, because Houdini had never been used to clean up radioactive waste, no information was available on the device’s performance and reliability. 
Oak Ridge therefore had to implement Houdini in phases: nonradioactive “cold” testing, followed by treatability tests in a lower radiation environment and, finally, “hot” testing on the radioactive waste in its tanks. Field officials also told us that the projects in which OST and an EM operating group get involved as a joint venture seem to work well. In these cases, OST provides funding and some technical assistance, and the operating group also provides funding and implements the project. If there are also partners from industry, they further enhance the chances for success. For example, at Hanford, OST and EM’s environmental restoration group are participating in a large project to demonstrate a number of technologies that can be used to put Hanford’s old, shutdown reactors into safe interim storage. Hanford officials were convinced that if the demonstrated technologies were successful, the time needed to prepare the reactors for storage could be cut by 7 years. The demonstration project started in 1996, with contributions totaling about $8 million from OST and about $16 million from the environmental restoration group. However, the project did not have the extra money to make needed refinements and modifications to technologies. Consequently, Hanford officials suggested partnering with private contractors who would assume the risk and cost of getting the technologies to perform. OST’s Deactivation and Decommissioning Focus Area, which routinely works with the private sector, helped to bring about this partnership with private contractors. As of July 1998, the project had successfully demonstrated 20 technologies and deployed 13 of them at Hanford’s C Reactor, two other Hanford reactors, and a number of other DOE reactors throughout the complex. In addition, the technologies have been transferred to the commercial reactor sector and will be used to help put the nuclear power plant in Chernobyl, Ukraine, into safe storage.
Despite the progress that has been made, some obstacles internal to EM and OST operations continue to slow the deployment of innovative technologies. In particular, OST has developed technologies that tend to be generic solutions to cleanup problems and, if usable at all, have to be modified to fit a site’s specific problem. These problems occur in part because OST had not, until 1996, comprehensively assessed the technology needs of the cleanup sites and has not involved potential technology users in the development of technology that might be used to address specific cleanup problems. Without user involvement, there have been no identified customers for some of the technology that OST has sponsored. For example, of the 107 technologies that OST has completed, 31 technologies, costing $71 million to develop, have not been used by cleanup sites. According to EM field and contractor personnel responsible for waste cleanup, in many cases, OST technologies do not meet their needs. They said that OST has many times assumed that “one-size-fits-all” and therefore has developed generic solutions to cleanup problems. However, these solutions either do not fit a site’s specific needs or must be modified before they can be used. For example, Fernald workers needed portable equipment that would allow them to characterize contamination within buildings without climbing ladders to obtain samples from contaminated areas. OST said that, although its laser-induced fluorescence imaging equipment had not been field-tested, the equipment had been designed to meet needs such as Fernald’s. However, when Fernald workers attempted to use the equipment, they found that it was not ready for field use. It was cumbersome (not really “portable”) and light interfered with measurement readings. As a result, the equipment was not usable and was returned to the manufacturer for modifications. Consequently, Fernald personnel continued to take samples from the contaminated building areas by hand. 
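The cost of this gap can be illustrated with simple arithmetic on the figures cited above. The short sketch below (Python, our illustration only; variable names are ours) computes the deployment rate implied by the 107 completed technologies and the 31 that sites have not used.

```python
# Illustrative arithmetic using the figures cited in this chapter:
# of 107 completed OST technologies, 31 (costing $71 million to
# develop) had not been used by cleanup sites.
completed = 107
unused = 31
unused_cost_millions = 71

deployed = completed - unused            # technologies actually in use
deployment_rate = deployed / completed   # share of completed work deployed

print(f"Deployed: {deployed} of {completed}")          # Deployed: 76 of 107
print(f"Deployment rate: {deployment_rate:.0%}")       # Deployment rate: 71%
print(f"Development cost of unused work: ${unused_cost_millions} million")
```

On these figures, roughly seven of every ten completed technologies found a user, with $71 million spent developing the remainder.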
Fernald officials said that although they realized the OST equipment had not been thoroughly tested before they tried it, they believed that if OST had involved them in the design and development of the equipment, the problems would have been avoided, or at least identified and corrected earlier. Similarly, officials at DOE’s Hanford site tried two OST technologies that promised to support faster remediation of contaminated soil but had to reject them because they were not designed to work in Hanford’s arid soil. The officials said that the concept for faster remediation of contaminated soil was attractive and probably would have been acceptable to Hanford’s regulators, but the generic design of the technologies did not meet Hanford’s specific needs. Furthermore, some site officials said that they would like to use some OST technologies, but the technologies require modification to fit the site’s situation. They pointed out that it is not clear who should make and pay for these modifications. For example, a project manager at DOE’s Savannah River Site told us that he would like to use more innovative technology in his projects, but it is unclear who is responsible for making site-specific modifications, and his program does not have funding to make such modifications. At Hanford, officials were interested in using OST’s Electrical Resistance Tomography to help detect leaks in their high-level radioactive waste tanks. (Hanford has 67 known or suspected leaking tanks.) However, a Hanford official said that the technology needed substantial fine-tuning to make it work on the Hanford tanks, and no funding was available to pay for this. He said that it was unclear who is responsible for funding modifications to OST technologies. When only minor, inexpensive modifications are required, site representatives said that they have made and usually paid for them.
But other technologies that are of interest to sites would require more extensive and more expensive modifications. Without a clear policy on who is responsible for modifying the technology and paying for the modification, the sites are likely to reject the innovative technology and select a known alternative. Until its reorganization in 1994, OST did not involve the cleanup sites in identifying technologies that needed to be developed and did not conduct comprehensive needs assessments until 1996 and 1997. Therefore, most of the technologies developed through OST were not based on a comprehensive assessment of the technology needs of those responsible for cleaning up DOE waste sites. Instead, OST consulted with its developers at the national laboratories in deciding which technologies it would sponsor to solve sites’ cleanup problems. These technical solutions, according to potential technology users, tend toward the “one-size-fits-all” development philosophy. We reported in 1994 that technology needs had not been comprehensively identified to allow prudent research decisions, nor had various environmental program offices in headquarters and in the field worked together effectively to identify and evaluate all of the possible technology solutions available. In 1995, and again in 1996, the Environmental Management Advisory Board (EMAB) told the Assistant Secretary of Environmental Management that the lack of a comprehensive assessment linking identified needs with technology development efforts was a “primary barrier” to technology deployment. EMAB said that technology development and deployment must be linked together as a single system. Site technology coordination groups, established in 1994, made early attempts to assess the needs of potential technology users.
However, because OST considered data from these early surveys unreliable, it and the site groups developed guidance and worksheets for a more comprehensive assessment, which the site technology coordination groups carried out in October 1996. In October 1997, an updated needs assessment and a database that matches technology needs with appropriate existing technology or the future efforts of technology developers were completed, according to the director of OST’s Office of Technology Systems. In addition to not involving the cleanup sites in identifying technology needs, OST has not sufficiently involved users in designing technologies and monitoring their development to help ensure that they meet users’ needs. In 1992, we recommended that EM institute a technology development management system with explicit decision points at which the technology would be assessed to determine whether development should continue or be terminated. OST established its “Technology Investment Decision Model” (called the “gates system”) to do this. The gates system satisfies our 1992 recommendation and was intended to be “a user-oriented decision-making process for managing technology development and for linking technology-development activities with cleanup operations.” However, OST has not fully implemented the gates system and thus cannot be certain that appropriate technology is developed to meet the needs of DOE’s cleanup sites. Under OST’s gates system, the focus areas are to assess a technology’s development at six stages, from basic research through implementation. At each stage, the focus area is to make a go/no-go decision, with input from potential users. The critical decision points include the following:

Gate 1: Entrance Into Applied Research Stage. To pass through gate 1 and enter this stage, a proposed technology must be shown to address national interests and priority environmental needs. EM guidance states that if a technology does not address a specific need, it should not pass through gate 1.

Gate 2: Entrance Into Exploratory Development Stage. To pass through gate 2 and enter this stage, a technology has to be linked with the specific needs of an identified user.

Gate 3: Entrance Into Advanced Development Stage. To pass through gate 3 and enter this stage, the technology must be able to meet an identified user’s specific performance requirements. In addition, it must be documented that the research to develop the technology is expected to produce results consistent with the user’s time frame for deployment and implementation.

Gate 4: Entrance Into Engineering Development Stage. To pass through gate 4 and enter this stage, the technology must be shown to meet the user’s specific needs in a timely manner. In addition, it must be documented that the proposed innovative technology will be more cost-effective than current methods or other emerging technology.

Gate 5: Entrance Into Demonstration Stage. To pass through gate 5 and enter this stage, the identified user for the technology must make a commitment to deploy the technology if it meets performance requirements. In addition, the user must agree to share in the cost of and the responsibility for demonstrating the technology.

Gate 6: Entrance Into Implementation Stage. To pass through gate 6 into implementation, the technology must successfully complete a “real world” demonstration, either at a DOE site or another location, using actual waste streams and/or anticipated operating conditions. In addition, it must be documented that the technology has proven to be viable, cost-effective, and applicable to the users’ needs.

As this discussion of the gates system shows, OST’s focus areas must identify a user for the technology in the early stages of development.
Furthermore, this user must stay involved throughout the development process to ensure that the technology will meet the needs and implementation schedule of the user. OST, however, has not fully implemented its gates system to involve potential users in the assessment of technology that it is developing, and, in some cases, OST has not identified an end user for the technology. Furthermore, a review by EM and EMAB representatives, completed in late 1997, revealed that OST’s focus areas do not consistently use the gates system and do not consistently involve potential technology users in technology development decisions. EMAB has pointed out in numerous reports that OST has failed to rigorously apply the gates system. EMAB has stated that OST should use the gates system to identify and terminate technologies that have no identified customer, are not cost-effective, or have limitations that may increase the risk of failure when used. According to the Chairman of EMAB’s Committee on Technology Development and Transfer, OST officials told him that they did not rigorously apply the gates system because it yielded results that OST and technology developers at the laboratories did not like—that is, indicating that some technology projects should be terminated. Similarly, representatives of one of OST’s focus areas told us that OST does not rigorously use the gates system because it would force OST to terminate technologies that have no identified customer, do not meet users’ needs, are technically limited, or have some other fault. The manager of the Subsurface Contaminants Focus Area told us that his focus area had rigorously applied the gates system and terminated some technologies, which led to a confrontation with the laboratories developing the technologies. 
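One way to make the go/no-go logic of the six gates concrete is the following schematic sketch (Python; the data and function names are ours, not OST's, and the criteria are abbreviated from the gate descriptions above). A technology advances only as far as the last criterion it satisfies; development stops, a "no-go," at the first unmet criterion.

```python
# Schematic model (our illustration, not an OST system) of the six-gate
# go/no-go process: each gate pairs a development stage with the criterion
# a technology must satisfy to enter that stage.
GATES = [
    (1, "applied research",        "addresses national interests and priority environmental needs"),
    (2, "exploratory development", "linked to the specific needs of an identified user"),
    (3, "advanced development",    "meets the user's performance requirements and time frame"),
    (4, "engineering development", "documented as more cost-effective than current or emerging methods"),
    (5, "demonstration",           "user commits to deploy and to cost-share the demonstration"),
    (6, "implementation",          "successful real-world demonstration on actual waste streams"),
]

def furthest_gate(criteria_met):
    """Return the number of the last gate passed; a 'no-go' decision
    should be made at the first criterion that is not met."""
    passed = 0
    for number, _stage, criterion in GATES:
        if not criteria_met.get(criterion, False):
            break
        passed = number
    return passed

# Example: a technology with an identified user but no performance data yet.
status = {GATES[0][2]: True, GATES[1][2]: True}
print(furthest_gate(status))  # 2 -> cleared for exploratory development, no further
```

The point the model makes is the one EMAB made: applied rigorously, the gates force termination of projects that never acquire an identified user, rather than letting them continue to completion.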
The Director of OST’s Office of Technology Systems told us that the gates system was never fully implemented because staff were confused by other evaluations and OST’s reorganization into focus areas, which were taking place at the time the gates system was instituted. He said that the gates system was currently not being used but would be reinstituted in the future. According to the Acting Deputy Assistant Secretary of OST, the criteria of the gates system are still valid, but when focus areas tried to use the gates system, their approach was inappropriate and did not work. Specifically, he stated that focus areas set up panels to periodically review their projects according to the gate criteria. Instead, the gates system was intended to be used on an ongoing basis, so that the focus areas could determine whether requirements for the various stages of technology development, including user involvement, had been met. According to this official, OST’s intent was that technology developers and technology users have frequent interaction. OST has not fulfilled its role of providing technology users with the technical advice and assistance that they need to identify solutions to cleanup problems and to help implement those solutions. Focus areas’ ability to provide technical help varies widely, although this was a principal mission when these groups were established in 1994. Some site officials responsible for cleanup told us that they are reluctant to try new technologies without a reliable source for advice and assistance, but some are reluctant to seek help from the focus areas because they do not trust the focus areas’ abilities. EM established the focus areas in part to provide technology users with technical advice and assistance. However, EMAB has consistently noted the lack of technical knowledge in some focus areas and suggested that this problem be addressed. 
Similarly, we found that cleanup sites are skeptical of the technical expertise of some focus areas and rarely call upon them for assistance. EMAB believes that the focus areas need to become experts not only in OST-sponsored technology but also in other domestic and foreign technology that might help solve waste cleanup problems. EMAB reported in January 1998 that some focus areas do not know the state-of-the-art technology for their area. The Chairman of EMAB’s Committee on Technology Development and Transfer told us that the Tanks Focus Area and the Deactivation and Decommissioning Focus Area seem capable, but he said that EMAB is concerned about the capability of the Subsurface Contaminants Focus Area, which has the largest workload by type of waste problem. During our visits to five cleanup sites, we found that the sites infrequently sought technical assistance from OST and its focus areas. Site officials said that technical assistance would be helpful in deploying new technologies, but some are not convinced of the focus areas’ technical expertise. Furthermore, they preferred to go directly to a vendor for technical assistance because the vendor was much more knowledgeable than OST. In 1994, we recommended that OST be given a formal role in sites’ selections of technologies to solve cleanup problems. For example, OST could formally take part in sites’ feasibility studies to identify and analyze technologies that could potentially solve a specific waste cleanup problem and to help a site decide which technology to use. However, some site officials told us that OST and its focus areas are not familiar enough with their sites’ waste cleanup problems and appropriate solutions. They said that our recommendation was not taken because site officials are skeptical of OST’s ability to provide quality technical advice and assistance and therefore are reluctant to allow OST more of a role in selecting cleanup technologies for their sites. 
The Acting Deputy Assistant Secretary for OST told us that he is aware of this problem and has directed the focus areas to become more technically competent and supportive. He said that providing technical assistance should be routine for the focus areas; they should be out in the field providing this help, not waiting in the office for the sites to call them. He emphasized that if the focus areas are not able to provide expert technical assistance, he will look to other groups, perhaps the national laboratories, to provide needed technical assistance. EM management devoted little attention to the deployment of innovative technologies until a congressional oversight hearing in May 1997 criticized EM’s performance in deploying technology. Following the hearing, the Assistant Secretary of EM issued a memorandum in July 1997, directing OST and other EM offices to initiate specified actions designed to facilitate technology deployment. Some of these actions have already been completed, and the remainder were to be completed by September 30, 1998. These actions establish responsibilities, require the development of performance measures for technology deployment, establish the Technology Acceleration Committee of upper-level EM and field managers, require sites to develop deployment plans, and continue the Accelerated Site Technology Deployment program that funds individual projects. OST has additional initiatives under way, including establishing technology-user steering committees and developing multiyear plans for technology development. However, EM’s efforts only partially address the internal obstacles limiting deployment. On the positive side, EM has established deployment performance measures for field sites and required sites to develop deployment plans. Users’ involvement in developing overall plans and priorities for OST’s work is also improving. 
On the other hand, although the initiatives provided for upper management attention through the Technology Acceleration Committee, the future of this Committee is uncertain because of the departure of EM’s Assistant Secretary, who established it. According to EM officials, a broader executive committee addressing EM issues may take its place. EM did not carry out its plans to include deployment in the annual performance expectations of its senior managers, considering their membership in the then-active Technology Acceleration Committee to be sufficient to hold managers accountable. In addition, EM has not yet improved developer-user cooperation for individual projects. Specifically, EM’s initiatives do not require OST to use its existing decision process for technology development (the gates system), which would require user involvement at various stages in the development process. Furthermore, EM has yet to determine how it will provide deployment assistance to cleanup sites to (1) more routinely provide technical assistance in selecting and implementing innovative technologies and (2) make modifications to completed technologies to better meet sites’ needs when it is cost-effective to do so. In a July 1997 memo, EM’s Assistant Secretary stated that technology deployment is the responsibility of all senior EM management, including the managers of EM’s operating groups, OST, and field offices. EM management had not previously emphasized technology deployment, and this was the first formal assignment of responsibility for deployment. The Assistant Secretary also directed that performance measures based on technology deployment be established for those groups involved with deployment efforts and be included in the performance expectations for senior managers. 
In response, EM has instituted or is planning performance measures addressing the deployment of innovative technologies at several levels: (1) DOE field sites undergoing cleanup, (2) contractors that manage the DOE field sites, and (3) OST and its focus areas. Field sites were also required to submit deployment plans addressing both their overall approach to utilizing innovative technologies and their plans to achieve deployments in specific cleanup projects. EM continues to refine its performance measures and has asked EMAB for advice about improving performance measures at the various levels to help increase deployment. In responding to our written inquiry to EM management in March 1998, the Acting Deputy Assistant Secretary for OST stated that “in analyzing the most appropriate and optimum way” to accelerate technology deployment, EM management concluded that deployment goals can best be achieved by holding those at the point of implementation of new technology—the field sites—responsible for deployment. EM has established two indicators to measure field sites’ efforts to use innovative technology to clean up waste sites: (1) the number of technologies deployed annually and (2) life-cycle cost savings resulting from the use of innovative technology. For the present, annual targets for the number of deployments are based on the amount of annual EM funding a site receives. EM established a target that requires field offices to agree to deploy one new technology for every $100 million in annual funding that they receive. For example, DOE’s Oak Ridge site will receive about $600 million in EM funding in fiscal year 1998 and is therefore expected to use six new technologies a year in its effort to clean up nuclear waste. For fiscal year 1998, field sites have agreed to deploy a total of 49 new technologies, which can be from OST or other sources. OST believes that the majority of these new technologies will be ones that it has sponsored. 
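The funding-based deployment target described above reduces to simple arithmetic; a minimal sketch in Python (the function name and the integer rounding are our assumptions for illustration, not DOE's):

```python
def deployment_target(annual_em_funding_dollars):
    """EM's target: field offices agree to deploy one new technology
    for every $100 million in annual EM funding they receive."""
    return int(annual_em_funding_dollars // 100_000_000)

# Oak Ridge: about $600 million in EM funding for fiscal year 1998,
# so the site is expected to use six new technologies that year.
assert deployment_target(600_000_000) == 6
```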
Field sites must also submit site-specific deployment plans for innovative technologies. The plans, most of which were submitted in May and June 1998, describe the sites’ overall approaches to deploying innovative technologies, such as processes for identifying deployment opportunities and involving regulators. The plans also specify opportunities to deploy innovative technologies in the sites’ cleanup projects. For instance, the plans describe the schedule for technology deployments, projected benefits from using the technologies, and funding requirements. In the future, EM may establish performance targets for field sites that are based on the amount of savings that would be produced by using innovative, rather than conventional, technology over the life of a project. These measures were not established in fiscal year 1998 because EM lacked a standard methodology for calculating cost savings. However, in March 1998, EM completed a draft of a standardized process for calculating these savings. The need for contract incentives for the use of innovative technologies has been broadly recognized by EM managers in headquarters and the field. Each of the five sites that we visited had used performance measures addressing deployment for the site’s management contractor. Some sites have experimented with different approaches to determine which measures work best. For example, at Savannah River, DOE tried performance-based incentives for its contractor in 1995 and 1996 that were based on the number of innovative technologies used and the associated cost savings; then, in 1997 and 1998, it switched to incentives based on the cost savings achieved—regardless of whether conventional or innovative technologies were used. 
According to DOE’s Assistant Manager for Environmental Quality at Savannah River, over half the cost savings that the contractor achieved in environmental restoration in 1997 came from the use of innovative technologies, and he believes that the cost savings measure has worked the best in providing incentives for using innovative technologies. At Lawrence Livermore National Laboratory, which participates in a number of OST technology projects, the contractor’s performance measures address both using innovative technologies in the laboratory’s cleanup activities and supporting their use at other sites. OST’s performance will also be measured on the basis of technology development and related deployment. For example, OST’s performance goals for fiscal year 1998 include demonstrating 35 new technologies; finishing the development of 40 “alternative” technologies; and, along with the cleanup sites, taking responsibility for the 49 deployments of technology to be used in waste cleanup projects. According to the Acting Deputy Assistant Secretary for OST, several additional performance measures are under consideration for OST’s focus areas to help ensure that the technologies still in development are “deployable” when they are completed. These measures include whether the focus areas’ projects address high-priority technology needs and whether end users consider the technologies under development to be viable solutions to their needs. In a June 1998 meeting, EMAB presented its analysis, prepared at EM’s request, of how EM should improve performance measures for technology development and deployment. Among other things, EMAB emphasized that the use of performance measures must be supported by EM’s leadership and that performance measures for EM’s technology research, development, and deployment must be integrated with similar measures for site cleanup programs. 
EMAB also suggested that EM’s Technology Acceleration Committee review and improve existing research and development performance measures. The Acting Deputy Assistant Secretary for OST told us that EMAB’s advice would be considered in designing additional performance measures for OST’s focus areas. As of September 1998, EM was still in the process of identifying and improving performance measures to help ensure that cost-effective innovative technologies are used for waste cleanup. EM has established a mechanism—a user steering committee for each of OST’s focus areas—to engage technology users in setting overall plans and priorities for the work of the focus areas. The committees include the senior managers of DOE field sites (such as sites with tank waste for the Tanks Focus Area) and headquarters officials appropriate to the focus area. These committees are to work on budgeting, planning, and setting directions for the R&D investments of the focus areas. The committees are modeled after the practice of the Tanks Focus Area, which set up such a committee in 1996. The committees for the other focus areas began organizing in February 1998. Among other things, user steering committees will help focus areas develop their multiyear program plans. OST is initiating these 5-year plans to manage and measure focus areas’ performance under the requirements of the Government Performance and Results Act of 1993. OST plans to complete the first set of plans by December 31, 1998, and to develop the plans annually for the upcoming 5 years. In addition, at their meetings in the spring of 1998, the user steering committees provided input to the focus areas’ proposed fiscal year 2000 budgets. While the EM and OST initiatives have begun to address internal barriers to the deployment of innovative technologies, continued attention by EM’s upper management to deployment is not ensured. 
The attention may not continue because (1) the future of the Technology Acceleration Committee is uncertain and (2) deployment measures have not been included in the contracts of EM’s senior managers. In response to the July 1997 memo by EM’s Assistant Secretary, the Technology Acceleration Committee, composed of senior-level managers from EM headquarters and the field, was organized and met in September 1997. This Committee’s purpose is to “provide corporate leadership to ensure an aggressive effort to deploy alternative and more effective technologies through full integration of the technology development and user organizations.” According to the Committee’s draft charter, it would meet at least once per quarter. The Committee met again in January 1998, but has not met since. According to the Acting Deputy Assistant Secretary of OST, the Committee has been inactive because it reported directly to EM’s Assistant Secretary, who left the Department in January 1998. According to the Acting Assistant Secretary for EM, EM is considering establishing a broader executive committee of senior managers to address EM issues, including the deployment of innovative technologies. To date, the Technology Acceleration Committee has increased communication among OST, EM line offices, and field offices. It has discussed issues such as clarifying deployment responsibilities, involving technology users throughout the technology development process, and improving incentives for contractors. The Committee also directed the establishment of user steering committees for focus areas. Because the user steering committees have members from EM’s headquarters and field offices, we believe that the existence of the Technology Acceleration Committee facilitated this innovation. The Acting Deputy Assistant Secretary of OST agreed with the importance of the Committee but thought that a broader executive committee of senior officials could address technology deployment and other EM issues. 
Even with these improvements, unresolved issues affecting technology deployment still exist and could benefit from the attention of EM’s upper management. As noted above, EMAB suggested that the Technology Acceleration Committee review and improve R&D performance measures. In addition, the site-specific deployment plans state that a number of issues need to be resolved, such as learning the possible effects of EM’s increased use of fixed-price contracting and private financing (referred to as “privatization”) on the use of innovative technologies. For example, the deployment plan of the Ohio field office raises privatization as a policy issue requiring guidance from headquarters, stating that most fixed-price bidders will use technologies with which they are familiar. As a result, the plan states, technologies that were developed at considerable expense may not be deployed because of bidders’ reluctance to assume a risk of failure. In our visits to field sites, we observed instances in which the use of OST-developed technologies was uncertain because EM planned to solicit fixed-price bids for cleanup work and the technology selected would depend on the choice of the private firm winning the contract. For instance, the Houdini robot was designed for retrieving radioactive wastes from silos at the Fernald site. However, when EM decided to solicit fixed-price bids for waste retrieval from Fernald’s silos, the Houdini robot was instead used in the radioactive waste tanks at Oak Ridge. Fernald had not yet received bids at the time of our visit, and environmental remediation officials told us that the companies bidding for this work would define which waste retrieval tools they would use—Houdini might or might not be included. 
In his July 1997 memo, the Assistant Secretary for EM stated that, beginning in October 1997, performance expectations for EM’s senior managers in headquarters and the field would be developed to require the deployment of alternative and more effective technologies. However, the Acting Deputy Assistant Secretary for OST, in response to our written inquiry to EM management, stated that technology-related performance measures would not be included in senior managers’ performance contracts and that senior managers are held responsible for technology deployment through their membership in the Technology Acceleration Committee. Yet, as noted above, this Committee has not met since January 1998, and its future is uncertain. EM’s and OST’s current efforts and initiatives only partially address the internal obstacles to deployment that were discussed in chapter 3. Specifically, the new initiatives do not reinforce the need for OST’s focus areas to use the technology development gates system and do not provide for OST’s deployment assistance to help sites select new waste-cleanup technologies or modify existing technologies for site use. Although EM’s initiatives involve users in setting the overall plans and priorities of OST’s focus areas, they do not fully address the need for detailed user input on individual technology projects. The Acting Deputy Assistant Secretary for OST told us that the focus areas need to use OST’s existing gates system to obtain user input into the design and development of cleanup technology. Furthermore, he said that it is necessary to use this system to help prevent the development of technologies that do not meet sites’ needs, a problem discussed in chapter 3. 
However, in contrast to these statements of support for the gates system, we found that EM’s new initiatives neither require its use nor identify an alternative means to ensure that technology developers and users communicate and cooperate on individual technology development projects. EM and OST initiatives have not fully addressed two areas that must be considered when deploying innovative technologies: (1) providing technical assistance to sites on innovative technologies and (2) modifying completed technologies for use at specific sites. One potential vehicle for providing deployment assistance—OST’s new Accelerated Site Technology Deployment program—has not increased technical assistance in most cases and did not have the benefit of information that EM now has and could use to improve its priority setting for deployment assistance. EM and OST have not yet identified sources of expertise and procedures or developed a policy for routinely providing technical assistance on innovative technologies to DOE sites. OST recognizes that focus areas should more frequently provide technical assistance to sites when they are selecting and beginning to implement technologies and that this assistance should address innovative technologies developed by other sources as well as by OST. EMAB has questioned whether the focus areas currently have the expertise needed to provide such assistance. The Acting Deputy Assistant Secretary for OST acknowledged that the focus areas vary in their degree of expertise and ability to provide technical assistance. He noted that the Tanks Focus Area works closely with one of the national laboratories, which can provide in-depth expertise, and stated that the other focus areas need to develop a roster of technical experts who can be consulted on particular site cleanup problems that the focus areas cannot solve. 
Furthermore, the Acting Deputy Assistant Secretary stated that performance measures that encourage focus areas to provide technical assistance will be needed. Some initial steps have been taken to involve OST in selecting technology for environmental restoration sites. In fiscal year 1998, the Office of Environmental Restoration began including OST in its processes for providing sites with information and support for technology selection decisions. OST is contributing funding and the technical support of its focus area staff to this program. However, OST does not have a similar involvement with EM’s Office of Waste Management or Office of Nuclear Material and Facility Stabilization. EM lacks a policy on whether OST should provide technical assistance for major cleanup actions routinely or only if requested by a site. While the management-level Technology Acceleration Committee reached an “understanding” that the focus areas’ role should include technical support to end users for deployment, the Committee did not identify resources, procedures, or policies for such technical assistance. According to the Acting Deputy Assistant Secretary for OST, policies and procedures for providing technical assistance will be one of the elements addressed in the business system redesign currently under way in OST, and procedures may be completed by the end of 1998. The initiatives do not address a barrier to deployment that we discussed in chapter 3—the lack of a mechanism and resources for modifying completed technologies for use at specific sites. In fact, none of the initiatives, action plans, or meetings of the Technology Acceleration Committee even raise this issue. Officials at three of the five sites we visited told us that OST sometimes considers its technology development work completed before technologies are ready for specific applications in the field. 
The Acting Deputy Assistant Secretary for OST agreed that this is a problem and told us that, while the Tanks Focus Area develops technologies fully to the point of use, technologies from the other focus areas were not always ready for field use. For example, he stated that the Mixed Waste Focus Area had not tested its thermal treatment technologies on actual radioactive waste. The Acting Deputy Assistant Secretary stated that sites and focus areas should work together to enable and jointly fund the first use of an OST-developed technology. While joint OST and site support for deployment has occurred for some projects—including the reactor safe-storage project at Hanford and the use of the Houdini robot in tanks at Oak Ridge that are described in chapter 3—EM lacks an overall policy, procedure, and designation of responsibilities for situations in which OST-developed technologies may require modification for site use. Nor has EM identified resources for this purpose, except to the extent that some projects under the Accelerated Site Technology Deployment program may address this need. According to the Director of OST’s Office of Technology Systems, focus areas consider any funding needs for technology modifications when requested by sites. He noted that such requests would compete for limited funding with the focus areas’ technology projects. EM has data that could be used to identify OST technologies that might have additional cost-effective deployments. Sites’ Accelerating Cleanup plans, issued in draft in June 1997 and most recently submitted in June 1998, provide a comprehensive compilation of sites’ technology needs, as well as detailed information on each cleanup project across the DOE complex. OST has developed a database, called a linkage table, that identifies links between its completed and ongoing projects and the sites’ technology needs. 
EM could identify OST-developed technologies that could provide cost-effective solutions to sites’ needs and set priorities for deployment assistance to cleanup projects, including technical assistance and technology modifications, if needed. OST’s Deactivation and Decommissioning Focus Area has already used this database to contact potential technology users at the sites and inquire whether the focus area can provide assistance. However, OST has not required its focus areas to do this. One potential vehicle for deployment assistance is OST’s Accelerated Site Technology Deployment (ASTD) program, begun in fiscal year 1998. OST funded 14 ASTD projects at 12 sites to deploy innovative technologies in cleanup projects. The approximately $26 million that OST provided to site projects in fiscal year 1998 resulted in an additional investment of about $708 million from the sites over the life of the projects. OST identified potential ASTD projects through site proposals and competitively evaluated the proposals to select projects to fund. Selection criteria included the technical merit of the approach, interest in deploying the technologies at multiple locations, and commitment of additional funding by the site. While ASTD may be helping these selected projects in addressing obstacles to deployment, the program has not fostered interaction among technology developers and users in many instances. For example, we found that OST’s focus areas provided technical assistance to only 5 of the 14 ASTD projects, and national laboratory personnel who had helped to develop some of the technologies provided technical assistance to 2 additional ASTD projects. It should also be noted that technical assistance and technology modifications on a smaller scale than the current ASTD projects may be appropriate in some cases. 
Having spent more than $2 billion and 9 years on over 700 innovative cleanup technology projects, EM and OST recognize that the cleanup program can benefit from these efforts only if the innovative technologies that have been developed are successfully deployed. To promote deployment, EM and OST have initiated a number of actions aimed at improving the relationship between technology developers in OST and the users at EM’s cleanup sites. However, we are concerned that the committees and processes that EM and OST are now creating will be ineffective if they are not accompanied by more fundamental changes in how EM conducts technology development and deployment. We believe that EM and OST need to take three relatively straightforward actions to increase the deployment of existing innovative technologies. First, OST must make sure that it has adequate technical expertise to assist users in evaluating and implementing innovative technologies that it and others have developed. The focus areas are the logical source for this expertise; however, if they are unable to meet this need, other centers of expertise, possibly in the national laboratories, need to be developed. Second, we continue to believe that OST staff, equipped with the appropriate expertise, need to be formally involved in evaluating and selecting technologies for use at the cleanup sites. We believe that the program’s experience has shown that without a specific requirement to bridge the gap between developers and users, each party will continue to operate in its own environment, with users deploying only those technologies with which they are familiar, and OST developing technologies that are generic and not designed for specific situations. Third, existing innovative technologies could be implemented, as we found repeatedly, if they could be modified or fine-tuned to address a specific site cleanup problem. 
Information now exists from sites’ Accelerating Cleanup plans and OST’s linkage tables to identify technologies that can be modified to fit specific situations. However, such modification takes money, and without specific action by EM management, neither users nor developers are likely to provide these funds on their own. For example, if OST uses its funds to fine-tune an existing technology, it is reducing the funds available for its other missions. Similarly, users can logically view the use of their funds to modify a technology as taking away resources that they need for other cleanups. However, EM’s experience, for example, from the project for safe storage of the C reactor at the Hanford site or from the ASTD program, has shown that successful deployment can occur if both parties make a financial commitment. Additional technology development will be needed to address technology problems for which no cost-effective solution exists, such as high-level waste tanks at Hanford. To ensure the deployment of technologies that are currently under development or will be developed, EM does not need additional processes and procedures. Rather, it needs to rigorously and consistently apply its current gates system. Consistent use of this system by focus areas would help ensure that technology developers and users communicate and cooperate throughout the development of individual technologies, and that, if technologies are not living up to their potential or there is not adequate commitment from users, the project can be terminated and the funds redirected to more productive uses. Ensuring that these actions are taken consistently will require the commitment of top management in the EM program. The Technology Acceleration Committee is a sound idea; however, it has already missed a planned meeting, and we are concerned that it could easily slip into disuse. 
We believe that continuing a committee of senior EM managers is a key element in ensuring that top management is focused on formulating policy for technology deployment. An additional important element is the establishment of performance measures that hold EM’s top managers accountable for technology deployment. While EM has made clear to field managers that they are responsible for deploying innovative technologies, this commitment needs to be reflected throughout the organization if additional innovative technologies are to be successfully deployed. Finally, with an increased emphasis on deployment, EM will need more accurate data than it currently has on deployment efforts. A verification effort similar to the one we undertook will be needed to provide valid data on future deployments. On the other hand, we recognize that improving data on prior deployments may not be cost-effective. Therefore, reporting existing data as estimates could lend more credibility to the data and the overall program. In addition, EM has recognized that deployment is not the only relevant measure of success in technology development. EM’s recent efforts to develop additional performance measures for the entire program are a step in the right direction. 
To increase the deployment of existing technologies and ensure that technologies developed in the future are used, we recommend that the Secretary of Energy direct the Assistant Secretary for Environmental Management to: direct the Deputy Assistant Secretary for the Office of Science and Technology to establish centers of expertise for innovative technologies, using existing focus areas or another approach if needed, and require that a representative from one of these centers participate in the technology selection process on each cleanup project; direct the cleanup programs and OST to (1) use existing data to identify OST-developed technologies that can be cost-effectively modified to meet sites’ needs and (2) identify funds to make these modifications if needed; direct that the gates system be used rigorously and consistently as a decision-making tool for managing technology development projects and as a vehicle for increasing developer-user cooperation; use annual performance expectations to hold EM headquarters managers responsible for increasing the deployment of innovative technology; and implement a system to verify the accuracy of future deployment data and label as estimates any existing data that have not been verified. Overall, DOE agreed with the recommendations in our report. In doing so, DOE offered information regarding actions it had taken or intended to take that it believed were responsive to our recommendations. However, DOE’s responses to two of the recommendations suggest that the actions described would not be fully responsive. DOE’s comments are included as appendix III. In response to our recommendation that OST establish centers of expertise and include a representative from one of these centers in the technology selection process, DOE indicated a willingness to act on our recommendation but offered few specifics, especially with respect to involving OST in the technology selection process. 
In 1994, we also recommended that OST be given a formal role in the technology selection process. During our current review, we found that this recommendation had not been implemented primarily because site officials were skeptical about OST’s ability to provide quality technical advice and were therefore reluctant to allow OST more of a role in selecting cleanup technologies. We believe that it will take more specific actions by OST, beyond the generalized user steering committees cited in its response, to develop credible expertise and thus gain a role in the technology selection process. In response to our recommendation that DOE rigorously and consistently use the gates system as a decision-making tool for managing technology development, DOE also agreed with the recommendation but noted that it had incorporated the gates system into its system of peer review. While we recognize the value of peer review as a mechanism for obtaining independent technical judgments about projects OST is pursuing, we note that peer review can occur infrequently over the life of a project and after significant decisions are made. Therefore, we do not believe that peer review is a substitute for focus area managers using a disciplined decision-making system that involves users throughout the technology development and deployment process.
Pursuant to a congressional request, GAO reviewed the Department of Energy's (DOE) Office of Environmental Management's (EM) efforts to deploy innovative cleanup technologies, focusing on: (1) the extent to which innovative technologies developed by the Office of Science and Technology (OST) have been deployed at DOE sites and how this rate of deployment compares with the rates of other government organizations that develop environmental technologies; (2) what progress EM has made in overcoming obstacles to deploying innovative technologies at DOE cleanup sites; and (3) what EM is doing to increase the deployment of innovative technologies. GAO noted that: (1) OST has initiated 713 technology development projects and has reported that 152 projects have been deployed one or more times, for an overall deployment rate of 21 percent; (2) GAO found many errors in the office's deployment data and estimates that EM has deployed between 88 and 130 of these projects, for an overall deployment rate of 12 to 18 percent; (3) OST overstated its deployment information because it had not previously maintained comprehensive deployment data; compiled the data rapidly in response to congressional requests; and lacked procedures for compiling the data; (4) in comparison with the deployment rates of other programs that demonstrate environmental technologies--the Environmental Protection Agency's Superfund Innovative Technology Evaluation Program and the Department of Defense's Environmental Security Technology Certification Program--OST's deployment rate for projects at comparable stages of development falls between the rates of these two programs; (5) however, comparisons of OST's deployment rate with the rates of other organizations' programs must be viewed with caution because no organization is fully comparable with OST, and the deployment rate is not the only possible measure of success for research and development programs; (6) as DOE's Environmental Management program has 
matured, its waste cleanup sites have made progress in overcoming some obstacles to implementing innovative technologies; (7) other obstacles that are internal to the operations of EM and its OST continue to slow the use of innovative technologies, including the lack of: (a) involvement by technology users in the development of cleanup technologies by OST; and (b) technical assistance by OST to help sites select and implement technologies; (8) after congressional hearings in May 1997, EM initiated changes in its organization and processes to increase the deployment of innovative technologies; (9) some of these initiatives address the internal obstacles limiting deployment; (10) however, the office has not yet improved developer-user cooperation in individual technology development projects; (11) OST does not consistently and rigorously use its existing decisionmaking process for managing the development of innovative technologies; and (12) EM has yet to determine how it will: (a) provide technical assistance to sites in selecting and implementing innovative technologies; and (b) make modifications to completed technologies to meet sites' specific needs and conditions.
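The deployment rates summarized above follow directly from the project counts; a quick arithmetic check (the function name is illustrative, not from the report):

```python
def deployment_rate_percent(projects_deployed, projects_total):
    """Share of technology projects deployed at least once,
    rounded to the nearest whole percent."""
    return round(100 * projects_deployed / projects_total)

# OST-reported figure: 152 of 713 projects deployed, about 21 percent
assert deployment_rate_percent(152, 713) == 21
# GAO's verified range: 88 to 130 projects, or 12 to 18 percent
assert deployment_rate_percent(88, 713) == 12
assert deployment_rate_percent(130, 713) == 18
```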
The Air Force depot maintenance activity group supports combat readiness by providing depot repair services necessary to keep Air Force units operating worldwide. The group generates between $5 billion and $6 billion in annual revenue principally by repairing and overhauling a wide range of assets including aircraft, missiles, aircraft engines, software, and exchangeable inventory items for military services, other government agencies, and foreign governments. The group performs this work in-house at its depots or through contracts with private industry or other government agencies. The group operates under the working capital fund concept, where customers are to be charged the anticipated actual costs of providing goods and services to them. Customers place orders with the Air Force depot maintenance activity group. When the activity group accepts the order, the customer’s funds are obligated. The customer uses the activity group as its purchasing agent when it needs a contractor to perform depot-level maintenance work. The activity group awards the contract and manages the work performed by the contractor. The contract portion of the depot maintenance activity group generates between $2 billion and $3 billion in annual revenue. In accomplishing this work, the Air Force has about 5,000 contracts with about 750 contractors that are located in the United States as well as overseas. The Air Force air logistics centers use the contract depot maintenance production and cost system (known as G072D) as a means of combining financial and production data for the management of work that is being performed by contractors. The Air Force has also established procedures and internal controls for the contract portion of the depot maintenance activity group, which are described in two Air Force Materiel Command Instructions. 
Command Instruction 21-113 discusses the contract maintenance program for the depot maintenance activity group and Command Instruction 21-134 discusses the end item transaction reporting system (known as G009) and the reporting procedures for contractors. Some of the procedures and controls in these two instructions follow. Contracts can be awarded for a 12-month period anytime during the year. All the items to be repaired will be funded from the appropriation of the initial fiscal year. However, work must be started on at least one item during the initial fiscal year for the entire job cost to be properly charged to appropriated funds for that year. At a minimum, assets planned to be sent to contractors for repair should be reviewed quarterly. If the assets are not received by the contractors and will not be received within a reasonable amount of time (60 days), the planned quantities to be repaired and related obligated dollars must be reduced accordingly and the contract amended, if necessary. Contractors are required to report, at least monthly, on the status of the assets being repaired, such as when the (1) assets were received, (2) assets were inducted for repair, and (3) work was completed on the assets. The production management specialists at the air logistics centers are responsible for ensuring that the information provided by the contractors is accurate. A review of the contract maintenance ledger produced from the production and cost system must be performed quarterly. Particular attention should be directed to (1) contractors beginning work on assets compared to the plan and (2) contractors completing work on assets compared to the plan. Any questionable information must be annotated and reviewed and corrections made prior to the next monthly processing cycle. Carryover is the dollar value of work that has been ordered and funded (obligated) by customers but not yet completed by working capital fund activities at the end of the fiscal year. 
Carryover consists of both the unfinished portion of work started but not yet completed and requested work that has not yet commenced. To manage carryover, DOD converts the dollar amount of carryover to months of work. This is done to put the magnitude of the carryover in proper perspective. For example, if an activity group performed $100 million of work in a year and had $100 million in carryover at year-end, it would have 12 months of carryover. However, if another activity group performed $400 million of work in a year and had $100 million in carryover at year-end, this group would have 3 months of carryover. A DOD regulation allows for some carryover at fiscal year-end so that working capital funds can operate efficiently and effectively. In 1996, DOD established a 3-month carryover standard for all working capital fund activities except the contract portion of the Air Force depot maintenance activity group. The Air Force is the only military service that includes its contract depot maintenance operation in its working capital fund. To reflect this difference, DOD established a 4.5-month carryover standard to account for the additional administrative functions associated with awarding contracts. In May 2001, we reported that DOD did not have a basis for its carryover standard and recommended that Defense determine the appropriate carryover standard for the depot maintenance, ordnance, and research and development activity groups. DOD is in the process of assessing its carryover standards. Too little carryover could result in some depot maintenance activities not having work to perform at the beginning of the fiscal year, resulting in the inefficient use of personnel. On the other hand, too much carryover could result in an activity group receiving funds from customers in one fiscal year but not performing the work until well into the next fiscal year or subsequent years. 
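The conversion described above is simple arithmetic. The sketch below (function names are my own, not from any Air Force or DOD system) reproduces the report's two examples and adds a check of a carryover balance against a monthly standard:

```python
def months_of_carryover(carryover_dollars, annual_revenue):
    """Convert a year-end carryover balance to months of work."""
    return carryover_dollars * 12 / annual_revenue

def excess_over_standard(carryover_dollars, annual_revenue, standard_months=4.5):
    """Dollar amount of carryover above the allowed standard (0 if within it)."""
    allowed = annual_revenue * standard_months / 12
    return max(carryover_dollars - allowed, 0.0)

# The report's examples: $100 million of carryover on $100 million of
# annual work is 12 months; the same balance on $400 million of annual
# work is only 3 months.
print(months_of_carryover(100e6, 100e6))  # 12.0
print(months_of_carryover(100e6, 400e6))  # 3.0
```

The same group can thus hold the same dollar balance and be either far over or comfortably under the standard, which is why DOD expresses the standard in months rather than dollars.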
By minimizing the amount of carryover, DOD can use its resources most effectively and minimize the “banking” of funds for work and programs to be performed in subsequent years. In February 2002, the Air Force began to consider financing the contract portion of the depot maintenance activity group with direct appropriations. In an April 19, 2002, memorandum, the Air Force stated that the overall financial health of the depot maintenance activity group has been negatively affected by the contract operations. Further, without direct control over contractor costs, the working capital fund mechanism is an inappropriate choice for the contract operations. The memorandum directed the Air Force Materiel Command to begin planning immediately for the transition of contract depot maintenance operations out of the working capital fund. This would be a significant change in the financing and accounting for these contracts. Under the plan, contracts would be financed with direct appropriations, which is how the Army and Navy finance contract depot maintenance work, and carryover would no longer be associated with the work being performed by the contractor. Instead, funds would be managed in terms of the percent of funds obligated and expensed during a fiscal year. Further, the Air Force plans to use existing direct appropriation fund systems to track repairs and account for the funds and would not use its current working capital fund systems. The lack of accurate carryover information leaves the Congress and DOD officials without the information they need to oversee and manage the repair of assets. Air Force reports show that the contract portion of the depot maintenance activity group exceeded the 4.5-month carryover standard at the end of fiscal years 2000 and 2001 by about $44 million and $134 million, respectively, thereby resulting in more funds being provided than the DOD carryover standard allows. 
However, we found that the reported carryover balance did not accurately reflect the amount of unfinished work on hand at the end of fiscal year 2000 due to (1) faulty assumptions used in calculating work-in-process and (2) records not accurately reflecting work that was actually completed by year-end. As a result, the amount of carryover reported by the Air Force was understated by tens of millions of dollars. Air Force reports show that the contract portion of the depot maintenance activity group exceeded the 4.5-month carryover standard at the end of fiscal years 2000 and 2001. The Air Force reported that it had about $835 million, which is 4.7 months, of carryover at the end of fiscal year 2000, and about $1.1 billion, which is 5.1 months, at the end of fiscal year 2001. In the past, the Office of the Under Secretary of Defense (Comptroller) and/or the congressional defense committees used carryover information to determine whether the working capital fund activity groups had too much carryover. For example, the Congress reduced the Army’s and Air Force’s fiscal year 2001 Operation and Maintenance appropriations by $40.5 million and $52.2 million, respectively, because the depot maintenance operations in their working capital funds had too much carryover. Similarly, in 2001, the Under Secretary of Defense (Comptroller) reduced the Air Force’s fiscal year 2003 customers’ budget requests by $185 million because the contract portion of the depot maintenance activity group would have too much carryover at the end of fiscal year 2003. As stated previously, carryover is the amount of unfilled orders less the amount of work-in-process. We found that the Air Force does not have actual information on the amount of work-in-process performed by contractors and, therefore, uses a formula to estimate the amount based on the assumption that the contractor will start and complete work as planned. 
However, the assumptions were faulty because the contractors did not always start and/or complete the work as planned. Using its formula, the Air Force reduced the amount of unfilled orders due to work-in-process by about $1 billion, which is 5.6 months, and $835 million, which is 4.1 months, in fiscal year 2000 and fiscal year 2001, respectively, to determine the amount of carryover for these 2 years. The amount of work-in-process recorded monthly is determined by the nature of the work, the estimated or actual start date of the work, and the expected time to complete the work. For work that is planned to be completed in less than 150 days, the Air Force assumes that one-fifth of the work will be completed each month and records work-in-process accordingly. The calculation for the different workload categories is outlined below. For workload categories involving exchangeable inventory items, other major end items, and software, the amount of work-in-process is based on when the work is planned to begin and assumes that the work will be completed within 5 months. Thus, the contractor does not have to begin actual work, and the items to be repaired do not even have to be at the contractors’ plant in order to record work-in-process on those specific orders. For workload categories involving aircraft, engines, and missiles, the amount of work-in-process is based on when the work actually begins at the contractor’s plant and assumes that the work will be completed within 5 months from that point in time. For work planned to be completed in more than 150 days, the Air Force has a different calculation to determine the amount of work-in-process. The calculation for the different workload categories is outlined below. 
For workload categories involving exchangeable inventory items, other major end items, and software, the amount of work-in-process is based on when the work is planned to begin and assumes that the work will be completed in the estimated number of days as planned. For example, if the Air Force estimates that the work will be completed in 1 year, it will record one-twelfth of the amount of the order as work-in-process each month. The contractor does not have to begin actual work in order to start recording work-in-process. For workload categories involving aircraft, engines, and missiles, the amount of work-in-process is based on when the work actually begins at the contractor’s plant and assumes that the work will be completed in the estimated number of days as planned. For fiscal years 2000 and 2001, the amount of reported work-in-process had a significant impact on the amount of carryover, reducing each fiscal year’s carryover by at least $835 million. Table 1 shows the actual reported year-end unfilled orders, work-in-process, and carryover, in dollars and months, for fiscal year 2000 and fiscal year 2001. It also shows the amount of carryover in excess of the 4.5-month standard. According to Air Force Materiel Command officials, the primary reason that they exceeded the 4.5-month standard for fiscal year 2001 was the receipt of a large amount of orders late in the fiscal year. Specifically, actual customer orders exceeded planned customer orders by $311 million for fiscal year 2001, with $292 million of that amount received in August 2001. Large quantities of orders placed late in the fiscal year leave the Air Force little opportunity to perform the work by the end of the fiscal year. Air Force officials also stated that the current systems used by contract depot maintenance cannot produce a reliable work-in-process amount. 
They further stated that the assumptions used for calculating work-in-process do not provide an accurate work-in-process amount, particularly the assumption that the work will begin as planned. Air Force officials told us, and we agree, that a more accurate way to calculate work-in-process would be to eliminate the assumption that the contractor will start work as planned and base all work-in-process calculations on when the contractor actually starts work. The officials said making such a change to the calculation would provide a financial incentive for contract depot maintenance to ensure that data on when the work actually started is entered into the system in a timely manner. The incentive to do so stems from the fact that contract depot maintenance bills customers based on the work-in-process amount that is recorded in the production and cost system. If work-in-process is based on when the contractor actually starts work, the depot maintenance activity group cannot bill customers until the date that the work actually started is recorded in the system. As previously discussed, because the air logistics centers use the production and cost system to manage the work performed by contractors, it is critical that the unfilled order data be entered into the system in an accurate and timely manner. The data in this system are also used in the Air Force’s budget process and are the basis for determining the amount of carryover, which is reported to the Congress each fiscal year. However, we found that much of the unfilled order data in the system was inaccurate or incomplete because the production management specialists, who are primarily responsible for data accuracy, did not always (1) ensure that contractor production data in the system were correct or (2) enter contract information for new customer orders into the production and cost system in a timely manner, as the following two examples illustrate. 
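The work-in-process estimating rules described above reduce to a simple proration. The sketch below is my own simplified reading of those rules, not the actual G072D logic, and the function and parameter names are hypothetical:

```python
def work_in_process(order_value, estimated_days, months_elapsed):
    """Simplified sketch of the Air Force work-in-process estimate.

    Work planned for under 150 days is assumed to finish in 5 months,
    so one-fifth of the order value is recorded each month; longer work
    is spread evenly over its planned duration (one-twelfth per month
    for a 1-year job). months_elapsed counts from the *planned* start
    date for exchangeables, other end items, and software, but from the
    *actual* start date for aircraft, engines, and missiles.
    """
    planned_months = 5 if estimated_days < 150 else estimated_days / 30
    fraction = min(months_elapsed / planned_months, 1.0)
    return order_value * fraction

def carryover(unfilled_orders, wip):
    """Carryover is unfilled orders less estimated work-in-process."""
    return unfilled_orders - wip
```

Because the clock runs from a planned date for most workload categories, work-in-process accrues even when the contractor has not yet received the items, which is the faulty assumption the officials described.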
Based on our analysis of a stratified random sample of unfilled maintenance requirements at the end of fiscal year 2000, we estimate that $256 million of the work was actually completed but not reflected in the system because the production management specialists did not ensure that the data were correct. When contract depot maintenance receives a customer order for work, it enters into a contractual relationship for the performance of the work and then records information on the contract in the production and cost system. Any customer order for which there is no contractual information in the system is referred to as “unscheduled” work. We found that as of September 30, 2000, contract depot maintenance had at least $59.9 million of unscheduled work for which the contracts were awarded but the contract information was not recorded in the system. Our analysis of the $59.9 million showed that $8.6 million was not entered into the system until 3 to 5 months after the contracts were awarded, while about $15 million was not entered for 6 months or longer. For example, in one case, an order for $3.6 million was not entered in the system until 7 months after the contract was awarded. In another case, an order for $802,000 was not entered into the system until 20 months after the contract was awarded. In both cases, a lack of oversight by production management specialists, attributed to either heavy workload or inexperience, was cited as the reason for not entering the data in a timely manner. Without the contract data in the system, there was no information in the system for managing repair actions and monitoring the status of the contracts. 
We found that (1) in some cases, the production management specialists were not following the regulations regarding data accuracy, (2) in other cases, the production management specialists did not know the correct treatment for recording data accurately, and (3) standard operating procedures did not exist to provide the production management specialists with detailed instructions on their responsibilities for data accuracy. Air logistics center officials also told us that production management specialists need training that is specific to their day-to-day responsibilities and that such training would enhance the production management specialists’ awareness of the importance of data accuracy. Air Force Materiel Command officials stated that there is a data discipline problem centered on the production management specialists not ensuring that the data in the production and cost system are correct and up to date. The officials attributed this problem, in part, to the lack of clear guidance and detailed operating procedures on how the production management specialists should perform their day-to-day responsibilities. The officials further told us that there is a lack of internal controls or processes to ensure data accuracy, such as the use of metrics that could act as “red flags” to alert management to possible data problems. When we discussed the data accuracy problem with Air Force headquarters officials, they told us that a contributing factor was the disruption to operations when the Air Force hired about 150 new production management specialists in fiscal year 2000 because of the closing of two air logistics centers and the transfer of oversight of their contracts to the remaining centers. 
Since 1996, the Air Force has recognized the need to improve the reliability of the data in the production and cost system and, until February 2002, was developing a new system, the Contract Maintenance Accounting and Production System—known as G501—to accomplish this. According to Air Force officials, implementing the new system would have helped alleviate the type of data problems we found because it was to be a single, fully integrated, real-time, web-based system, which, among other things, would have streamlined contractor reporting of production data. The Air Force had planned to implement the new system at the three air logistics centers and at approximately 900 contractor facilities. The development of the new system initially started in 1996 as an effort to redesign the existing production and cost system. It was later decided that this system and two other legacy systems that currently perform production and accounting functions for the contract portion of the depot maintenance activity group needed to be replaced since they interacted with each other. Thus, in September 1999, a contract was awarded for the development project with an estimated completion date of December 2001, which was later revised to fiscal year 2005. In February 2002, the Air Force began to consider no longer financing the contract portion of the depot maintenance activity group through the working capital fund. As a result, the Air Force stopped work on developing the new system after spending about $7.8 million. The Air Force plans to use other systems to perform the production and accounting functions. Our analysis of about $1.6 billion of reported unfilled orders showed that a substantial amount of the work that the activity group carried over into fiscal year 2001 was work that it had planned to complete prior to the end of fiscal year 2000 but did not because of logistical and production problems. 
Specifically, we estimated that about $530 million of work was not completed for two key reasons. First, repairs took longer than planned primarily because (1) parts needed to perform the repairs were not available from DOD, (2) more work was needed to repair the assets than originally planned by the Air Force, and (3) contractors had capacity constraints related to personnel, facilities, and equipment. Second, work on some assets was not started as planned because of the delayed induction of items into production at contractors’ facilities. Further, we could not determine the causes for an estimated $191 million of work not being done primarily because Air Force officials could not provide reliable information on the status of contracts that were previously managed by the two air logistics centers that were closed in fiscal year 2001. In addition, we estimated that about $657 million of the unfilled orders that the activity group carried over into fiscal year 2001 was for work that was planned to be completed in fiscal year 2001. Since this work was expected to be carried over, we classified it as normal carryover. The results of our analysis are summarized in table 2 and discussed in greater detail in the following sections. As shown in table 2, we estimated that about $322 million of the activity group’s unfilled orders as of September 30, 2000, were for work that was scheduled to be completed prior to September 30, but was not completed by then because of longer than expected repair times. As the following examples illustrate, our work showed that the primary causes of these longer than expected repair times were (1) shortages of component parts, (2) unanticipated problems, and (3) contractor capacity constraints. For the exchangeable and engine workload categories, a shortage of component parts was a major cause of untimely repairs. 
For example, a requirement for the repair of a leading-edge aircraft part was placed on a contract in December 1999 for a unit sales price of $39,268 and with an expected completion date of June 2000. In January 2000, the contractor inspected the item and determined that defective seals would have to be replaced. Because the seals were government-furnished material, the contractor submitted a requisition to the Defense Logistics Agency. When the seals had not arrived by the expected delivery date (May 2000), the contractor requisitioned them again and, when the Defense Logistics Agency subsequently advised the contractor that the seals were not available, the contractor requested and was granted permission to manufacture them. As of November 2001, the projected completion date for the manufacture of the seals was March 2002, and the leading-edge aircraft part was expected to be repaired and available for shipment to the customer almost immediately after that—about 21 months longer than expected. The Air Force Materiel Command recently completed a study of this long-standing and well-documented problem that the Air Force refers to as “awaiting parts.” Additionally, it developed an action plan to correct some of the underlying causes of the awaiting parts problem that were identified in the study. The scope of both the study and the action plan was limited to depot maintenance work that is performed in-house at the three air logistics centers and did not cover the contract portion of the activity group. Air Force Materiel Command officials have acknowledged that contract depot maintenance has unique awaiting parts problems because the contract portion of this activity group uses different systems than the in-house portion of the group. 
They indicated that the plan to remove contract depot maintenance operations from the Air Force Working Capital Fund and to discontinue, as previously discussed, the development of the new production and cost system have caused them to put virtually all contract depot maintenance initiatives on hold. Unanticipated problems were another major cause of repairs not being performed as planned, especially for the aircraft workload category. For example, for the last several years, the contract repair of KC-135 aircraft, which have an average age of about 40 years, has been a large and problematic workload. Specifically, due primarily to unanticipated problems, such as the need for major structural repairs, work on these aircraft has taken much longer than expected to complete. According to data in the production and cost system, as of September 30, 2000, a contractor had not completed work on 11 KC-135 aircraft that were originally scheduled to be completed during fiscal year 2000 and two aircraft that were originally scheduled to be completed during fiscal years 1997 and 1999, respectively. Additionally, as of September 30, 2000, another contractor had not completed work on 16 KC-135 aircraft that were originally scheduled to be completed during fiscal years 1999 and 2000. The magnitude of this problem is illustrated in table 3, which compares the initial and actual repair times for KC-135 aircraft at the second contractor’s facilities during fiscal years 1999 and 2000. Altogether, the 29 KC-135 aircraft that were scheduled to be completed prior to the end of fiscal year 2000, but were not, had an unfilled order value of about $86.6 million. Contractor capacity constraints are a third cause of repair problems. For example, one of the items in the sample was a $1.4 million requirement to repair 15 power supply units at a unit sales price of $94,879 and with an estimated repair time of 45 days. 
All work on this item, which is a component of an electronic warfare system, was scheduled to be completed by June 30, 2000. However, as of September 30, 2000, only eight items had been repaired and, as of September 30, 2001, one item had still not been repaired. The prime contractor attributed the delayed repairs to personnel constraints. Specifically, at one time, there was a steady repair workload for this item, and a subcontractor employed three to four people to work on nothing but this requirement. When the workload declined, the subcontractor released all but one of the employees trained to make the repairs. According to the prime contractor, when an order was received in late 1999, the subcontractor had difficulty finding qualified people to do the work. However, the prime contractor also indicated that the subcontractor has gradually redeveloped a repair capability in this area, is now repairing two items a month, and expects to build up its capability to three a month in the near future. As shown in table 2, we estimated that about $208 million of the activity group’s unfilled orders as of September 30, 2000, were for work that was scheduled to be completed prior to September 30, but was not completed by then because work on the items was not started as planned at contractors’ facilities. A $9.8 million order to repair 24 exchangeable inventory items is an example of a requirement that we placed in the delayed induction category. In this case, the estimated repair time was 90 days, and data in the production and cost system indicated that the contractor was expected to complete work on all 24 items by the end of fiscal year 1999. However, as of the end of fiscal year 2000, the contractor had received only 19 of the inventory items. 
The remaining five inventory items—which had an unfilled customer order value of about $2 million—were not received by the contractor until the third quarter of fiscal year 2001 and we, therefore, included $2 million in the delayed induction category. One of the underlying causes of the activity group’s induction problem is that the Air Force Materiel Command has not established effective internal control procedures to ensure that production management specialists are complying with its policy guidance. For example, Air Force Materiel Command Instruction 21-113 states that, at a minimum, “review of asset generation should be done on a quarterly basis and, if assets will not generate within a reasonable period of time (60 days), the scheduled input quantities and obligated dollars must be reduced accordingly.” However, our analysis showed and the Air Force Materiel Command agreed that there is no systematic process or effective internal controls to ensure that the production management specialists are complying with this guidance. A second cause is that some of the guidance is inconsistent. For example, Air Force Materiel Command Instruction 21-113, “Contract Maintenance Program for Depot Maintenance Activity Group (DMAG),” states that work must be started on at least one asset during the fiscal year that an order is placed. However, the Air Force Logistics Command supplement to Air Force Regulation 170-8 states that the contract depot maintenance activity group has until December 31 to get a customer’s requirement in a contract (January 31 for some requirements). Accordingly, one regulation requires work to be started on an asset before the fiscal year-end, but another regulation does not even require that the contract be awarded until the end of the calendar year. As shown in table 2, we could not determine the cause of the problem for about $191 million of unfilled orders that were scheduled to be completed prior to September 30, 2000, but were not. 
We could not make this determination because production management specialists did not have documentation on the status of the repairs needed to make this determination. The two primary reasons that information was not available were that production management specialists (1) did not have required documentation for many of the contracts that were transferred from the Sacramento and San Antonio Air Logistics Centers to two of the three remaining centers in October 2000 and (2) did not maintain information on the status of software projects. In October 2000, the Sacramento and San Antonio Air Logistics Centers discontinued their contract depot maintenance operations and transferred management responsibility for their contracts to the three remaining centers (Warner Robins, Oklahoma City, and Ogden). As part of this management transfer, the Sacramento and San Antonio centers shipped contract files and related customer order files to the three centers that assumed responsibility for the work. However, in many instances, two of the centers that assumed responsibility for the work either did not receive the required files or received incomplete files. Additionally, for the files they did receive, they found numerous and significant discrepancies between the information in the contract files and related customer files. Discrepancies were also found between these manual records and the data in the production and cost system. As a result of these problems, two of the three remaining centers have had to reconstruct many of the files and reconcile numerous discrepancies. Because the Air Force does not know the status of these contracts, (1) it is potentially vulnerable to paying for goods and services not received or performed and is subject to fraud, waste, and abuse, (2) work may have been accomplished but not recorded in the system, or (3) the Air Force may not be taking prompt and appropriate action to resolve problems that are delaying the completion of the work. 
A contract depot maintenance manager at one of the remaining centers characterized this records reconstruction and reconciliation effort as “overwhelming.” Specifically, he noted that his center had assumed responsibility for 627 contracts (about 20 percent of its total workload), and pointed out that the contracts went back to 1981 and each contract could have as many as 800 line items. The manager also stated that, as of October 2001 (1 year after the transfer), center staff had not even looked at many of the contracts and had unreconciled problems with many of the contracts that they had reviewed. To further illustrate the magnitude of the reconciliation problem, he pointed out that their research thus far had determined that (1) they did not have contracts for $41 million of work that was recorded in the production and cost system and about $3 million in contractor payments, (2) the production and cost system contained no information on 74 contracts with a total contract amount of $12.6 million, and (3) contractors were providing automated production data for less than 15 percent of the transferred contracts. The lack of documentation for contracts that were transferred from a closing facility had resulted in the lack of management oversight. For example, 10 of the sample items, with a total unfilled order value of $4.5 million as of September 30, 2000, were requirements for work on hush houses that were contracted for by one of the closing air logistics centers. Work on several of these hush houses was supposed to be completed prior to the end of fiscal year 2000, and work on all of them was supposed to be completed prior to the end of fiscal year 2001. However, as of December 2001, the air logistics center that assumed management responsibility for the contracts was still trying to determine the status of the work since it had not received the required documentation from the closing center. 
The two centers that have a significant problem in this area have recently dedicated several personnel to the resolution of problems related to the transferred contracts. However, this work is a time-consuming and labor-intensive process, and the Air Force Materiel Command has not established either a milestone for completing the work or a methodology for monitoring progress. Consequently, it is uncertain when the centers will have all of the information they need to manage these transferred contracts. We estimate that $68 million of the unfilled orders in the software workload category of the sample was for work that was scheduled to be completed prior to September 30, 2000, but was not completed as of that date. However, we were unable to determine why the completion of this work was delayed because production management specialists are not required to monitor the status of software projects and did not have the documentation needed to identify problems and determine their underlying causes. For most nonsoftware workloads, the requirement is to repair a specific quantity of items within a specified period of time, and contractors are required to submit automated production reports that show when they (1) receive the items that are to be repaired, (2) start work on the items, (3) complete the repairs, and (4) ship repaired items to customers. Additionally, production management specialists are required to develop schedules that show when work on the items is scheduled to start and when repairs are expected to be completed. If done properly, this approach ensures that production management specialists have the information they need to (1) monitor the status of the work, (2) identify problems, and (3) take prompt corrective action, when appropriate. However, for software workloads, the requirement is not to repair a certain number of items, but rather to accomplish certain tasks. 
For example, the task could be to (1) attend and support any lab, ground, and flight tests performed on a weapon system, (2) analyze test data, or (3) revise weapon system software that does not perform as intended, such as an electronic warfare system that does not perform effectively in high electrostatic environments. As a result, most software workloads are expressed as a level of effort, such as in number of hours or months to be worked. Because production management specialists do not have reliable data on the status of software projects, the “actual” production data that they enter into the production and cost system are estimates that are based on the frequently erroneous assumption that work will begin and be accomplished as planned. This problem, which is similar to the previously discussed problem with the activity group’s work-in-process data, makes the reported value of software unfilled orders highly questionable. Further, because the reported value of software carryover is based on highly questionable estimates for both work-in-process and unfilled orders, it cannot be relied on. The Air Force does not have reliable information on the dollar amount of carryover for its contract depot maintenance operation due to faulty assumptions used in calculating work-in-process and records not accurately reflecting work done at year-end. Until the problems are corrected, congressional and Defense decisionmakers will be forced to make key budget decisions, such as whether or not to enhance or reduce customer budgets, based on unreliable information. In addition, due to logistical and production problems, hundreds of millions of dollars of work was not done as planned and was carried over into the next fiscal year. These problems resulted in idle funds that could have been used for near-term readiness or other priorities. 
For the contract portion of this activity group to operate more effectively, managers at the Air Force Materiel Command and the air logistics centers must be held accountable for (1) the accuracy and timeliness of the production and financial management information used for decision-making and (2) ensuring that the work is completed as planned. Until these weaknesses are resolved, concerns will continue to be raised about the amount of carryover related to the contract portion of this activity group. We recommend that the Secretary of the Air Force direct the Commander, Air Force Materiel Command, to do the following.

- Use the date contractors actually start work, rather than the planned start date, to calculate work-in-process for all workload categories as long as the contract portion of the activity group remains in the working capital fund.

- Improve the accuracy of the data in its systems that track repair actions and account for costs by (1) holding managers accountable for ensuring the accuracy of the data, (2) developing standard operating procedures that provide detailed guidance on production management specialists’ day-to-day responsibilities, particularly in the area of ensuring data accuracy, (3) providing additional training to production management specialists on these procedures, and (4) developing metrics that act as “red flags” to alert management of possible data problems. At a minimum, the systems should provide timely and accurate information on when contractors receive broken items for repair, repair work starts on the items, repairs are completed, and repaired items are returned to customers.

- Develop accurate and complete information on contracts that were awarded by the San Antonio and Sacramento Air Logistics Centers and subsequently transferred to the three remaining centers to avoid loss of control that may result in fraud, waste, and abuse. This will require, at a minimum, (1) establishing milestones for completing both the review of transferred contracts and resolving data problems identified during the reviews, (2) using metrics to monitor progress, and (3) ensuring that sufficient resources are dedicated to the resolution of the problem.

- Identify the underlying causes of the contract depot maintenance “awaiting parts” problem.

- Develop an action plan to address the underlying causes for the “awaiting parts” problems similar to the plan that was recently developed to address the “awaiting parts” problems for the air logistics centers’ in-house depot maintenance operations.

- Provide clear and consistent guidance on how, when, and by whom the induction of assets should be monitored.

- Establish internal control procedures to ensure that the guidance on the induction of assets is followed.

DOD provided written comments on a draft of this report. DOD concurred with our seven recommendations and identified actions it was taking to correct the identified deficiencies. For example, to improve the accuracy of the data in its systems that track repair actions and account for costs, the Air Force is in the process of developing and providing training courses to the production management specialists. DOD’s comments are reprinted in appendix III. We are sending copies of this report to the Secretary of Defense; the Secretary of the Air Force; the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services; the Subcommittee on Readiness and Management Support, Senate Committee on Armed Services; the Subcommittee on Defense, Senate Committee on Appropriations; the House Committee on Armed Services; the Subcommittee on Military Readiness, House Committee on Armed Services; the Ranking Minority Member, Subcommittee on Defense, House Committee on Appropriations; and other interested parties. Copies will be made available to others upon request. 
Please contact Greg Pugnetti, Assistant Director, at (703) 695-6922 if you or your staff have any questions concerning this report. Other key contributors to this report are listed in appendix IV. To determine if the reported carryover balances accurately reflected the amount of unfinished work on hand at the end of fiscal year 2000, we obtained and analyzed the air logistics centers’ logistical and financial reports that provided information on unfilled orders. We also reviewed the computation that the Air Force uses to determine the dollar amount and number of months of carryover. This computation is the dollar amount of unfilled orders at fiscal year-end less the dollar amount of work-in-process, which equates to the amount of funds that carry over to the next fiscal year. We reviewed the two factors (unfilled orders and work-in-process) that are used in the computation and obtained documentation that supported the information. Since the work-in-process amount is a calculated figure and is not based on actual work performed by the contractors, we obtained and analyzed the methodology used by the Air Force Materiel Command and the air logistics centers to compute the amount of work-in-process. In addition, to determine if any of the work had actually been completed, we selected and reviewed a stratified random sample of unfilled orders. To identify the primary causes of contract depot maintenance carryover, we reviewed a stratified random sample of 369 contract depot maintenance requirements that, according to the group’s production and cost system, had been funded by customers, but not yet completed by contractors as of September 30, 2000. These 369 depot maintenance requirements were selected from five major workload categories (aircraft, engines, exchangeable inventory items, missiles and other major end items, and software). 
They accounted for $744.1 million, about 41.2 percent, of the $1.806 billion of unfilled orders that the activity group reported at the end of fiscal year 2000. Of the $1.806 billion, about $124 million of the September 30, 2000, unfilled orders were customer requirements that, according to the production and cost system, had not yet been placed on contract. These requirements and an additional $48 million of relatively small orders were excluded from our analysis. The remaining $1.634 billion represents total unfilled orders. The confidence level used for estimating the value of completed and uncompleted work was 95 percent and the expected tolerable amount in error (test materiality) was $163,392,642. See appendix II for the Sample Element Disposition Table. Table 4 discloses the estimates and confidence intervals in total and individually for normal carryover, total problem carryover, and each of the carryover problems for the carryover balances as of September 30, 2000. We obtained information on the contractor performing the work, financial data, and production data for each item in the sample from the production and cost system. This information follows: (1) contract number, (2) contract line item number, (3) end item identity, (4) fiscal year of order financing the work, (5) production management specialist office responsible for overseeing the work, (6) unit sales price for the work, (7) quantities of items planned to be repaired and when, (8) quantities of items repaired as of September 30, 2000, and September 30, 2001, and (9) dollar amount of unfilled orders as of September 30, 2000. We analyzed the above information to determine if the work was accomplished in fiscal year 2000 as planned and, if not, we obtained explanations from the air logistics centers about why the work was not completed. 
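The scope figures described above can be reconciled with a quick arithmetic sketch; all dollar amounts (in millions) are taken directly from the text:

```python
# Reconciling the sampling-scope figures reported above (dollars in millions).
total_unfilled = 1806.0    # unfilled orders reported at the end of fiscal year 2000
not_on_contract = 124.0    # customer requirements not yet placed on contract
small_orders = 48.0        # relatively small orders excluded from the analysis

# Remaining unfilled orders subject to analysis
reviewable = total_unfilled - not_on_contract - small_orders
print(f"Total unfilled orders analyzed: ${reviewable:,.0f} million")  # $1,634 million

# Share of reported unfilled orders covered by the 369-item sample
sample_value = 744.1
coverage = sample_value / total_unfilled * 100
print(f"Sample coverage: {coverage:.1f} percent")  # about 41.2 percent
```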
We performed our review at the headquarters offices of the Under Secretary of Defense (Comptroller) and the Secretary of the Air Force, Washington, D.C.; Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio; the Oklahoma City Air Logistics Center, Tinker Air Force Base, Oklahoma; the Ogden Air Logistics Center, Hill Air Force Base, Utah; and the Warner Robins Air Logistics Center, Robins Air Force Base, Georgia. Our review was performed from July 2001 through May 2002 in accordance with U.S. generally accepted government auditing standards. The production and financial information referred to in this report was provided by the Air Force. We worked with Air Force officials to validate the reliability of the information in the system to determine the reasons for the work not being completed at the end of fiscal year 2000. We requested comments on a draft of this report from the Secretary of Defense or his designee. DOD provided written comments and these comments are presented in the “Agency Comments and Our Evaluation” section of this report and are reprinted in appendix III. Staff who made key contributions to this report were Sharon Byrd, Francine DelVecchio, Karl Gustafson, William Hill, Ron Tobias, and Eddie Uyekawa. 
The Floyd D. Spence National Defense Authorization Act for Fiscal Year 2001 requires GAO to review various aspects of the Department of Defense (DOD) policy that allows Defense Working Capital Fund activities to carry over a 3-month level of work from one fiscal year to the next. The DOD 3-month carryover standard applies to all DOD activity groups except for the contract portion of the Air Force depot maintenance activity group, for which DOD established a 4.5-month carryover standard because of the additional administrative functions associated with awarding contracts. Reported carryover balances for fiscal years 2000 and 2001 were inaccurate and, therefore, not reliable for decision-making or budget review purposes. The reported carryover balances were not accurate due to (1) faulty assumptions used in calculating work-in-process and (2) records not accurately reflecting work that was actually completed by year-end. As a result, the amount of carryover reported by the Air Force was understated by tens of millions of dollars, and customers' funds that could have been used for other purposes sat idle during the fiscal year. Even though the carryover was understated, Air Force reports show that the contract portion of the depot maintenance activity group exceeded the 4.5-month carryover standard at the end of fiscal year 2000 and fiscal year 2001 by $44 million and $134 million, respectively. Air Force headquarters officials stated the primary reason that they exceeded the standard for fiscal year 2001 was the receipt of a large amount of orders late in the fiscal year.
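The carryover computation the report reviews (unfilled orders at fiscal year-end less calculated work-in-process) can be sketched as below. The dollar figures here are purely illustrative, not taken from the report, and expressing carryover in months by dividing by average monthly revenue is an assumption about how a months-based standard such as the 4.5-month standard would be applied:

```python
# Minimal sketch of the carryover computation described above.
# All dollar figures are illustrative, not taken from the report.

def carryover(unfilled_orders: float, work_in_process: float) -> float:
    """Funds carried over: unfilled orders at year-end less work-in-process."""
    return unfilled_orders - work_in_process

def months_of_carryover(carryover_dollars: float, annual_revenue: float) -> float:
    """Assumed conversion: carryover divided by average monthly revenue."""
    return carryover_dollars / (annual_revenue / 12.0)

co = carryover(unfilled_orders=1_800.0, work_in_process=900.0)  # $ millions
months = months_of_carryover(co, annual_revenue=2_000.0)
print(f"Carryover: ${co:,.0f} million, or {months:.1f} months of work")
# With these illustrative numbers, 5.4 months would exceed a 4.5-month standard.
```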
The federal performance management framework put into place by GPRA and GPRAMA requires agencies to develop long-term strategic plans that identify their missions, along with long-term goals and objectives (often referred to as strategic goals and objectives) aimed at achieving their missions. Agencies are to develop performance plans with near-term goals annually, to show progress towards their long-term goals and objectives. These near-term goals are called performance goals. In both of these plans, agencies are directed to identify the various strategies and resources they will use to achieve their goals. In line with these requirements, the Open Government Directive instructs federal agencies to develop Open Government Plans detailing the strategies and initiatives they would use to improve public engagement and collaboration on the agency’s core mission activities. It directs agencies to describe how they would use innovative feedback mechanisms, technology platforms, and such methods as prize competitions to increase opportunities for public participation and collaboration with those outside the agency and in other levels of government. These outside parties include those in the private, nonprofit, and academic sectors. Agencies were directed to release their initial plans in 2010, and to update these plans every 2 years. In July 2016, OMB released guidance for the development of 2016 Open Government Plans, which were to be published in September 2016. The new guidance instructs agencies to describe their activities to increase the use of open innovation initiatives. In early 2010, OMB also created an Interagency Open Government Working Group to provide a forum for open government professionals to share best practices across agencies. Representatives from 41 federal agencies made up the initial working group. OMB, OSTP, and GSA have taken additional steps to support and encourage agency use of open innovation strategies. 
They have developed specific policy and guidance documents, built websites that facilitate their use, and supported knowledge sharing communities of practice. For example: In July 2010, GSA launched Challenge.gov. This site is designed to help agencies find participants for prize competitions and challenges by providing a centralized list of all competitions sponsored by federal agencies. After the America COMPETES Reauthorization Act authorized federal agencies to conduct prize competitions, OMB issued guidance in August 2011 to help agencies use this authority. GSA also hosts the Challenges and Prizes Community of Practice. This group meets quarterly to discuss policies and procedures, and share ideas and practices. According to information from Challenge.gov, agencies have conducted more than 700 distinct prize competitions or challenges since the site was first launched in 2010. In May 2013, the President released an executive order requiring OMB to issue an Open Data Policy. This policy, also released by OMB in May 2013, directs agencies to collect or create information using open formats that are non-proprietary and publicly available, and to build or modernize information systems in a way that maximizes the accessibility of information. The President’s executive order also called for the creation of an Open Data Cross-Agency Priority Goal, which is designed, among other things, to provide support to help agencies release high priority data sets and facilitate the use of open data by those outside the agency. In May 2014, the administration also released an Open Data Action Plan. This plan called on agencies to use online and in-person mechanisms to engage with open data users and stakeholders to prioritize open data sets for release, improve data based on feedback, and encourage its use. OMB and OSTP have created a website called Project Open Data to provide good practices and examples to assist agencies. 
OMB, OSTP, and GSA also manage the Open Data Working Group, which meets every 2 weeks to share best practices and tools, and allow agencies to learn from one another. In September 2015, OSTP released a memorandum that outlined principles agencies should use when designing a crowdsourcing or citizen science initiative. The memorandum also outlined actions the agencies should take to build their respective agency capacity to use that type of strategy. At the same time, OSTP released the Crowdsourcing and Citizen Science Toolkit with practices, lessons learned, and case studies to inform agency efforts to design, implement, and sustain these initiatives. GSA has also launched Citizenscience.gov, which is a centralized repository of information on agency citizen science initiatives. As of September 2016, the Crowdsourcing and Citizen Science Catalog on Citizenscience.gov lists 303 active crowdsourcing and citizen science projects across 25 agencies. Lastly, practitioners from across the federal government have come together to form the Federal Community of Practice for Crowdsourcing and Citizen Science, which meets monthly to share lessons learned and practices for implementing and evaluating crowdsourcing and citizen science initiatives. In some agencies, this government-wide infrastructure has been supplemented by agency-level policies and organizations with dedicated staff and resources. For instance, NASA has created the Center of Excellence for Collaborative Innovation (CoECI), which assists teams from NASA and other agencies with implementing open innovation strategies, particularly prize competitions and challenges. Based on our review of agency Open Government Plans and other sources, we found that agencies have frequently used the five open innovation strategies shown below to collaborate with citizens and external parties, and encourage their participation in agency efforts. 
Figure 1 identifies and describes these strategies, and we provide further information about them in appendix II. Agencies can use these strategies individually or in combination as part of a larger open innovation initiative. For example, an open innovation initiative could primarily involve a prize competition or challenge that also has an idea generation component focused on the identification of promising new ideas or issues to be addressed. It could also have a component where participants are asked to use open data to develop new products or applications based on those ideas. We identified seven practices that federal agencies can use to help effectively design, implement, and assess open innovation initiatives. These practices are detailed below. We drew from our analysis of federal resources and relevant literature with suggested practices for the implementation of open innovation strategies. We also interviewed experts and agency officials with expertise in implementing such initiatives. While we present these practices in a certain order, this is not meant to imply they should be implemented in this sequence. Relevant literature, agency officials, and an expert we consulted emphasized that, in practice, agencies often take some of these actions concurrently or will use an iterative approach. Through our analysis of relevant literature and interviews with experts, we identified several factors agency officials should consider when selecting the most appropriate open innovation strategy or strategies to use for an initiative. First, agency officials considering the use of an open innovation strategy should clearly articulate the purpose(s) they hope to achieve by engaging the public. Through our literature review and interviews we found that agencies generally used open innovation strategies to achieve one or more of five high-level purposes. These purposes, which are not mutually exclusive, are summarized below in table 2. 
Second, agency officials should consider their ability to implement a strategy, including leadership support, legal authority, the availability of resources, and capacity. The five high-level purposes are described below.

- Agencies can collect the perspectives of a broad group of citizens and external stakeholders to identify problems or challenges, gauge perceptions of a program or service, gather reactions to proposed actions, or better understand their priorities, values, and preferences. Agencies can then use this information to inform decisions about policies, plans, and the allocation of resources.

- Agencies can efficiently engage a broad range of citizens and external stakeholders in developing new ideas, solutions to specific problems, or new products ranging from software applications to physical devices. Agencies can also have them evaluate the quality and feasibility of the ideas and solutions proposed by others, or test the products that were developed. If an agency uses a successive or iterative process, it can help build the capacity of participants in these efforts to further develop or refine their ideas or products. Agencies can also use open innovation initiatives to stimulate the creation of new markets and companies that will then commercialize products and technologies developed for an initiative.

- Agencies can leverage the time, resources, and expertise of citizens and external stakeholders to supplement their own internal resources, data, and expertise. These contributions enhance the agency’s capacity, and therefore, its ability to achieve goals that would be more difficult to reach without this additional capacity or expertise. Open innovation initiatives may also allow agencies to achieve goals more efficiently and effectively than more traditional federal program types, such as grants or contracts.

- Agencies can establish or enhance collaboration among citizens and external stakeholders or organizations interested in an issue. This can be done, in part, by developing relationships among involved individuals and organizations. These relationships can then be leveraged to achieve common or complementary goals. Agencies can also enhance previously established communities by using open innovation initiatives to strengthen existing relationships and to bring new individuals and organizations into the community.

- An agency can provide participants or the broader public with balanced and objective information and data to help them understand an issue or problem, as well as the opportunities and various alternatives for addressing it.

To determine how frequently agencies identified these as purposes for each type of open innovation strategy, we identified both the primary strategy and the purposes agencies articulated for each initiative in their most recent open government plans. The results of this analysis are summarized below in figure 2. Through this analysis, we found that agencies identified certain purposes more frequently for different types of strategies. For example, we found that, of the 26 prize competitions or challenges identified in agency plans, agencies indicated that developing new ideas, products, or solutions was a specific purpose for 25 (or 96 percent) of the initiatives. Similarly, of the 74 open dialogue initiatives we identified in agency plans, agencies indicated that collecting information and perspectives was a specific purpose for 57 (77 percent) of them. In addition to the purpose(s) agency officials hope to achieve through open innovation, through our literature review and interviews we identified additional factors agency officials should consider when selecting the strategy or strategies that will be used: Leadership support: The support and approval of agency leaders for the potential use of an open innovation strategy is particularly important. 
Such leadership support can lend credibility and visibility, help generate support from others throughout the agency, and increase the likelihood an initiative will receive necessary approvals and resources. Legal authorities: Agency officials should work with their respective agencies’ legal staff to ensure that they have appropriate legal authority to use a strategy, and are aware of any relevant requirements that need to be met as they work to implement a strategy. For instance, the legal requirements that an agency must meet when conducting a prize competition or challenge can be more detailed and specific than those that apply to certain other open innovation strategies. Those considering a strategy should also be aware of any government-wide and agency-specific policies or guidance that can help guide planning and implementation of these tools. Resource needs and availability: Agency officials should also work with other relevant staff to understand what financial and information technology resources are necessary and available to support the use of various open innovation strategies. For example, agency officials could work with staff to understand whether they can design or leverage an existing website or other tool to engage and manage a community of widely-dispersed participants. Assessing resource needs and availability helps determine the costs and feasibility of implementing the selected strategy. Capacity to implement the strategy: Agency officials should consider whether their staff has sufficient time and expertise to design and implement a strategy. Agency officials could work with staff with prior experience developing and implementing open innovation initiatives. Such staff can help ensure successful practices from previous initiatives are replicated and previously-identified problems avoided. 
Similarly, officials can also work with agency contracting and acquisition staff to contract for additional capacity and expertise to support implementation. Below we provide illustrative examples of how NASA, EPA, and DOT selected and used various open innovation strategies to achieve specific purposes. As part of NASA’s strategic goal to expand the frontiers of knowledge, capability, and opportunity in space, the agency is examining near-Earth asteroids to determine whether any of these objects threaten Earth. In June 2013, NASA also announced its Asteroid Grand Challenge, which is a large-scale effort to use partnerships and collaboration to find all asteroid threats to human populations. NASA officials also reported that the algorithm that astronomers have been using to analyze images of space to detect asteroids can produce false detections, and the process to screen out those false detections is labor intensive and inefficient. According to NASA officials, beginning in January 2014, staff working on the Asteroid Grand Challenge began working with staff from NASA’s CoECI, who have expertise in executing prize competitions and challenges and are responsible for managing competitions launched through the NASA Tournament Lab. To ensure they could access necessary technical expertise to develop an improved algorithm to identify asteroids in images captured by ground-based telescopes, officials decided to leverage an existing NASA contract with Harvard University to carry out a series of competitions. These competitions were conducted by Harvard University’s subcontractor Topcoder, a private-sector company that administers contests in computer programming and has an existing community of expert developers and data scientists. In March 2014, NASA officially announced the Asteroid Data Hunter challenge and citizen science effort to develop the more accurate algorithm. 
The effort was also designed to develop a software application that would allow citizen scientists to genuinely contribute to asteroid detection, supplementing the efforts of professional astronomers. According to an April 2015 report from OSTP on the implementation of federal prize competitions and challenges, through the challenge’s 10 months, more than 1,200 participants submitted 700 potential solutions. This resulted in the development of a new algorithm and software package. Figure 3 provides a screenshot from the website where interested members of the public can download the application. According to NASA, the improved algorithm has led to a faster, more accurate asteroid detection process. NASA and Planetary Resources, Inc., a private-sector company also involved in the initiative, analyzed the results and found that the new algorithm resulted in a 15 percent increase in the positive identification of new asteroids in the main belt of asteroids that orbit between Mars and Jupiter. Furthermore, NASA also stated that the software application could increase the number of new asteroids discovered by citizen astronomers. According to NASA officials, the application has been downloaded over 8,000 times as of March 2016. NASA obtained these results with a total project cost of less than $200,000, which OSTP reported and NASA officials confirmed is less than the fully loaded cost of employing an engineer for the same time period. Excessive levels of nutrients such as nitrogen and phosphorus can harm aquatic environments, according to EPA. Governments, academic research organizations, environmental organizations, utilities, and the agriculture community are collecting data on nutrient levels. However, EPA and its partners say the general public cannot easily access or understand these data. 
To raise public awareness and identify new and innovative ways to communicate data on nutrient pollution to the public, EPA collaborated with the United States Geological Survey (USGS) and Blue Legacy International to conduct the Visualizing Nutrients Challenge. USGS is a scientific organization within the Department of the Interior that collects and distributes scientific data and information on the health of ecosystems and the environment. Blue Legacy International is a nonprofit organization focused on the protection of water resources. The goal of the challenge, which ran from April to June 2015, was to invite participants to design innovative and compelling web applications, images, and videos to help individuals and communities understand the causes and consequences of, and solutions to, nutrient pollution. According to EPA officials, as the idea came together for an effort to identify innovative ways to translate and communicate information about nutrient pollution, EPA staff reached out to colleagues at USGS to gauge their interest in partnering. EPA officials said they did this because of the role USGS plays in collecting data on the nation’s surface and ground waters, and their interest in seeing those data communicated and used more broadly. According to EPA officials, this relationship with USGS was important because of the additional expertise and capacity USGS staff provided, as well as their support in publicizing the challenge. EPA officials explained that because of Blue Legacy International’s mission and interest in using digital media to build public awareness about the importance of local watersheds and more sustainable stewardship of water resources, it approached EPA about becoming involved. EPA officials also stated that Blue Legacy International provided $10,000 to fund its own independently selected awards to help incentivize participation. 
According to EPA officials, they determined that conducting a challenge with an open call for submissions would be the preferred approach to achieve the goals established for the effort. Before the challenge could move forward it had to be reviewed and approved by all members of EPA’s Challenge Review Team. EPA officials explained that the review team consists of representatives from key offices throughout EPA. This includes individuals from the Office of General Counsel, who determine whether there is sufficient statutory authority to carry out a challenge, and the Office of the Chief Financial Officer, who ensure there are sufficient financial resources available to support the challenge. To secure additional capacity to implement the challenge, EPA contracted with InnoCentive, a private-sector contractor that manages prize competitions and challenges. According to EPA officials, they also used the contract to access InnoCentive’s large existing network of potential challenge participants with expertise in relevant disciplines, including design, physical science, and data analysis. InnoCentive played a central role by recruiting potential participants, assisting with design and development, and prioritizing issues that needed to be addressed each week by EPA, USGS, and Blue Legacy International. According to EPA officials, a competition was selected because it offered a superior cost-benefit ratio to more traditional federal contracting. According to an August 2016 report from OSTP, using this approach, EPA and USGS were able to collect 20 submissions. EPA officials said these submissions provided a wide range of examples for how to present and communicate data on nutrient pollution. They also said they achieved this in approximately 3 months and at the cost of staff time—with responsibilities shared among EPA and USGS, and Blue Legacy International—and $16,500 that EPA paid to InnoCentive to administer the competition. 
By contrast, EPA officials estimated that using traditional procurement processes to produce a single visualization would have cost significantly more and taken longer. In addition, they said a more traditional procurement may not have resulted in a product of the quality that was received through the competition.

The Moving Ahead for Progress in the 21st Century Act (MAP-21), signed into law in July 2012, required DOT to develop a National Freight Strategic Plan in consultation with stakeholders. As we have reported, involving stakeholders in strategic planning can help ensure that efforts and resources are targeted at the highest priorities, and that stakeholders appreciate how competing demands and resource limitations require careful balancing. To inform the development of the freight strategic plan, DOT officials, led by staff from the Office of the Secretary, the Office of Public Engagement, and the Federal Highway Administration (FHWA) Office of Freight Management and Operations, decided to engage a broad range of stakeholders through a series of both online and in-person open dialogues. For example, beginning in 2012, DOT used an online platform called IdeaScale to launch an online dialogue and roundtables to leverage web-based communications technology to engage with stakeholders. According to DOT officials, the online dialogue session and online roundtables allowed stakeholders to comment and provide suggestions on various topics, including developing guidance for state freight plans and potential measures of conditions and performance for a national freight system. FHWA has also continued to conduct monthly webinars to provide information on freight issues, technical assistance, and training for those in the freight and transportation planning communities.
Since 2012 these webinars have been used to cover a range of topics, including freight-related provisions in MAP-21 and other legislation, state freight planning, and improving freight system performance in metropolitan areas. In addition to its web-based outreach, DOT also used in-person meetings to engage with and collect recommendations from a range of stakeholders. For example, in May 2013 the then-Secretary of Transportation chartered the National Freight Advisory Committee (NFAC), which was composed of 47 stakeholders from different organizations and groups with an interest in freight policy. It included representatives from state and local governments, port and transportation authorities, transportation-related companies and associations, unions, and public interest groups. DOT officials emphasized that NFAC was created to advise the department on matters related to freight transportation. They added that it was critical to ensure a wide range of perspectives would be represented. NFAC met in person 7 times between June 2013 and November 2015, and ultimately provided DOT with nearly 100 recommendations. DOT leaders also conducted nearly 60 roundtables and public meetings across the country to collect the perspectives of stakeholders at the regional and local levels on various freight policy issues. According to DOT officials, the insights collected through this outreach had a large influence on the development of the draft National Freight Strategic Plan, which was released in October 2015. DOT officials told us that they received substantial public input on issues such as freight transportation safety, the adoption of new technologies, workforce development, opportunities to strengthen connections between different modes of transportation, and the need for reliable funding for freight infrastructure. Each of these issues was then addressed in specific sections of the draft freight strategic plan.
DOT officials stated that these insights also informed recent action by Congress. Specifically, in December 2015, Congress enacted and the President signed into law the Fixing America’s Surface Transportation (FAST) Act, which created a new grant program for nationally significant freight and highway projects and authorized appropriations for this new program as well as existing grant programs through fiscal year 2020, among other things.

According to relevant literature and our interviews with experts and agency officials, once the agency has identified the high-level purposes it wants to achieve through an open innovation initiative and selected the strategy or strategies it will use, it should clearly define specific and measurable goals for the initiative. Specific goals can help guide the design and implementation of an initiative. They also can help those involved maintain a sense of direction by providing a clear understanding of what they are working to achieve.

 Define specific and measurable goals for the initiative.
 Identify performance measures to assess progress.
 Align the goals of the initiative with the agency’s broader mission and goals.

Relevant literature, experts, and agency officials we consulted highlighted that the agency should also identify the performance measures it will use to assess progress towards the goals and overall results. For open innovation initiatives, measures can be used to assess the achievement of specific outcomes, participation and engagement, and resources invested in the initiative. Outcome measures could include the successful achievement of a goal, improvements in the quality of a policy or process, or the improved delivery of a service.
Participation and engagement measures could include the number or diversity of participants engaged in the initiative; the number of ideas submitted; the amount of time it takes to respond to participant questions, comments, or feedback; and the satisfaction of participants with their experience. Measures of resources invested (input measures) could include the money, staff resources, and time dedicated to implementing the initiative. This information can also help an agency determine whether it would be appropriate to expand—or “scale”—an approach if it is found to be successful. Lastly, the literature and experts emphasized that the agency should seek to align the specific goals of an open innovation initiative with the agency’s broader mission and goals. Aligning initiative-specific goals with agency priorities can help ensure the relevance and value of an initiative, by showing how its successful implementation could advance progress on the agency’s mission and goals. This alignment also reinforces the connection between the agency’s mission and goals and the day-to-day activities of those carrying out an initiative.

The following two examples illustrate how DOE and EPA defined goals and performance measures for selected open innovation initiatives.

According to an official in DOE’s Wind and Water Power Technologies Office (WWPTO), its Wave Energy Prize (WEP) competition is designed to dramatically improve devices that produce electricity by capturing energy from ocean waves. WEP began in April 2015 and is scheduled to conclude in November 2016. WWPTO specified in its contest documentation that the effort could stimulate private sector innovation and contribute to energy security and international competitiveness in the wave energy conversion sector. This was aligned with DOE’s strategic objective to support a more economically competitive, environmentally responsible, secure, and resilient U.S. energy infrastructure.
During the planning phase, WWPTO established a specific, measurable goal in its rules for the competition. The goal required that devices developed for the competition at least double the energy capture of current technology. According to a DOE National Laboratories analysis, the average rate of wave energy capture for a group of current devices is 1.5m/$M (or 1.5 meters per million dollars). To be eligible for a monetary prize, which will range from $1.5 million for the winning team to $250,000 for the third place team, participants would have to develop a device that would achieve 3m/$M. WWPTO officials told us that this target gave participants a clear, achievable goal for which to strive. They added that the goal also was aggressive enough to represent a ground-breaking advancement over current technology. Although the competition is still ongoing, according to information on the contest website, WEP has demonstrated early success as a number of the teams are proposing innovative technologies and have demonstrated a potential to achieve or exceed WWPTO’s stated goal. To help guide its outreach efforts, WWPTO also established a goal to alert potential participants about the WEP, and have them take action by registering to participate. WWPTO officials and the prize administration team developed a detailed Communications and Outreach Plan for the competition. The plan outlined the types of metrics that could be tracked to determine the effectiveness of its outreach efforts. These metrics include the number of registered teams, and traffic to the competition website and social media pages. According to WWPTO officials, 92 teams registered to participate in the competition thanks to their aggressive communications and outreach strategy. This number was three times more than they had initially expected. 
EPA has a strategic objective to protect and restore watersheds and aquatic ecosystems, and has reported that it is working with external partners and stakeholders to spur technological innovations to reduce costs and pollution through improved and less-expensive monitoring. In 2013, OSTP convened the Challenging Nutrients Coalition (CNC). CNC is a group of federal agencies, including EPA, nongovernmental organizations, and academia, working together to address the issue of nutrient pollution. In November 2013, OSTP hosted a meeting of agencies and experts familiar with nutrient pollution. According to EPA, experts found that more affordable and reliable sensors are needed to collect more data on nutrient levels to inform decisions about how to manage and reduce these levels. In December 2014, the Nutrient Sensor Challenge was announced, led by EPA and supported by the National Oceanic and Atmospheric Administration (NOAA) and other agencies. The goal of the Nutrient Sensor Challenge is to accelerate the commercial development of accurate, reliable, and affordable devices that will meet user needs and be available for purchase by 2017. According to EPA officials, EPA aligned the goals of the challenge with EPA’s strategic objective. The challenge offers participants non-monetary rewards and incentives like visibility in an emerging market and access to testing services and other resources. In June 2014, the Partnership on Technology Innovation and the Environment, another member of the CNC, conducted a study to clarify the specific needs of potential sensor users. Through this study they identified standards for accuracy, precision, and cost that the vast majority of potential users would look for in devices. These became the technical requirements that devices developed for the challenge must meet to be eligible for awards. For example, most of the study’s participants identified the $1,000-to-$5,000 price range as affordable for their purposes. 
For this reason, EPA required that the devices built for the competition have a purchase price of less than $5,000. As of August 2016, EPA and its partners are conducting final testing on the devices submitted by participants to determine if any meet the technical requirements, and plan to announce final awards in December 2016. However, EPA officials stated that preliminary results indicate that the devices developed through the competition will meet the technical requirements that have been established. They added that several companies are developing instruments of similar capabilities and price outside of the challenge. Another goal of the competition is to produce an identified, mobilized market of community organizations, state and federal agencies, and researchers. According to EPA officials, EPA and other CNC partners, including USGS, NOAA, and the National Institute of Standards and Technology, are creating pilot programs that will allow organizations to deploy and test these sensors following the completion of the competition in late 2016. According to EPA officials, as of March 2016, 14 organizations have expressed interest in participating in EPA’s pilot program. EPA officials also stated that this pilot program will help identify organizations that may want to purchase and deploy the sensors in a more widespread way in the future. EPA officials stated that having these specific goals has been critical given the focus that they have provided. For example, the goals will help ensure that the devices developed through the challenge serve as the reliable and affordable devices necessary to stimulate the market, and to expand how widely they are deployed.

 Identify and engage outside stakeholders interested in the issue addressed by the initiative.
 Look for opportunities to partner with organizations on the design and implementation of the initiative.
Our literature review and agency officials highlighted the importance of identifying and engaging with external stakeholders who share an interest in the issue being addressed and may already be active in related efforts. For a federal agency, external stakeholders can include representatives of relevant non-profit organizations and foundations, community or citizens’ groups, universities and academic institutions, the private sector, members of Congress and their staffs, other federal agencies, and state and local governments. By engaging with outside stakeholders, agencies can gain their support for the initiative, gain insights from their prior experience working on an issue, and see how they might use the results (e.g., products) of an initiative. This can help clarify the goals and design of an initiative. This engagement can also be used to determine what motivates stakeholders to get involved in an effort, and to identify additional stakeholders, partners, or potential participants to engage in the initiative. The literature, experts, and agency officials also emphasized that agencies should look for opportunities to partner with other groups and organizations that would be interested in, or could benefit from, the results of an open innovation initiative. Partners are organizations and individuals that play a direct role in designing and implementing an initiative. They provide staff capacity, resources, administrative and logistical support, assistance with communications and community building, or ongoing advice and expertise. Partner organizations provide these resources and assistance because they have missions or goals that overlap or align with what the agency wants to achieve through an open innovation initiative. Agencies can also consider the most appropriate and effective mechanism for formalizing these partnerships, such as collaboration agreements, contracts, or interagency agreements. 
Agency officials can identify partner organizations through discussions with external stakeholders, professional contacts, or research into organizations with complementary goals. Finally, agency officials we interviewed emphasized the especially important role that agency leaders can play with respect to this practice. The support of agency leaders can be particularly important, as their involvement can lend credibility and visibility to an initiative to those outside the agency. It can also help mobilize a broader community of external stakeholders and partner organizations.

Below we provide illustrative examples of how DOT, HUD, EPA, and HHS identified and engaged external stakeholders and partners for three open innovation initiatives.

The Federal Highway Administration’s (FHWA) Every Day Counts (EDC) is an example of an ideation initiative. EDC is designed to identify effective, market-ready innovations states could implement to improve highway project delivery. According to an FHWA official, from the beginning of the initiative in 2009, the then-FHWA Administrator and Deputy Administrator (who are now Deputy U.S. Secretary of Transportation and FHWA Administrator, respectively) established and supported EDC as a state-based, stakeholder-driven program. They established the Center for Accelerating Innovation (Center) to implement the program, and worked with internal and external stakeholders to promote the idea of using innovative practices to improve how highway construction projects are performed. Every 2 years, FHWA works with various stakeholders to identify innovative technologies and practices that merit more widespread deployment through EDC. The process begins when FHWA publishes a Request for Information inviting suggestions for new innovations to consider from state, local, tribal, and industry experts. According to FHWA officials, the agency typically receives more than 100 suggestions and comments.
FHWA staff review these submissions to develop a list of those innovations that are market ready, could be implemented across the country, and have the greatest potential to improve efficiency and quality in highway transportation and construction. According to an FHWA official, once this list of EDC innovations is finalized, the Center works with FHWA program offices to identify leaders for Innovation Deployment Teams. The deployment team leaders identify other team members, such as communication specialists, subject matter and technical experts from state transportation agencies, and key stakeholders like industry representatives. The deployment teams work with state transportation agencies and other stakeholders to implement the innovations that best fit their needs by providing technical assistance, training, and outreach. Once the EDC innovations are selected, transportation leaders from across the country gather at regional summits to learn about and discuss the innovations. According to a March 2015 report from FHWA, the summits are used to disseminate information on innovations so states can identify those that best fit the needs of their highway programs. The summits include interactive working sessions to foster connections among regional transportation professionals, and encourage longer-term collaboration on the deployment of innovative practices. In 2014, the summits introduced online broadcasts of the presentations and discussions so that a wider audience could participate.

The President’s Hurricane Sandy Rebuilding Task Force launched Rebuild by Design (RBD), a prize competition overseen by HUD, in June 2013 to generate innovative and implementable design ideas to rebuild communities affected by Hurricane Sandy. According to HUD officials, HUD searched for external organizations and foundations with complementary missions to partner with on implementing RBD.
In particular, it sought established organizations with resources, capabilities to administer a design competition, and the ability to engage local residents and stakeholders in affected communities. Several philanthropic organizations, including the Rockefeller Foundation, provided financial support to fund the administration of the competition, $200,000 cash prize awards to finalist design teams, and project evaluation. According to a 2014 evaluation of RBD conducted by the Rockefeller Foundation and HUD officials, direct outreach to potential philanthropic partners by the then-Secretary of HUD played a key role in securing their financial commitments. To help administer the competition, HUD also partnered with four local research and advocacy organizations to support the work of RBD design teams at the local level. Figure 4 summarizes the network of organizations involved in RBD. According to HUD officials, each administering partner organization was chosen for its complementary resources and expertise in research, design competitions, community outreach, regional planning and design, and local ties to the region. HUD staff also established a management plan early in the process that outlined roles and responsibilities for how these partner organizations would work together through each stage of the competition. According to HUD officials, this partnership with local organizations supporting the competition’s implementation was critical to RBD’s success. HUD officials were unfamiliar with local networks of community groups and other relevant organizations in each region, so the ability to partner with those that had knowledge, networks, and skills that HUD could leverage was valuable. These networks helped facilitate community engagement by design teams, who used meetings, community design workshops, site visits, and social media to engage hundreds of local stakeholder groups from communities affected by Hurricane Sandy. 
According to HUD officials, this outreach was critical to meet HUD’s expectation that projects receiving support be co-designed with communities, have local support, and be financially viable. They also said that RBD demonstrated the value that external partnerships can bring in providing expertise, capacity, and connections that help an agency achieve its mission and goals.

According to EPA and HHS officials, both agencies shared an interest in developing affordable, wearable sensors that would provide wearers with information on air quality and the body’s reaction to it. The agencies jointly sponsored the My Air, My Health challenge, asking participants to develop a device that would do these things in tandem. The challenge was held in two phases, and ran from June 2012 to June 2013. According to EPA officials, EPA and HHS created a cross-agency design team that included experts from EPA’s Offices of Air and Radiation and Research and Development, and the National Institutes of Health (NIH), a medical research agency within HHS. Within that design team, one cross-agency work group focused on identifying the air pollutants and health concerns the competition would target, while another work group focused on the technology and how the devices would communicate health data. According to EPA officials, creating this collaborative design team helped ensure key subject matter experts from each agency could guide the development of technical requirements for the competition in a way that would address the shared goals of each agency. According to an HHS official, for example, during the development of these technical requirements, EPA staff identified what air quality data would need to be collected, while HHS staff identified what would need to be measured to determine the health effects of exposure to air pollution. According to EPA officials, the agencies shared responsibilities for implementing the competition’s phases.
EPA implemented the first phase of the competition, which was focused on developing plans and proposals for prototypes. HHS then implemented the second phase, in which finalists developed and validated proposed prototypes. EPA and HHS officials told us that the agencies used the competition to communicate their shared interest in the technology and encourage further private-sector development. The agencies used My Air, My Health to demonstrate that open innovation initiatives involving partnerships between agencies were feasible, and that collaboration between agencies and with the private sector can allow agencies to achieve goals that they may not have the capability to achieve alone.

Relevant literature and agency officials highlighted how important it is for agencies to ensure that roles, responsibilities, expectations, and time frames are clear for all involved in implementing and managing an initiative. The agency and any of its partners can do this by establishing and documenting a governance structure for the initiative that clarifies the processes that will be used to ensure regular communication; raise, discuss, and resolve any pressing issues; and make decisions.
According to our literature review and interviews with experts and agency officials, the agency and any partners should develop a detailed implementation plan for the initiative that clearly identifies:
 the specific tasks and actions needed to carry out the initiative, the parties responsible for completing them, and the timeframes for doing so;
 potential participant groups to engage in the initiative, including when and how the agency and any partners will reach out to various participant groups and encourage them to participate, and how they will engage with participants during and after the initiative’s implementation; and
 what data will be collected, and how, during and after implementation, and how the data will be evaluated to determine overall results and progress towards the initiative’s stated goals.

The following two examples show how HUD and HHS developed plans for implementing and recruiting participants for selected open innovation initiatives.

Switchboard is an online idea generation initiative that HUD uses to collect ideas from citizens, stakeholders, and HUD staff on how the agency can improve its processes, programs, and administration. HUD officials can then consider these ideas for potential implementation. HUD drafted a charter in 2011 to guide the initiative’s implementation that describes the overall team structure, defines the roles and responsibilities of each staff member involved in reviewing and responding to ideas submitted through the website, and names liaisons for program offices throughout HUD to review and respond to ideas that fall within their programmatic jurisdiction. See table 3 for a summary of the roles and responsibilities from the Switchboard charter.

Table 3. Information on Roles and Responsibilities from HUD’s Switchboard Charter. Responsibilities assigned in the charter include:
 Champion of the project; approval and sign off of project components and requirements.
 Overall ownership of project from an organizational perspective; management of budget.
 Overall management of the project timelines and scope.
 Oversight of internal and external communications; sets direction for messaging.
 Manages day-to-day activities of project.
 Provides input into process; manages ideas and responses.

The charter also explains the process and criteria used to evaluate an idea, and determine whether it should be elevated for consideration and potential implementation. HUD supplemented this charter with a document outlining policies and procedures for investigating, responding to, and implementing an idea. Figure 5 summarizes these procedures. According to HUD staff, Switchboard has become a tool for more effective customer service by providing an easy way for anyone to contact HUD with ideas for how the agency could do things more effectively. It has also provided the agency with a platform to host specific issue forums that are sponsored by various HUD program offices and targeted toward specific segments of the public. For example, in 2011, the HUD Office of HIV/AIDS Housing used Switchboard (then called HUD Ideas in Action) to ask for public input on how HUD should update the Housing Opportunities for Persons with AIDS program funding formula to better target resources to need. In response to this request, HUD received 17 submissions with ideas—many of which generated additional comments from participants in the forum—and a total of more than 500 votes. HUD then selected four of these submissions for further review, and incorporated recommendations from one of them into the department’s fiscal year 2013 budget request.

The Neuro Startup Challenge was created by NIH and the Center for Advancing Innovation (CAI), a non-profit organization with a mission to accelerate knowledge and technology transfer and entrepreneurship.
Conducted from April 2014 to August 2015, the challenge was designed to generate promising start-up companies with business plans to commercialize NIH inventions for use in treating brain and neurological disorders. According to the collaboration agreement between NIH and CAI, the challenge supported NIH’s mission to advance research, innovation, and education to protect public health. It also aligned with CAI’s goals to encourage the commercialization of new technologies. NIH and CAI used this collaboration agreement to outline a detailed governance structure that specified the roles each organization would play in implementing the competition. The agreement also identified the respective tasks each would be responsible for completing during the various phases of the competition, along with the timeframes for each phase. For example, the agreement specified that during the planning phase of the competition, which was scheduled to run from April to August 2014, CAI would be responsible for identifying and engaging stakeholders and potential participants, as well as other deliverables, including the development of an advertising and marketing plan for the competition. The agreement also specified that NIH would provide input on the rules and criteria for the competition, the selection of inventions, and the identification of potential participants. According to an NIH official, this delineation of responsibilities was particularly important to help frame and focus efforts at the beginning of the project. In the agreement, NIH and CAI also identified the potential participants they wanted to reach through the competition. Participants included graduate and post-doctoral students and experienced entrepreneurs. According to an NIH official, NIH and CAI particularly focused on engaging those affiliated with universities, given the focus on connecting university students with real-world experience in business planning. 
Prior to launching the initiative, CAI planned for extensive contact with university faculty and students to get feedback on the concept and to make them aware of the challenge. CAI then conducted an extensive series of phone conversations and in-person meetings to connect with stakeholders and potential participants at 37 universities in 14 states. Through this outreach they reached approximately 1,500 people with information on the challenge. According to NIH officials, many of the more than 70 teams that participated in the competition were from those universities contacted through this outreach. CAI also reached out to local economic development groups and universities to identify entrepreneurs and business developers who would be interested in supporting participating teams.

Relevant literature, experts, and agency officials emphasized that when agencies are ready to move forward with implementation, they should announce the initiative in a way that generates interest among potential participants. This involves using multiple outlets and venues—including the initiative website, social media, press releases, press conferences, journals, newsletters, and professional conferences and networks—to ensure they reach the right potential participants and make them aware of the initiative. The participants that an agency and any partners seek to engage, and how they decide to solicit participation, will vary depending on the purposes of the initiative. For instance, if an agency wants to use an initiative to address a very specific technical issue, it may attempt to identify and engage individuals with the requisite skills through an existing network of experts. However, if an agency intends to use an initiative to collect a wide range of perspectives on an issue, it will likely need to be much more open and inclusive in its outreach and encourage diverse groups to participate.
Efforts to promote the initiative are important because reaching the right participants and motivating them to participate is critical to the overall success of an initiative. According to the literature and our interviews, the initial outreach to potential participants should be crafted and communicated in a way that responds to the interests and motivations of potential participants, and explains why it is important for them to participate. In addition, the agency should also establish clear expectations for participants, describing in detail what they will be expected to contribute; how and when their contributions will be collected, evaluated, and used; and what participants must do to receive any monetary or non-monetary incentives that may be provided. Once the initiative begins, the agency and any partners should use websites, question-and-answer sessions, emails, and other forms of communication to keep participants apprised of progress. Through the literature and our interviews we also found that agencies and their partners can actively engage participants to solicit and respond to any questions, comments, and feedback, and provide any necessary assistance. These actions can increase the likelihood that participants will have a positive experience, and can help show that their participation and contributions are valued. According to experts and agency officials with whom we consulted, however, doing this can be a very resource-intensive activity, particularly if the initiative has a large number of participants and there is a high volume of communication from participants. Therefore, during the planning phase, the agency and any partners should work together to ensure that the party responsible for this aspect of implementation has sufficient capacity to respond in a timely fashion. Agency officials highlighted that the agency and any partners should also use regular check-ins to discuss the progress of the initiative. 
Such check-ins can help ensure those involved in implementation know the status of specific implementation tasks against established time frames, and any decisions that may be needed. The agency and partners should also review the data and feedback that are being collected during implementation. This will allow them to identify and make any necessary adjustments to improve implementation and the experience of the participants. As illustrated below, HUD, DOE, and HHS engaged participants and partners during the implementation of three open innovation initiatives. According to HUD officials and an April 2015 report from OSTP on the implementation of federal prize competitions, HUD's objective for its outreach to potential participants for Rebuild by Design (RBD) was to recruit world-class design talent to participate in the competition. It used its network of project partners, professional associations, and university programs, as well as websites focused on planning, design, and urban issues, to promote the competition. For example, the American Institute of Architects launched a communications campaign urging its membership to participate in RBD. According to the April 2015 OSTP report, this outreach was successful, as HUD ultimately received high-quality proposals from 148 teams representing top engineering, architecture, and design firms. According to HUD officials, after 10 design teams were selected to participate in RBD, HUD and its partners regularly communicated with the teams to identify challenges they faced and assistance that they needed. HUD officials explained that RBD was designed to allow more than one winner, as each finalist team worked to develop innovative approaches for rebuilding and resilience in a different community. As a result, the RBD management team facilitated collaboration among the design teams. This allowed the teams to share good practices and learn from each other's experiences. 
According to a 2014 evaluation of RBD conducted by the Rockefeller Foundation, HUD's local administering partners supporting RBD's implementation also worked closely with the design teams and provided logistical support and connections to community-based organizations and public officials. To ensure clarity about reporting requirements and deadlines, those managing RBD also instituted other means of communication. This included biweekly memorandums for the design teams and weekly phone and e-mail communications with partner organizations providing support to teams at the local level. The Rockefeller Foundation also reported that effective management practices and regular communication allowed the design teams to meet all procedural deadlines and milestones despite the initiative's fast pace and logistical challenges. In the Communications and Outreach Plan developed for the Wave Energy Prize (WEP), DOE's Wind and Water Power Technologies Office (WWPTO) set a goal to expand the community of developers involved in wave energy conversion technology. It sought to do this by drawing in both experienced energy device developers and newcomers representing a diverse group of companies, universities, and individuals. According to WWPTO officials, to generate a large pool of new and experienced developers for the competition, which began in April 2015 and is scheduled to conclude in November 2016, they used multiple outlets and venues to encourage WEP participation. As outlined in the Communications and Outreach Plan for the competition, this included the WEP website, social media, email marketing, presentations, and outreach to various media outlets to reach a broad range of potential participants. Communications used to recruit participants also emphasized several key messages to motivate interested individuals and teams to participate. 
These messages included the availability of a monetary prize, the opportunity to help solve a difficult technological problem, and the chance to work on technologies that could contribute to the nation's energy independence. See figure 6 for examples of these communications. According to WWPTO officials, to ensure there would be participants with technical expertise in energy production technology, they reached out to individuals who had previously contacted WWPTO regarding other projects involving wind and water power. WWPTO also promoted the competition through specific industry publications, outreach to professional and academic organizations focused on relevant technical specialties, and presentations at energy technology-oriented conferences. According to WWPTO officials, through this outreach, they attracted both new and experienced developers to participate in WEP. Of the 92 teams that registered to participate in WEP, most were previously unknown to WWPTO. Furthermore, of the nine finalists and two alternates that were chosen to participate in the final phases of the competition, only two had received any prior funding from WWPTO. WWPTO officials also reported that they were successful in attracting teams with sufficient technical expertise to meet aggressive technical goals. According to information on the competition website from March 2016, while the devices of finalist teams are currently undergoing final building and testing, preliminary evaluations indicate that many of them could achieve or exceed WWPTO's goals for the competition. According to WWPTO officials, the prize administration team has also created processes to regularly engage with teams participating in the competition. For example, the prize administration team holds biweekly calls with participating teams and technical experts. 
These calls are used to prepare teams for the final testing program, solicit and respond to participants' questions and comments, and provide any necessary technical assistance. According to WWPTO officials, these interactions can be time- and resource-intensive, so they planned for them during the early phases of the competition. This ensured that the prize administration team allocated sufficient resources to fulfill its participant management responsibilities. Furthermore, WWPTO and the prize administration team hold weekly conference calls to discuss progress on key tasks and any adjustments that may be needed. These check-ins help ensure that WWPTO and the prize administration team are working from a common set of expectations. They also allow WWPTO to provide the prize administration team with any information it needs to successfully implement WEP. OpenFDA is an open data platform released by the Food and Drug Administration (FDA) in June 2014. FDA, an agency within HHS responsible for assuring the safety of drugs, medical devices, and food, uses OpenFDA to make several key datasets available in a format that allows researchers and developers to more easily use the data. According to an August 2014 report from Iodine, a private health data company that assisted FDA in the development of OpenFDA, as the platform was developed and became available for testing, FDA officials actively engaged potential users. The officials solicited input from a group of individuals and organizations that had expressed interest in the platform and were willing to contribute feedback. The report also stated that FDA officials observed that some of those testing the platform had difficulty using it. As a result, FDA took actions to make the platform more user-friendly. These actions included adding an interactive tool that allows users to filter and visualize the data more intuitively. 
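The OpenFDA platform described above makes its datasets available through a public web API whose endpoints accept query parameters such as `search` and `limit`, per openFDA's published documentation. The short Python sketch below illustrates how a developer might construct such a query; the helper function and example values are our own and are illustrative only, not part of FDA's materials.

```python
from urllib.parse import urlencode

OPENFDA_BASE = "https://api.fda.gov"

def build_openfda_query(endpoint, search=None, limit=10):
    """Build a query URL for an openFDA dataset endpoint.

    endpoint -- a dataset path such as "drug/event" or "food/enforcement"
    search   -- an openFDA search expression,
                e.g. "patient.drug.medicinalproduct:ibuprofen"
    limit    -- maximum number of records to return per request
    """
    params = {"limit": limit}
    if search:
        params["search"] = search
    # urlencode percent-encodes reserved characters (":" becomes "%3A")
    return f"{OPENFDA_BASE}/{endpoint}.json?{urlencode(params)}"

# Example: request the first 5 adverse-event reports mentioning ibuprofen.
url = build_openfda_query("drug/event",
                          search="patient.drug.medicinalproduct:ibuprofen",
                          limit=5)
print(url)
```

Fetching such a URL returns results as JSON; the interactive filtering tool FDA added serves users who prefer not to compose these queries by hand.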
According to an FDA official, these changes permitted OpenFDA users without technical expertise to more easily use and benefit from the platform. In addition, FDA officials actively monitored the online forums created for users of OpenFDA, and responded to any requests for clarity or information. According to FDA officials, engagement with users has been a priority. Through direct contact with the community of users, the agency has collected information to help ensure OpenFDA will serve their needs. For instance, in December 2015, FDA made the data on OpenFDA available for direct download as a result of requests from users. In June 2016, FDA also launched an updated version of OpenFDA that was redesigned in response to user feedback. This feedback included the need to improve the website’s layout. Relevant literature and agency officials emphasized that after the initiative has concluded, or at regular intervals if it is a long-standing or continuous effort, the agency should assess whether the initiative has achieved its goals. By analyzing the data it has collected, including quantitative performance data and qualitative data provided by participants on the effects of an initiative, the agency can determine if it has met its goals. When a goal is unmet, the agency should conduct additional analyses to understand why. In addition, because some outcomes may not be observable until months or years later, agencies can consider whether a long-term monitoring or assessment plan is needed and appropriate. According to relevant literature we reviewed, the agency should also conduct an after-action review to analyze feedback from partners and participants. Such a review can help identify lessons learned and process improvements that could be applied in future initiatives. For example, participant feedback may provide insights on parts of the process that went well and others that could have been executed better. 
These can then be replicated or adjusted accordingly for recurring or similar initiatives in the future. The agency can also engage with partners to review planning and implementation activities to identify what worked well and any notable gaps or challenges that may need to be addressed in future initiatives. Lastly, relevant literature and experts emphasized that once the agency has assessed the initiative, it should publicly report on the results achieved and lessons learned. This transparency can help build trust with partners and participants, demonstrate the value of open innovation initiatives to other stakeholders and the public, and build momentum for future initiatives. Reporting results while partners and participants are still engaged can also help sustain a dialogue and increase awareness within the community of interested organizations and individuals. For the following three open innovation initiatives, we present how DOT, NASA, and DOE collected data, and assessed and reported results. The Federal Highway Administration's (FHWA) Every Day Counts (EDC) initiative focuses on ensuring that proven innovations to improve highway construction and safety are quickly and broadly deployed. FHWA launched EDC in 2009. FHWA tracks progress toward this goal primarily by measuring the number of states that are deploying specific innovations being supported by EDC, along with whether the innovation is being developed, tested, assessed, or adopted as a standard practice. According to FHWA officials, staff from the Center for Accelerating Innovation (Center), which is responsible for implementing EDC, and deployment teams use these data to track how the level of deployment compares with goals established at the beginning of each 2-year cycle. Figure 7 shows the January 2015 baseline data for the e-Construction innovation, the progress made through December 2015, and the overall goal the agency is working to achieve by December 2016. 
According to an FHWA official, staff from the Center work with deployment teams to develop implementation plans for each innovation, which include identifying interim performance goals that will be used by the team to track implementation progress. FHWA officials said that setting specific performance goals for deployment helps to ensure accountability for the advancement of innovations. For instance, the Director of the Center meets with the leader of each deployment team each quarter to review progress toward established goals. According to an FHWA official, these review meetings can result in the provision of additional resources or assistance to deployment teams, or, in some circumstances, adjustments to team leadership. FHWA has also established regular reporting cycles for EDC. It releases two progress reports each year that summarize the status of each innovation. In addition, it has produced a final report at the end of previous 2-year cycles summarizing the highway community's accomplishments and progress. The final report includes data on how widely each innovation was deployed, accomplishments in states where innovations were deployed, and explanations of benefits and lessons learned through implementation. According to FHWA officials, publicly reporting results increases transparency and shows the effects of the EDC program. It also highlights successes achieved by state and local agencies in deploying innovations faster. For instance, FHWA reported in its July to December 2015 EDC progress report that the program has accelerated the deployment of innovations across the country. Every state has implemented at least 8 of the 38 innovations promoted under the initiative since 2010, and some have adopted more than 20. Furthermore, in August 2014, FHWA released a report with examples demonstrating that implementing EDC innovations has had significant and measurable effects in participating states. 
For example, FHWA reported that deploying Accelerated Bridge Construction as an EDC innovation has allowed states to reduce the time it takes to plan and construct bridges by years. This significantly reduces traffic delays, road closures, and often project costs. In 2015, Congress enacted and the President signed into law a requirement that FHWA continue to use EDC to work with states, local transportation agencies, and industry stakeholders to identify and deploy proven innovative practices and products. NASA worked with Expert and Citizen Assessment of Science and Technology (ECAST), a network of institutions that encourages public input on science and technology policy issues, to solicit the views of citizens on options for defending the Earth against an asteroid strike and exploring asteroids. These in-person and online forums, known collectively as the Asteroid Initiative Citizen Forums, took place in November 2014 and February 2015. The forums were used to obtain information on participant preferences, priorities, and values. NASA officials used this input to inform, among other things, decisions about its future mission and technology investment goals. This includes detecting asteroids, mitigating asteroid threats, and exploring asteroids with astronauts. For example, after the forums were held, relevant results were shared with NASA managers to inform the selection of a specific technology and approach that would be used for a future mission to capture an asteroid. According to NASA officials, the results of these forums provided NASA with insights into public understanding and views on NASA’s asteroid work. Figure 8 illustrates the platforms used for both the in-person and online forums. According to NASA officials, NASA also wanted to use the forums to identify lessons that could guide its future efforts to engage citizens. To do this, ECAST had participants complete post-forum surveys and provide written comments on their experience in the forum. 
ECAST then analyzed this information. Observers at selected tables also helped assess the meaning of written comments from participants. Through an after-action review, ECAST identified a small number of issues to address in preparation for any future forums. These included insights into the ability of citizens to understand complex information, and the need to provide clearer information to participants about how the forum results would be used. Members of ECAST involved in designing and implementing the forums also summarized their observations on potential refinements in their final report to NASA. For example, ECAST found that connecting attendee background information to individual responses could have also provided context for interpreting the written results. ECAST members also found they needed more time to test background materials given to participants to read before the event, and needed to take additional steps to increase consistency across table facilitators. DOE's SunShot Catalyst initiative (Catalyst) was a series of competitions launched in May 2014. It was designed to engage entrepreneurs, solar professionals, and software and data experts to help them rapidly develop start-up companies with viable technologies to address identified challenges in the solar and energy efficiency markets. By providing intensive training and support to those with the most promising ideas, DOE officials also wanted to ensure that teams would have market-ready innovations and viable business plans at the end of the competition. According to DOE officials, DOE selected 35 teams to participate in the initiative. According to a DOE official, in order to determine the initiative's effectiveness, DOE developed a long-term effort to monitor the status of the companies created through the competition. DOE officials said that they collected publicly available information on the status of the 35 teams that participated in Catalyst. 
DOE officials reported in June 2016 that through collecting this information they found that 28 teams were still actively pursuing their startups. DOE officials also invited all 35 teams to one-on-one discussions, and were able to meet with 24 of them. Through these discussions, DOE collected information on the amount of capital the teams had raised, projected annual revenues, and the benefits they gained from participating in Catalyst. For example, the 24 teams reported that they had collectively raised a total of $6.4 million in private capital or public funding, had 95 full-time employees, and had total expected annual revenue of $5.6 million. DOE also reported that officials identified specific lessons learned at each stage of the Catalyst process that can be used to inform how future competitions are conducted, and to improve and expand the Catalyst program. For example, DOE reported that the most effective way to reach potential Catalyst participants was through the networks of previous participants, along with recruitment efforts involving local partners and events. DOE also reported that the 60-day period provided for participants to develop their prototypes was challenging, and that the department should consider adding time to that phase of the competition. DOE also compared the cost and time for product development under the Catalyst approach with those for products supported through DOE's traditional financial assistance awards, including cooperative agreements and grants. According to DOE officials, the agency had learned through earlier efforts to engage developers that the application process for traditional funding opportunities can create a barrier for those who may not be interested in or able to go through what can be seen as an extensive review and approval process. DOE wanted to use Catalyst to test a faster, more open way of engaging developers and entrepreneurs. 
Through its assessment, DOE found that, under a traditional funding opportunity, it typically takes 9 months to move from the announcement of the opportunity to the award being made, with minimum awards ranging from $300,000 to $500,000 for software or applications. By contrast, for Catalyst, this process was completed in 3 months, with $25,000 prizes awarded to rapidly test and validate prototypes. Given the time and resources that agencies may invest to build or enhance communities of partners and participants for open innovation initiatives, agencies can take steps to sustain these connections over time. This is particularly important if one purpose of the initiative is to build a new, or bring greater coherence to an existing, community of interested organizations and individuals to work together on an issue. However, this may be less applicable when an initiative is discrete in scope and intended to be a one-time occurrence. Seek to maintain communication with, and promote communication among, members of the community. According to relevant literature and agency officials, agencies should acknowledge and, where appropriate, reward the efforts and achievements of partners and participants so that they feel their contributions are valued and appreciated. This can be done in conjunction with reporting the results of and lessons learned from the initiative, or through separate venues such as announcements, award ceremonies, or recognition on the initiative website. As part of this effort, it is also important for agencies to explain how the contributions of partners and participants helped the agency achieve, or progress toward, its goals, and to communicate the next steps that will be taken following an initiative. 
Relevant literature and experts we consulted also highlighted that agencies can seek ways to maintain communication with members of the community to keep them informed of future initiatives and other opportunities of interest, and facilitate communication within the community. To ensure these activities receive sufficient attention over time, an agency may need to assign staff the responsibility of maintaining contact with these communities. Efforts to sustain a community over time can help enhance collaboration to continue progress on addressing an issue, and provide the agency with a network that could be more easily mobilized again for future initiatives. At some point, these communities may become self-sustaining, with members continuing to collaborate with little or no involvement from the agency. To illustrate how agencies have built and sustained communities of interested partners and participants by implementing open innovation initiatives, we provide the following three examples from EPA, NASA, and HHS. From 2012 to 2015, EPA’s Office of Research and Development held a series of air pollution sensor workshops that were, according to EPA officials, designed to better understand the needs of governments and community groups interested in using these sensors, and to build a more coherent community of users and developers. EPA held the first workshop in March 2012. It provided a forum for the exchange of ideas and collaboration among people who use and research air pollution sensors to learn from their successes and challenges. Seventy people representing federal agencies, state and local governments, academia, private industry, and community-based organizations attended the workshop. According to EPA officials, workshop attendees agreed that it was helpful to have EPA convene these groups so that they could learn from each other, and build greater trust and understanding through collaboration and communication. 
Subsequent workshops held from 2013 to 2015 focused on specific issues, including data quality, citizen science, and community-based monitoring. In addition to in-person attendance, the workshops were also broadcast as webinars to allow those unable to attend in person to participate. Each year, there was increased interest in the workshops. More than 800 people participated in the 2015 workshop, both in person and via the webinar. The 2015 event, which was used to provide training on how to conduct community air monitoring, was, according to EPA officials, designed to build on the three previous annual workshops, whose participants requested more hands-on training opportunities. EPA officials reported that these regular workshops helped sustain this growing community, providing opportunities to build partnerships and identify and address stakeholder needs. In addition to these workshops, EPA officials continue to share information and resources to keep individuals in the community engaged in efforts to develop and deploy improved air sensors. For example, after the 2015 workshop, EPA officials told us that they hosted regular follow-up conference calls with 30 in-person attendees chosen because of their involvement in air-monitoring projects in local communities. EPA officials also said they periodically e-mail past workshop participants to inform them about webinars, funding opportunities, and other items of interest. In addition, according to documentation from EPA, officials have regularly given presentations to stakeholder groups on community air monitoring. EPA has also made resources available to support sensor developers and citizen scientists, establishing a sensor technology testing program to provide feedback to developers and users, and an online "Air Sensor Toolbox" that offers training videos and answers to frequently asked questions about community air monitoring. 
According to EPA officials, these efforts to build and sustain the community interested in air sensor technology have contributed directly to EPA's strategic goal to improve air quality. Prior to the series of workshops, EPA had little in terms of ongoing research related to the development, testing, and use of air sensor technology. The workshops identified a need for, and inspired, a concerted research effort to identify promising technologies, evaluate technology performance in field and laboratory tests, and explore the use of these new technologies and the data they produce. Since 2012, NASA has held an annual 2-day event called the International Space Apps Challenge. At this event, teams of scientists, developers, and students use publicly available data to design solutions to identified challenges. According to NASA's report on the 2015 event, Space Apps is used to make the agency's open data and assets available to the public, with the aim of giving people new ways to produce relevant open-source solutions to global challenges. According to a NASA report on the Space Apps Challenge and a NASA official, beginning with local Space Apps events in 25 cities around the world in 2012, the number of local events has increased each year. In 2016, local events were held in 161 locations spanning 61 countries. NASA relies on local volunteer hosts to secure venues, manage logistics, and promote the events. Given the importance of sustaining relationships with those at the local level experienced in hosting these events, NASA has provided tools to help ensure local hosts have a positive experience. For example, NASA created a toolkit that provides prospective hosts with practical advice, guidelines, and best practices for hosting a local event. According to the NASA Space Apps report, 3 months prior to the event, NASA staff begins to actively engage with those organizing local events. 
They provide weekly suggestions, reminders, and resources to help hosts plan and manage their local events. NASA staff also convenes periodic planning conference calls with local hosts to communicate new information and answer questions. According to a NASA official, actively engaging through planning calls makes a significant difference. New hosts can ask questions of, and learn from, experienced hosts and NASA staff. It also allows local hosts to share ideas and advice with one another. This official also stated that, by engaging with the community, NASA can learn more about the support that local hosts need and collect their suggestions. For example, rather than having one planning call each week, NASA holds three different calls to accommodate the varying schedules of local hosts in different time zones. According to the NASA Space Apps report, after the completion of each year’s event, NASA also acknowledges and honors hosts and winning participants by recognizing them on the Space Apps website, in public reports, and through other venues, like invitations to launches. Figure 9 provides an excerpt from the website used to acknowledge finalists and winners from previous challenges. According to a NASA official, all of these elements combined have helped them maintain a strong level of involvement by hosts at the local level. For example, she said 78 percent of the hosts for 2016 local events had returned after hosting events in previous years. She also stated that, by regularly engaging with a community of people using the agency’s data, Space Apps has helped NASA meet its open data goals and mandates. For example, through feedback from Space Apps participants, NASA officials learned how difficult it could be to use the agency’s open data. This led to action to improve the usefulness of the datasets and house them in one location to make them more accessible. 
NASA also used feedback from Space Apps participants to redesign the agency’s websites and make it easier for visitors to understand and use NASA data. According to FDA officials, one of the key goals of the OpenFDA initiative is to build an open community of users around FDA data. OpenFDA developers emphasized the importance of direct contact with, and feedback from, external users. However, due to resource limitations, the developers knew it would be difficult to actively monitor online feedback boards and regularly address individual questions or requests. They wanted to create infrastructure to both support users and make the community somewhat self-sustaining. According to FDA officials, they believed that an engaged community would help provide resources and assist new users. According to an August 2014 report from Iodine, the private health data company that assisted FDA with the development of OpenFDA, the infrastructure that FDA put in place relies upon two online forums, StackExchange and GitHub. These forums facilitate communication and information sharing among members of the community. They allow developers and researchers who use OpenFDA to ask questions of the broader community of users, and get answers to those questions. This allows lessons learned and insights to be spread among the community. According to FDA officials, these forums also allow users to recommend fixes to problems with, and make improvements to, the OpenFDA source code. According to data available on the StackExchange and GitHub websites, both forums have been actively used. For example, since OpenFDA’s June 2014 launch, members of the community of users have submitted more than 100 questions on the StackExchange forum. Nearly 90 percent of those questions have been answered by other members of the community. According to an FDA official, since OpenFDA’s launch GitHub has also been used to identify nearly 50 issues with the OpenFDA platform. 
As of June 2016, 39 of those issues have been addressed. We provided a draft of the report to the Office of Management and Budget, the Office of Science and Technology Policy, the General Services Administration, the Departments of Energy, Health and Human Services, Housing and Urban Development, and Transportation, the Environmental Protection Agency, and the National Aeronautics and Space Administration for comment. These nine agencies provided responses via emails transmitted between September 16 and September 27, 2016. All nine agencies concurred with the findings of the report. In its response, provided in an email from the OSTP General Counsel transmitted on September 22, 2016, OSTP raised a concern that the report does not include an example of an initiative that only involved the use of citizen science. Our primary objective for this report was to identify, and illustrate through selected agency examples, practices that promote the effective implementation of open innovation strategies. Therefore, our focus was on selecting those initiatives with the greatest potential to illustrate aspects of these practices. In making those selections, we ensured that the sample covered the five types of open innovation strategies frequently used by federal agencies. Although we did not include an initiative that only used citizen science, we included initiatives that involved the use of citizen science in combination with other strategies, such as NASA’s Asteroid Data Hunter initiative and EPA’s efforts to encourage the use of air pollution sensors. In addition, DOE, HHS, HUD, NASA, and OSTP provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to interested congressional committees, the heads of the agencies identified above and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix III. The GPRA Modernization Act of 2010 includes a provision for us to periodically review how implementation of its requirements is affecting agency performance. This report is part of our response to that mandate. Our specific objective for this report is to identify, and illustrate through selected agency examples, practices that facilitate the effective implementation of open innovation strategies and the effects, if any, the use of those strategies has had on agency performance and opportunities for citizen engagement. To identify the various open innovation strategies federal agencies have used to facilitate participation by, and collaboration with, citizens and other non-profit, academic, and private sector partners, we reviewed documents, reports, and resources from the Office of Management and Budget (OMB), the Office of Science and Technology Policy (OSTP), and the General Services Administration (GSA), and analyzed the Open Government Plans of federal agencies. Through our review of these reports, and the most recent open government plans from 35 agencies, we identified 5 open innovation strategies that agencies have frequently used to engage citizens and external stakeholders. To identify practices that can facilitate the effective implementation of open innovation strategies, we analyzed and synthesized information gathered from a number of different sources. First, we collected relevant federal resources, including guidance with suggested practices for implementing various open innovation strategies developed by OMB, OSTP, and GSA. 
Through a literature review of relevant publications from public and business administration journals and research organizations, we identified publications with suggested practices for the design and implementation of open innovation initiatives in the public sector. We then analyzed and synthesized suggested practices in these sources to identify areas of commonality between them. We interviewed 14 open innovation experts with experience in implementing open innovation initiatives or with academic or consultative expertise in this area. We also interviewed officials involved in implementing open innovation initiatives at six selected agencies, as well as staff from OMB, OSTP, and GSA. We initially selected and interviewed experts based on the results of our literature review (e.g., the authors of relevant articles or books with suggested practices for the design and implementation of open innovation initiatives). Based on suggestions from those individuals, we expanded our list of experts and conducted additional reviews. Through our analysis and expert interviews, we developed a broad set of practices that facilitate the effective implementation of open innovation initiatives. We refined the list of practices through our audit work at selected agencies (see below); reviewing our body of work on performance management and collaboration; and incorporating feedback from the open innovation experts we had previously interviewed, and from knowledgeable federal officials at OSTP, GSA, and other agencies. 
To illustrate how actions that selected agencies have taken to carry out open innovation initiatives have reflected effective practices, and the effects the application of these practices had on agency performance and citizen engagement, we selected six agencies for more in-depth review: the Departments of Energy, Health and Human Services, Housing and Urban Development, and Transportation (DOT); the Environmental Protection Agency; and the National Aeronautics and Space Administration. We selected these agencies based on several criteria, including the number and variety of open innovation strategies outlined in their individual agency open government plans. These selections were also in line with suggestions we independently obtained from knowledgeable staff at OMB, OSTP, and GSA that were familiar with agencies that have actively used such strategies. We also identified and selected 15 specific open innovation initiatives led by these 6 agencies which would allow us to illustrate how these agencies have applied effective practices for implementing open innovation initiatives. We selected these initiatives based on our review of the open government plans for the 6 selected agencies, and of OSTP reports on the implementation of prize competitions and challenges. Suggestions from knowledgeable agency staff also contributed to our selection process. These initiatives are listed below in table 4. At these agencies, we reviewed relevant agency documents and interviewed knowledgeable agency officials responsible for designing and implementing these selected initiatives. We asked these officials how they defined goals and selected specific strategies, how they designed and implemented their initiatives, and what steps they took to collect data and assess results. These interviews allowed us to capture detailed illustrations showing how agencies took actions that reflect aspects of effective practices in the implementation of their initiatives. 
The scope of this review was to identify practices for the effective implementation of open innovation initiatives, and to describe actions agencies took in carrying out open innovation initiatives that reflect aspects of those practices. While we present information on the implementation of agency open innovation initiatives, we did not assess the success of the underlying agency programs and activities that these initiatives were designed to support. For example, while we examined the implementation of DOT’s open dialogues on freight transportation, we have ongoing work reviewing various DOT activities related to issues mentioned in the draft National Freight Strategic Plan and have not evaluated the plan nor determined its effectiveness in helping DOT meet its freight goals. We conducted this performance audit from July 2015 to October 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Based on our review of agency open government plans and other sources, we found that agencies have frequently used the five open innovation strategies below to collaborate with citizens and external parties, and encourage their participation in agency efforts. Crowdsourcing and Citizen Science. In crowdsourcing, agencies submit an open call, generally through the Internet, for voluntary assistance from a large group of individuals to complete defined tasks. This can help the agency complete projects, such as transcribing large numbers of historical documents, while also producing usable products that benefit the broader community, like searchable databases. 
Similarly, agencies can use citizen science to encourage members of the public to voluntarily assist with science-related tasks. Such tasks can include conducting experiments, making observations, collecting and analyzing data, and interpreting results. This can supplement an agency’s own data collection efforts. It also allows agencies to study complex issues by conducting research at large geographic scales and over long periods of time in ways that professional scientists working alone cannot easily duplicate. Idea Generation (Ideation). In idea generation, or ideation, an agency asks participants to submit ideas to address a specific issue or problem, and may allow them to provide comments on ideas submitted by other participants, and vote to express their support for an idea. Open Data Collaboration. In open data collaboration, an agency mobilizes participants to share, explore, and analyze publicly available data sets. Examples of open data collaboration may include using open data to conduct research, design data visualizations, or create web and mobile applications and websites that help people access and use the data. Participants can also be mobilized through in-person or online events, often referred to as “data jams” or “hackathons,” or through websites that provide access to open data and facilitate ongoing communication. Open Dialogue. In an open dialogue, an agency collects and responds to information, observations, and perspectives provided by a range of citizens and other external experts and stakeholders. They can do this using online tools, including websites or interactive webinars, and in-person meetings or forums. The agency can also use open dialogues to request input and suggestions on a set of options under consideration, and to better understand the values, perspectives, and preferences of citizens and stakeholders. Prize Competition or Challenge. 
When an agency identifies a problem to solve or a specific goal it wants to achieve with the assistance of members of the public, it can hold a prize competition or challenge. In a competition or challenge, the agency invites interested members of the public to submit potential solutions to this problem or challenge. The agency then evaluates these proposals and provides a monetary or non-monetary award for those that meet specific criteria and are selected as winners. In addition to the contact named above, Benjamin T. Licht (Assistant Director) and Adam Miles supervised the development of this report. Theodore Alexander, Joyce Y. Kang, Steven Putansu, Lauren Shaman, Erik Shive, Wesley Sholtes, and Andrew J. Stephens made significant contributions to this report. Sarah Gilliland, Robert Robinson and Stewart Small also made key contributions. Shea Bader, Giny Cheong, Jeffrey DeMarco, Alexandra Edwards, Anthony Patterson, and Timothy Shaw verified the information in the report. Ballentyne, Perrie. Challenge Prizes: A Practice Guide. United Kingdom: Nesta, 2014. Brabham, Daren C. Crowdsourcing in the Public Sector. Georgetown Digital Shorts. Washington, D.C.: Georgetown University Press, 2015. Brabham, Daren C. Using Crowdsourcing in Government. Washington, D.C.: IBM Center for the Business of Government Collaborating across Boundaries Series, 2013. Department of Health and Human Services IDEA Lab. The HHS COMPETES Playbook, accessed on January 12, 2016, http://www.hhs.gov/idealab/what-we-do/hhs-competes. Desouza, Kevin. Challenge.gov: Using Competitions and Awards to Spur Innovation. Washington, D.C.: IBM Center for the Business of Government Using Technology Series, 2012. Eggers, William D. and Paul MacMillan. “A Billion to One: The Crowd Gets Personal.” United Kingdom: Deloitte Review issue 16 (2015). Federal Public Participation Working Group. U.S. Public Participation Playbook, 2015, accessed on July 27, 2015, https://participation.usa.gov. 
Goldhammer, Jesse, Kwasi Mitchell, Anesa “Nes” Parker, Brad Anderson, and Sahil Joshi. “The Craft of Incentive Prize Design: Lessons from the Public Sector.” Deloitte University Press, June 2014. Kannan, P. K. and Ai-Mei Chang. Beyond Citizen Engagement: Involving the Public in Co-Delivering Government Services. Washington, D.C.: IBM Center for the Business of Government Collaborating across Boundaries Series, 2013. King, Andrew and Karim R. Lakhani. “Using Open Innovation to Identify the Best Ideas.” MIT Sloan Management Review, September 11, 2013. Lee, Gwanhoo. Federal Ideation Programs: Challenges and Best Practices. Washington, D.C.: IBM Center for the Business of Government Using Technology Series, 2013. Lee, Gwanhoo and Young Hoon Kwak. An Open Government Implementation Model: Moving to Increased Public Engagement. Washington, D.C.: IBM Center for the Business of Government Using Technology Series, 2011. Luciano, Kay. Managing Innovation Prizes in Government. Washington, D.C.: IBM Center for the Business of Government Collaborating across Boundaries Series, 2011. Lukensmeyer, Carolyn J., Joe Goldman, and David Stern. Assessing Public Participation in an Open Government Era: A Review of Federal Agency Plans. Washington, D.C.: IBM Center for the Business of Government Fostering Transparency and Democracy Series, 2011. McKinsey & Company. “And the Winner is…” Capturing the Promise of Philanthropic Prizes. McKinsey & Company, July 2009. Mergel, Ines. “Opening Government: Designing Open Innovation Processes to Collaborate with External Problem Solvers.” Social Science Computer Review vol. 33, no. 5 (2015): 599-612. Mergel, Ines and Kevin Desouza. “Implementing Open Innovation in the Public Sector: The Case of Challenge.gov.” Public Administration Review vol. 73, no. 6 (November/December 2013): 882–890. Nabatchi, Tina and Matt Leighninger. “Participation Scenarios and Tactics,” Public Participation for 21st Century Democracy. Hoboken, NJ: 2015, 241–285. 
Nambisan, Satish. Transforming Government through Collaborative Innovation. Washington, D.C.: IBM Center for the Business of Government Innovation Series, 2008. Nambisan, Satish and Priya Nambisan. Engaging Citizens in Co-Creation in Public Services: Lessons Learned and Best Practices. Washington, D.C.: IBM Center for the Business of Government Collaborating across Boundaries Series, 2013. Noveck, Beth Simone. Smart Citizens, Smarter State: The Technologies of Expertise and the Future of Governing. Cambridge, MA: Harvard University Press, 2015. Office of Management and Budget, Executive Office of the President of the United States. The Common Approach to Federal Enterprise Architecture. Washington, D.C.: May 2, 2012. Office of Science and Technology Policy, General Services Administration, and Federal Crowdsourcing and Citizen Science Community of Practice. “Federal Crowdsourcing and Citizen Science Toolkit,” adaptation of Bonney et al., “Citizen Science: A Developing Tool for Expanding Science Knowledge and Scientific Literacy.” BioScience 59(11), 977-984 (2009), accessed on January 26, 2016. https://crowdsourcing-toolkit.sites.usa.gov/howto. Tong, Raymond and Karim R. Lakhani. Public-Private Partnerships for Organizing and Executing Prize-Based Competitions, Research Publication no. 2012-13. Cambridge, MA: The Berkman Center for Internet and Society at Harvard University, June 2012.
To address the complex and crosscutting challenges facing the federal government, agencies need to effectively engage and collaborate with those in the private, nonprofit, and academic sectors, other levels of government, and citizens. Agencies are increasingly using open innovation strategies for these purposes. The GPRA Modernization Act of 2010 (GPRAMA) requires federal agencies to identify strategies and resources they will use to achieve their goals. GPRAMA also requires GAO to periodically review how implementation of its requirements is affecting agency performance. This report identifies and illustrates practices that help agencies effectively implement open innovation strategies, and how the use of those strategies has affected agency performance and opportunities for citizen engagement. To identify these practices, GAO analyzed relevant federal guidance and academic literature, and interviewed open innovation experts. To refine and illustrate the practices, GAO reviewed documents and interviewed officials from the Office of Management and Budget, Office of Science and Technology Policy, General Services Administration, and six selected federal agencies. GAO selected the agencies and a sample of their initiatives based on several factors, including the number and type of initiatives outlined in their Open Government Plans. Open innovation involves using various tools and approaches to harness the ideas, expertise, and resources of those outside an organization to address an issue or achieve specific goals. GAO found that federal agencies have frequently used five open innovation strategies to collaborate with citizens and external stakeholders, and encourage their participation in agency initiatives. GAO identified seven practices that agencies can use to effectively implement initiatives that involve the use of these strategies: Select the strategy appropriate for the purpose of engaging the public and the agency’s capabilities. 
Clearly define specific goals and performance measures for the initiative. Identify and engage external stakeholders and potential partners. Develop plans for implementing the initiative and recruiting participants. Engage participants and partners while implementing the initiative. Collect and assess relevant data and report results. Sustain communities of interested partners and participants. Aspects of these practices are illustrated by the 15 open innovation initiatives GAO reviewed at six selected agencies: the Departments of Energy, Health and Human Services, Housing and Urban Development, and Transportation (DOT); the Environmental Protection Agency; and the National Aeronautics and Space Administration (NASA). For example: With the Asteroid Data Hunter challenge, NASA used a challenge and citizen science effort, beginning in 2014, to improve the accuracy of its asteroid detection program and develop an application for citizen scientists. Since 2009, DOT’s Federal Highway Administration has used an ideation initiative called Every Day Counts to identify innovations to improve highway project delivery. Teams of federal, state, local, and industry experts then implement the ideas chosen through this process.
Under the defined standard benefit in 2009, beneficiaries subject to full cost-sharing amounts paid out-of-pocket costs during the initial coverage period that included a deductible equal to the first $295 in drug costs, followed by 25 percent coinsurance for all drugs until total drug costs reached $2,700, with beneficiary out-of-pocket costs accounting for $896.25 of that total. (See fig. 1.) This initial coverage period is followed by a coverage gap—the so-called doughnut hole—in which these beneficiaries paid 100 percent of their drug costs. In 2009, the coverage gap lasted until total drug costs—including the costs accrued during the initial coverage period—reached $6,153.75, with beneficiary out-of-pocket drug costs accounting for $4,350 of that total. This point is referred to as the catastrophic coverage threshold. After reaching the catastrophic coverage threshold, beneficiaries taking a specialty tier-eligible drug paid 5 percent of total drug costs for each prescription for the remainder of the year. In addition to cost sharing for prescription drugs, many Part D plans also charge a monthly premium. In 2009, premiums across all Part D plans averaged about $31 per month, an increase of 24 percent from 2008. Beneficiaries are responsible for paying these premiums except in the case of LIS beneficiaries, whose premiums are subsidized by Medicare. We found that specialty tier-eligible drugs accounted for about 10 percent, or $5.6 billion, of the $54.4 billion in total prescription drug spending under Part D MA-PD and PDP plans in 2007. Prescriptions for LIS beneficiaries accounted for about 70 percent, or about $4.0 billion, of the $5.6 billion spent on specialty tier-eligible drugs under MA-PD and PDP plans that year. (See fig. 2.) The fact that spending on specialty tier-eligible drugs in 2007 was largely accounted for by LIS beneficiaries is noteworthy because their cost sharing is largely paid by Medicare. 
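The phased benefit structure described above amounts to a piecewise calculation of out-of-pocket (OOP) costs as a function of total drug costs. The sketch below is a simplified model of the 2009 defined standard benefit for a beneficiary responsible for full cost sharing; the function name and structure are ours, but the dollar parameters come from the figures above.

```python
def oop_2009_standard(total_drug_cost):
    """Approximate out-of-pocket (OOP) cost under the 2009 defined standard
    benefit for a beneficiary responsible for full cost sharing (no LIS)."""
    DEDUCTIBLE = 295.00       # beneficiary pays 100% of the first $295
    INITIAL_LIMIT = 2700.00   # then 25% coinsurance until TOTAL costs reach $2,700
    OOP_THRESHOLD = 4350.00   # catastrophic coverage begins at $4,350 in OOP costs

    t = total_drug_cost
    oop = min(t, DEDUCTIBLE)                             # deductible phase
    if t <= DEDUCTIBLE:
        return oop
    oop += 0.25 * (min(t, INITIAL_LIMIT) - DEDUCTIBLE)   # initial coverage period
    if t <= INITIAL_LIMIT:
        return oop
    # Coverage gap ("doughnut hole"): 100% of costs until OOP hits the threshold.
    gap_end = INITIAL_LIMIT + (OOP_THRESHOLD - oop)      # $6,153.75 in total costs
    oop += min(t, gap_end) - INITIAL_LIMIT
    if t <= gap_end:
        return oop
    return oop + 0.05 * (t - gap_end)                    # catastrophic coverage: 5%

print(oop_2009_standard(2700.00))     # 896.25 — OOP at the end of initial coverage
print(oop_2009_standard(6153.75))     # 4350.0 — OOP at the catastrophic threshold
```

At $2,700 in total drug costs the model returns the $896.25 beneficiary share cited above, and cumulative OOP reaches the $4,350 threshold exactly when total drug costs hit $6,153.75, matching the catastrophic coverage threshold described in the text.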
While only 8 percent of Part D beneficiaries in MA-PD and PDP plans who filed claims but did not use any specialty tier-eligible drugs reached the catastrophic coverage threshold of the Part D benefit in 2007, 55 percent of beneficiaries who used at least one specialty tier-eligible drug reached the threshold. Specifically, among those beneficiaries who used at least one specialty tier-eligible drug in 2007, 31 percent of beneficiaries responsible for paying the full cost sharing required by their plans and 67 percent of beneficiaries whose costs were subsidized by Medicare through the LIS reached the catastrophic coverage threshold. Most (62 percent) of the $5.6 billion in total Part D spending on specialty tier-eligible drugs under MA-PD and PDP plans occurred after beneficiaries reached the catastrophic coverage phase of the Part D benefit. For most beneficiaries—those who are responsible for paying the full cost-sharing amounts required by their plans—who use a given specialty tier-eligible drug, different cost-sharing structures can be expected to result in varying out-of-pocket costs during the benefit’s initial coverage period. However, as long as beneficiaries reach the catastrophic coverage threshold in a calendar year—as 31 percent of beneficiaries who used at least one specialty tier-eligible drug and who were responsible for the full cost-sharing amounts did in 2007—their annual out-of-pocket costs for that drug are likely to be similar regardless of their plans’ cost-sharing structures. During the initial coverage period, the estimated out-of-pocket costs for these beneficiaries for a given specialty tier-eligible drug are likely to vary, because some Part D plans may place the drug on a tier with coinsurance while other plans may require a flat copayment for the drug. 
For example, estimated 2009 out-of-pocket costs during the initial coverage period, excluding any deductibles, for a drug with a monthly negotiated price of $1,100 would range from $25 per month for a plan with a flat $25 monthly copayment to $363 per month for a plan with a 33 percent coinsurance rate. However, even if beneficiaries pay different out-of-pocket costs during the initial coverage period, their out-of-pocket costs become similar due to the coverage gap and the fixed catastrophic coverage threshold ($4,350 in out-of-pocket costs in 2009). (See fig. 3.) There are several reasons for this. First, beneficiaries taking equally priced drugs will reach the coverage gap at the same time—even with different cost-sharing structures—because entry into the coverage gap is based on total drug costs paid by the beneficiary and the plan, rather than on out-of-pocket costs paid by the beneficiary. Since specialty tier-eligible drugs have high total drug costs, beneficiaries will typically reach the coverage gap within 3 months in the same calendar year. Second, during the coverage gap, beneficiaries typically pay 100 percent of their total drug costs until they reach the catastrophic coverage threshold. This threshold ($4,350 in out-of-pocket costs) includes costs paid by the beneficiary during the initial coverage period. Therefore, beneficiaries who paid higher out-of-pocket costs in the initial coverage period had less to pay in the coverage gap before they reached the threshold. Conversely, beneficiaries who paid lower out-of-pocket costs in the initial coverage period had more to pay in the coverage gap before they reached the same threshold of $4,350 in out-of-pocket costs. Third, after reaching the threshold, beneficiaries’ out-of-pocket costs become similar because they typically pay 5 percent of the drug’s negotiated price for the remainder of the calendar year. 
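The convergence described above can be checked numerically. The sketch below uses a simplified continuous model (our assumptions: no deductible, costs accrue smoothly rather than in monthly fills) to compare annual out-of-pocket costs for the $1,100-per-month drug under the two cost-sharing structures in the example; the function name and parameterization are ours.

```python
def annual_oop(annual_drug_cost, initial_share):
    """Approximate annual out-of-pocket (OOP) cost for one drug in 2009,
    given the beneficiary's cost-sharing fraction during the initial
    coverage period. Simplified: no deductible, continuous accumulation."""
    INITIAL_LIMIT = 2700.00   # initial coverage ends at $2,700 in TOTAL drug costs
    OOP_THRESHOLD = 4350.00   # catastrophic coverage begins at $4,350 in OOP costs

    t = annual_drug_cost
    # Initial coverage period: beneficiary pays a plan-specific share.
    oop = initial_share * min(t, INITIAL_LIMIT)
    if t <= INITIAL_LIMIT:
        return oop
    # Coverage gap: 100% of costs until cumulative OOP hits the threshold,
    # so the total-cost point where the gap ends depends on OOP paid so far.
    gap_end = INITIAL_LIMIT + (OOP_THRESHOLD - oop)
    oop += min(t, gap_end) - INITIAL_LIMIT
    if t <= gap_end:
        return oop
    # Catastrophic coverage: 5% of the remainder.
    return oop + 0.05 * (t - gap_end)

monthly_price = 1100.00
annual_cost = 12 * monthly_price                 # $13,200 in total drug costs
copay_share = 25.00 / monthly_price              # $25 flat copay, about a 2.3% share
coinsurance_share = 0.33                         # 33% coinsurance

print(round(annual_oop(annual_cost, copay_share), 2))        # 4660.57
print(round(annual_oop(annual_cost, coinsurance_share), 2))  # 4702.05
```

Despite monthly costs of $25 versus $363 during the initial coverage period, the two annual totals end up within about $42 of each other: the beneficiary who paid less early simply pays more in the coverage gap before hitting the same fixed $4,350 threshold.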
For most beneficiaries—those who are responsible for paying the full cost-sharing amounts required by their plans—variations in negotiated drug prices affect out-of-pocket costs during the initial coverage phase if their plans require them to pay coinsurance. All 35 of our selected plans required beneficiaries to pay coinsurance in 2009 for at least some of the 20 specialty tier-eligible drugs in our sample. Additionally, negotiated drug prices will affect these beneficiaries’ out-of-pocket costs during the coverage gap and the catastrophic coverage phase because beneficiaries generally pay the entire negotiated price of a drug during the coverage gap and pay 5 percent of a drug’s negotiated price during the catastrophic coverage phase. As the following examples illustrate, there are variations in negotiated prices between drugs, across plans for the same drug, and from year to year. Variations between drugs: In 2009—across our sample of 35 plans—beneficiaries who took the cancer drug Gleevec for the entire year could have been expected to pay about $6,300 out of pocket because Gleevec had an average negotiated price of about $45,500 per year, while beneficiaries could have been expected to pay about $10,500 out of pocket over the entire year if they took the Gaucher disease drug Zavesca, which had an average negotiated price of about $130,000 per year. Variations across plans: In 2009, the negotiated price for the human immunodeficiency virus (HIV) drug Truvada varied from about $10,900 to about $11,400 per year across different plans with a 33 percent coinsurance rate, resulting in out-of-pocket costs that could be expected to range from about $4,600 to $4,850 for beneficiaries taking the drug over the entire year. Variations over time: Since 2006, average negotiated prices for the specialty tier-eligible drugs in our sample have risen across our sample of plans; the increases averaged 36 percent over the 3-year period. 
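The Gleevec and Zavesca figures above reflect averages across the 35 sampled plans, but they can be approximated with a defined standard benefit calculation, since most of a high-cost drug’s out-of-pocket burden accrues in the coverage gap and catastrophic phases, which are the same for all plans. In the sketch below, the 2009 parameters come from the benefit description earlier in this statement; the function itself and the 2006 parameters are our assumptions, not figures from the source.

```python
def standard_benefit_oop(total_drug_cost, deductible, initial_limit, oop_threshold):
    """Annual OOP under a Part D defined standard benefit: 100% deductible,
    25% coinsurance, 100% coverage gap, then 5% catastrophic coverage."""
    t = total_drug_cost
    oop = min(t, deductible)
    if t <= deductible:
        return oop
    oop += 0.25 * (min(t, initial_limit) - deductible)
    if t <= initial_limit:
        return oop
    gap_end = initial_limit + (oop_threshold - oop)   # total-cost point where OOP hits threshold
    oop += min(t, gap_end) - initial_limit
    if t <= gap_end:
        return oop
    return oop + 0.05 * (t - gap_end)

# 2009 parameters from this statement: $295 deductible, $2,700 initial
# coverage limit, $4,350 OOP threshold.
print(round(standard_benefit_oop(45500, 295, 2700, 4350)))    # 6317 — Gleevec, about $6,300
print(round(standard_benefit_oop(130000, 295, 2700, 4350)))   # 10542 — Zavesca, about $10,500

# 2006 parameters (our assumption, not stated in this testimony): $250
# deductible, $2,250 initial coverage limit, $3,600 OOP threshold.
print(round(standard_benefit_oop(31200, 250, 2250, 3600)))    # 4905 — Gleevec in 2006
```

The results land within a few percent of the sample averages cited in this statement, which is consistent with the observation that negotiated price, rather than plan-specific cost sharing, dominates annual out-of-pocket costs for these drugs.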
These increases, in turn, led to higher estimated beneficiary out-of-pocket costs for these drugs in 2009 compared to 2006. For example, the average negotiated price for a 1-year supply of Gleevec across our sample of plans increased by 46 percent, from about $31,200 in 2006 to about $45,500 in 2009. Correspondingly, the average out-of-pocket cost for a beneficiary taking Gleevec for an entire year could have been expected to rise from about $4,900 in 2006 to more than $6,300 in 2009. The eight Part D plan sponsors we interviewed told us that they have little leverage in negotiating price concessions for most specialty tier-eligible drugs. Additionally, all seven of the plan sponsors we surveyed reported that they were unable to obtain price concessions from manufacturers on 8 of the 20 specialty tier-eligible drugs in our sample between 2006 and 2008. For most of the remaining 12 drugs in our sample, plan sponsors who were able to negotiate price concessions reported that they were only able to obtain price concessions that averaged 10 percent or less, when weighted by utilization, between 2006 and 2008. (See app. I for an excerpt of the price concession data presented in our January 2010 report.) The plan sponsors we interviewed cited three main reasons why they have typically had a limited ability to negotiate price concessions for specialty tier-eligible drugs. First, they stated that pharmaceutical manufacturers have little incentive to offer price concessions when a given drug has few competitors on the market, as is the case for drugs used to treat cancer. For Gleevec and Tarceva, two drugs in our sample that are used to treat certain types of cancer, plan sponsors reported that they were not able to negotiate any price concessions between 2006 and 2008. 
In contrast, plan sponsors told us that they were more often able to negotiate price concessions for drugs in classes where there are more competing drugs on the market—such as for drugs used to treat rheumatoid arthritis, multiple sclerosis, and anemia. The anemia drug Procrit was the only drug in our sample for which all of the plan sponsors we surveyed reported that they were able to obtain price concessions each year between 2006 and 2008. Second, plan sponsors told us that even when there are competing drugs, CMS may require plans to include all or most drugs in a therapeutic class on their formularies, and such requirements limit the leverage a plan sponsor has when negotiating price concessions. When negotiating price concessions with pharmaceutical manufacturers, the ability to exclude a drug from a plan’s formulary in favor of a therapeutic alternative is often a significant source of leverage available to a plan sponsor. However, many specialty tier-eligible drugs belong to one of the six classes of clinical concern for which CMS requires Part D plan sponsors to include all or substantially all drugs on their formularies, eliminating formulary exclusion as a source of negotiating leverage. We found that specialty tier-eligible drugs were more than twice as likely to be in one of the six classes of clinical concern compared with lower-cost drugs in 2009. Additionally, among the 8 drugs in our sample of 20 specialty tier-eligible drugs for which the plan sponsors we surveyed reported they were unable to obtain price concessions between 2006 and 2008, 4 drugs were in one of the six classes of clinical concern. Plan sponsors are also required to include at least two therapeutic alternatives from each of the other therapeutic classes on their formularies. 
Third, plan sponsors told us that they have limited ability to negotiate price concessions for certain specialty tier-eligible drugs because they account for a relatively limited share of total prescription drug utilization among Part D beneficiaries. For some drugs in our sample, such as Zavesca, a drug used to treat a rare enzyme disorder called Gaucher disease, the plan sponsors we surveyed had very few beneficiary claims between 2006 and 2008. None of the plan sponsors we surveyed reported price concessions for this drug during this period. Plan sponsors told us that utilization volume is usually a source of leverage when negotiating price concessions with manufacturers for Part D drugs. For some specialty tier-eligible drugs like Zavesca, however, the total number of individuals using the drug may be so limited that plans are not able to enroll a significant enough share of the total users to entice the manufacturer to offer a price concession. The Department of Health and Human Services (HHS) provided us with CMS’s written comments on a draft version of our January 2010 report. CMS agreed with portions of our findings and suggested additional information for us to include in our report. We also provided excerpts of the draft report to the eight plan sponsors who were interviewed for this study and they provided technical comments. We incorporated comments from CMS and the plan sponsors as appropriate in our January 2010 report. Mr. Chairman, this completes my prepared remarks. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information about this statement, please contact John E. Dicken at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Key contributors to this statement in addition to the contact listed above were Will Simerl, Assistant Director; Krister Friday; Karen Howard; Gay Hee Lee; and Alexis MacDonald. [Appendix table: the sample of 20 specialty tier-eligible drugs (including strength and dosage form), grouped by indication — inflammatory conditions (e.g., rheumatoid arthritis, psoriasis, Crohn's disease), human immunodeficiency virus (HIV), enzyme disorders (e.g., Gaucher disease), and other drugs selected based on high utilization — showing the number of plan sponsors that obtained price concessions and the price concessions weighted by utilization, in dollars. For one drug, one of the seven plan sponsors we surveyed did not submit any data; values listed for that drug are based on data submitted by six plan sponsors, rather than seven.] This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Centers for Medicare & Medicaid Services (CMS) allows Part D plans to utilize different tiers with different levels of cost sharing as a way of managing drug utilization and spending. One such tier, the specialty tier, is designed for high-cost drugs whose prices exceed a certain threshold set by CMS. Beneficiaries who use these drugs typically face higher out-of-pocket costs than beneficiaries who use only lower-cost drugs. This testimony is based on GAO's January 2010 report entitled Medicare Part D: Spending, Beneficiary Cost Sharing, and Cost-Containment Efforts for High-Cost Drugs Eligible for a Specialty Tier (GAO-10-242) in which GAO examined, among other things, (1) Part D spending on these drugs in 2007, the most recent year for which claims data were available; (2) how different cost-sharing structures could be expected to affect beneficiary out-of-pocket costs; (3) how negotiated drug prices could be expected to affect beneficiary out-of-pocket costs; and (4) information Part D plan sponsors reported on their ability to negotiate price concessions. For the second and third of these objectives, this testimony focuses on out-of-pocket costs for beneficiaries responsible for paying the full cost-sharing amounts required by their plans. GAO examined CMS data and interviewed officials from CMS and 8 of the 11 largest plan sponsors, based on enrollment in 2008. Seven of the 11 plan sponsors provided price concession data for a sample of 20 drugs for 2006 through 2008. High-cost drugs eligible for a specialty tier commonly include immunosuppressant drugs, those used to treat cancer, and antiviral drugs. Specialty tier-eligible drugs accounted for 10 percent, or $5.6 billion, of the $54.4 billion in total prescription drug spending under Medicare Part D plans in 2007. Medicare beneficiaries who received a low-income subsidy (LIS) accounted for most of the spending on specialty tier-eligible drugs-- $4.0 billion, or 70 percent of the total. 
Among all beneficiaries who used at least one specialty tier-eligible drug in 2007, 55 percent reached the catastrophic coverage threshold, after which Medicare pays at least 80 percent of all drug costs. In contrast, only 8 percent of all Part D beneficiaries who filed claims but did not use any specialty tier-eligible drugs reached this threshold in 2007. Most beneficiaries are responsible for paying the full cost-sharing amounts required by their plans. For such beneficiaries who use a given specialty tier-eligible drug, different cost-sharing structures result in varying out-of-pocket costs only until they reach the catastrophic coverage threshold, which 31 percent of these beneficiaries did in 2007. After that point, beneficiaries' annual out-of-pocket costs for a given drug are likely to be similar regardless of their plans' cost-sharing structures. Variations in negotiated drug prices can also affect out-of-pocket costs for beneficiaries who are responsible for paying the full cost-sharing amounts required by their plans. Variations in negotiated prices can occur between drugs, across plans for the same drug, and from year to year. For example, the average negotiated price for the cancer drug Gleevec across our sample of plans increased by 46 percent between 2006 and 2009, from about $31,200 per year to about $45,500 per year. Correspondingly, the average out-of-pocket cost for a beneficiary taking Gleevec for the entire year could have been expected to rise from about $4,900 in 2006 to more than $6,300 in 2009. Plan sponsors reported having little leverage to negotiate price concessions from manufacturers for most specialty tier-eligible drugs. One reason for this limited leverage was that many of these drugs have few competitors on the market. Plan sponsors reported that they were more often able to negotiate price concessions for drugs with more competitors on the market--such as for drugs used to treat rheumatoid arthritis. 
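The reported Gleevec price increase can be verified with simple arithmetic. A minimal sketch using the approximate annual prices cited above (the exact negotiated prices varied across plans in the sample):

```python
# Illustrative check of the reported increase in Gleevec's average
# negotiated price between 2006 and 2009 (approximate figures from the text).
price_2006 = 31_200  # approximate average negotiated price per year, 2006
price_2009 = 45_500  # approximate average negotiated price per year, 2009

pct_increase = (price_2009 - price_2006) / price_2006 * 100
print(f"Increase: {pct_increase:.0f}%")  # consistent with the reported 46 percent
```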
Two additional reasons cited for limited negotiating leverage were CMS requirements that plans include all or most drugs from certain therapeutic classes on their formularies, limiting sponsors' ability to exclude drugs from their formularies in favor of competing drugs; and that the relatively limited share of total prescription drug utilization among Part D beneficiaries for some specialty tier-eligible drugs was insufficient to entice manufacturers to offer price concessions. CMS provided GAO with comments on a draft of the January 2010 report. CMS agreed with portions of GAO's findings and suggested additional information for GAO to include in the report, which GAO incorporated as appropriate.
The Chinese Ministry of Education requires that U.S. universities seeking to establish an education arrangement in China partner with a Chinese university. The Chinese government also requires the U.S. universities to establish written agreements with the Chinese university defining the academics, governance, operations, finances, and other key aspects of the arrangement. The Chinese Ministry of Education reviews each U.S. university's written agreement along with other application materials and authorizes approved universities and their Chinese partners to establish the following:

Cooperative education institutions: degree-granting institutions that can be granted independent legal status.

Cooperative education programs: educational activities that take various forms and can include joint and dual degree programs.

Appendix II provides a complete list of U.S. universities that have been approved to establish cooperative education institutions and cooperative education programs in China. As figure 1 shows, the number of U.S. universities that have partnered with Chinese universities to establish cooperative, degree-granting institutions has increased since 2011. Just as the number of U.S. universities operating cooperative institutions in China has grown, China has been the leading country of origin for international college students in the United States for the past 6 years, according to the Institute of International Education, with double-digit percentage increases in the past 8 years. In academic year 2014-2015, more than 300,000 students arrived in the United States from China, representing almost a third of all international students in the United States. We reviewed 12 U.S. universities that have partnered with Chinese universities to establish degree-granting institutions in China: six public universities and six private, nonprofit universities. The curriculum at each institution is taught in English, with the exception of foreign language courses.
Additional information about these institutions' students, faculty, and degree offerings is as follows:

More than 6,500 total students were enrolled in the 2014-2015 academic year. Enrollment ranged from fewer than 40 to more than 3,000 students across the different universities, with about half of the institutions enrolling between 150 and 900 students. More than 90 percent of these students are Chinese, and less than 6 percent are U.S. citizens.

About 60 percent of faculty in the 2014-2015 academic year were U.S. citizens, about 16 percent were Chinese, and the remainder were from other countries. Universities include both faculty sent from the U.S. universities' home campus to teach at the institution in China and faculty hired specifically to teach at the institution in China.

The 12 universities offer various undergraduate and graduate degrees in China, including bachelor's degrees in accounting, business administration, computer science, engineering, English, finance, graphic design, organizational leadership, and supply chain management; master's degrees in computer graphics and animation, engineering, global health, international studies, management studies, and medical physics; and a doctorate of engineering.

Education and State are involved in different aspects of higher education arrangements in China. Education provides federal student loans, grants, and other financial aid each year through programs authorized under Title IV of the Higher Education Act of 1965, as amended. U.S. students in overseas programs are eligible for this federal financial aid under some circumstances, but Chinese and other non-U.S. students are generally not eligible. State supports various international educational grants and scholarships, some of which apply to U.S. students studying in China. In addition, State monitors and reports on human rights conditions around the world.
State annually publishes country reports on global human rights practices, including academic freedom. State has reported that China’s human rights record, which has been a long-standing concern, has deteriorated in recent years, with participation in civil society curtailed and academic freedom on Chinese university campuses restricted. State’s Country Reports on Human Rights Practices for 2015 reported that Chinese government officials have instructed professors at Chinese universities to avoid discussing freedom of the press, civil rights and society, and other subjects, and have cautioned universities against using textbooks that promote Western values. In addition, the report notes that the Chinese government has increased efforts to monitor Internet usage and control Internet content, while also taking measures to restrict freedoms of speech, religion, and assembly. The Chinese government regulates the Internet by censoring or restricting access to many websites, including search engines, news outlets, and social media. In April 2016, the Chinese government passed a law regulating the activities of foreign nongovernmental organizations. According to State, this new law requires foreign nongovernmental organizations operating in China to be sponsored by a Chinese organization, to report funding and event information to the Chinese government, and to report directly to the Ministry of Public Security. In a letter to the Chinese government, a group of U.S. universities voiced their concerns regarding a previous draft version of the law, stating that the draft law was ambiguous in defining what types of foreign nongovernmental organizations would be subject to the law. The extent to which the law may impact universities remains to be determined. Academics and researchers, among others, have expressed concerns that, given these conditions, faculty, students, and others at U.S. 
universities in China may face constraints to their academic freedom and other key freedoms. For the purpose of this report, we define these freedoms as follows:

Academic freedom: includes the ability to teach or study what one chooses, ask any questions, or freely express views in the classroom.

Freedom of speech: includes the ability to express one's opinion in print, video, in person, or through other means without interference.

Freedom of information: includes the ability to access information and ideas through any medium, including the Internet, libraries, and databases.

Freedom of assembly: includes the ability to gather with students and others.

Freedom of worship or religion: includes the ability to practice one's religion, read religious texts, and share one's beliefs.

The 12 U.S. universities we reviewed generally reported receiving support for their institutions in China from their Chinese partner universities and from Chinese government entities, with limited funding from U.S. government agencies and private donors. Universities reported contributions from their partner universities and from Chinese provincial and local governments for land, building construction, and use of campus facilities. Almost all of the universities said their institutions in China generated net revenue or had a neutral impact on their budgets. The universities we reviewed generally reported receiving material support and funding from their Chinese partner universities or from provincial and local governments to help establish and operate their institutions in China. In interviews and questionnaire responses, most universities reported being granted land, resources for construction of new buildings, and the use of the Chinese university's campus facilities. The amount of support reported by the universities varied widely and was in some cases substantial.
One university reported receiving nearly 500 acres of land and a commitment from the Chinese provincial and local governments to spend about $240 million for construction and development of facilities. The U.S. university said its Chinese partners were covering all direct expenses for the institution, including paying directly for maintenance, capital expenses, faculty and staff salaries, housing subsidies, and travel allowances. According to university administrators, in academic year 2016-2017, the institution in China will begin reimbursing the U.S. university for curriculum development, which university officials said could amount to almost $1 million in the first year. One university stated that 25 percent of the budget for its institution in China came from the government of the city where it was located, including subsidies for Chinese students’ tuition. Some U.S. universities noted that their institutions in China are entirely owned and operated by the Chinese universities, which have assumed financial responsibility, and that the U.S. universities provide primarily academic guidance to the institutions. One of these U.S. universities said its Chinese university partner committed to invest nearly $40 million to construct and equip a new building to house the institution. Another said that all costs incurred by the U.S. university during the institution’s establishment—including faculty and staff time and travel—were covered by the Chinese partner university. In their questionnaire responses, two universities reported receiving financial support from Chinese government entities ranging from $1.5 million to over $15 million. Several other universities described the support provided, including classroom space, campus facilities, and student scholarships, but they did not report its monetary value. Figure 2 shows examples of facilities at the U.S. universities with institutions in China. 
Three universities also reported receiving nonmaterial support such as guidance and introductions to contacts in the Chinese government. For example, one university said its Chinese partner university provided assistance in obtaining the Chinese government approvals needed to establish the institution. Another university said the provincial government's education bureau provided advice, introductions, and occasional facilitation support. Finally, two universities said the Chinese national government provided advice and introductions to help establish and maintain their institutions in China. A few universities told us they received funding from private sources in China, such as donors, to operate their institutions in China. For example, one university responding to our questionnaire reported receiving less than $100,000 from private sources in China in academic year 2014-2015. Another reported that 15 percent of its China institution budget came from private philanthropy and programs for executives but did not specify whether the private sources were U.S. or Chinese. Several universities said they were not aware of the sources of financial aid that their Chinese students may have received. A few universities reported receiving nonmaterial support, such as advice, from Chinese private sources; one university noted that private individuals in China serve on its advisory board, and another said several Chinese companies host students from its institution in China and provide career assistance. Education does not provide funding or guidance to help U.S. universities establish institutions overseas, including in China, but U.S. students may use federal financial aid for their studies in China under some circumstances, according to Education officials. As figure 3 shows, 4 of the 12 universities reported in their questionnaire responses the total amounts of federal financial aid that U.S. 
students at their institutions in China received in academic year 2014-2015, ranging from $1,800 to about $870,000. One additional university said its U.S. students received federal financial aid, but the university did not report the amount. Most of the remaining universities reported that only Chinese and international students are enrolled, with no U.S. students who could be eligible for federal financial aid. Two of the universities that reported that some of their U.S. students in China received federal financial aid from Education also received funding from the U.S. Agency for International Development and State. The U.S. Agency for International Development’s Office of American Schools and Hospitals Abroad provided funds totaling more than $12.5 million over a number of years to one of the universities for its library, according to agency officials. In addition, State officials said that although they do not provide funding to help U.S. universities establish institutions in China, a small number of their grants may go toward these institutions. For example, State reported that its Gilman program for undergraduates funded five U.S. students to study at a U.S. university institution in China in academic year 2014-2015. In addition, two universities responding to our questionnaire reported that the U.S. embassy and consulates in China provided nonmaterial assistance, including advice, introductions, occasional facilitation support, and career assistance to students. Most U.S. universities we reviewed reported contributing their own funds and resources to establish and operate their institutions in China, such as funds to pay for staff time, travel costs, and legal expenses, and material resources such as classroom equipment. 
However, although the six public universities we identified as having institutions in China receive ongoing state funding for their domestic campuses, most of these universities told us they require their institutions in China to be self-sufficient and not rely on state government resources. For example, one public university said its board of governors approved the use of state funds to establish the China institution with the understanding that all funds used would be reimbursed, and the university reported that the initial investment has been recaptured several times over. Another reported that its institution was funded strictly through tuition and fees paid by students in China. Nonetheless, two universities reported receiving some funding from their respective state governments in academic year 2014-2015. Four universities reported receiving funding for their institutions in China from U.S. private sources in academic year 2014-2015, including for financial aid. One university reported receiving more than $1 million from these sources, and another stated that 15 percent of its China institution's budget came from private philanthropy and programs for executives. A few of these universities also noted that U.S. private sources provided nonmaterial support such as advice and student career assistance, including internships and job recruitment. Half of the universities we reviewed reported that their institutions in China generated, or they expected them to generate, net revenue for the U.S. university. Of these, four universities reported that on net, their institutions in China had provided funds to the U.S. university. Two additional universities reported that they expect net gains in future years. However, some universities noted that they did not view the net gains as profits. 
As one university explained, it did not consider its institution in China to be a moneymaking operation because it reinvests net revenue in its programs in China rather than in its campuses in the United States. In addition, four other universities said their institutions in China had neutral impact on their budgets. Of the 12 universities we reviewed, only 1 university reported that its campus in the United States provided net revenue to its institution in China to cover, for example, ongoing programmatic and oversight costs to ensure quality and architectural standards during construction. Officials said that the university agreed to proactively invest in its institution in China to ensure that it conformed to university standards and ensure its success. As such, officials considered these expenses to be worthwhile investments. Figure 4 shows the reported impact of U.S. universities’ institutions in China on the universities’ budgets. Almost all of the universities that responded to our question about student tuition said it was an important source of revenue for their institutions in China, and several said their China institutions relied on tuition to a greater extent than their programs in the United States. The extent of reliance on tuition varied. Some universities said their China institutions’ operating budgets were entirely or almost entirely supported by tuition, while another university stated that about 60 percent of its budget came from tuition. Most U.S. universities we reviewed include provisions in written agreements with their Chinese partners or other policies intended to uphold academic freedom or U.S. academic standards. In addition, we found that a few universities’ written agreements and other policies include language indicating that the members of the university community will have access to information, which may suggest protections for Internet access, while about half include language addressing access to physical and online libraries. 
About half of the universities that we reviewed address at least one of the freedoms of speech, assembly, and religion or worship in university policies. Most universities we reviewed include language in their written agreements or other policies that either embody a protection of academic freedom or indicate that the institution in China will adhere to academic standards commensurate with those at their U.S. campus. Table 1 displays the extent to which the written agreements or other policies of universities in our review include language related to protections of academic freedom for their institutions in China. Six universities in our review include language in either their written agreements or other university policies that indicates a protection of academic freedom, such as permitting students to pursue research in relevant topics and allowing students to freely ask questions in the classroom. For example, one university’s agreement states that all members of and visitors to the institution in China will have unlimited freedoms of expression and inquiry and will not be restricted in the selection of research, lecture, or presentation topics. Another university’s agreement states that the institution will be centered on open inquiry and flow of information, while a third university’s student handbook states that the university will guarantee the right to pursue academic topics of interest. One additional university has language in a faculty handbook that indicates both a protection of and potential restriction to academic freedom. This institution’s faculty handbook includes language that protects academic freedom but also encourages self-censorship to prevent externally imposed discipline. Another three universities’ written agreements include language indicating that the institution in China will adhere to academic standards commensurate with either the U.S. campus or the university’s accrediting agency or other authoritative bodies. 
For example, one university’s agreement states that the academic policies and procedures at the institution in China shall comply with those of the U.S. university, while another university’s agreement states that the institution in China will conform to the requirements of the accreditation commission with jurisdiction over the university. The accrediting agencies responsible for the universities in our review all have language in their standards regarding academic freedom. Finally, one university’s written agreement does not mention academic freedom. About half of the universities we reviewed have agreements or policies that address access to information by outlining responsibilities for themselves and their Chinese partners for providing access to physical or digital libraries. In addition, universities’ documents vary in whether the U.S. or Chinese university partner will provide access to this information. Chinese university partners that provide access to the library or Internet and other technological resources may be subject to Chinese government restrictions. For example, one university’s student handbook affirms that the U.S. university will provide on-campus access to digital libraries and computers for homework and academic use. Another university’s written agreement states that the Chinese partner will provide requisite learning resources, such as textbooks, classrooms, computer labs, and library facilities. A few universities’ written agreements and other policies include language indicating that members of the university community will have full or complete access to information. Such language may suggest protections for Internet access. For example, one university’s student handbook states that students will be active learners guaranteed the right to pursue academic topics of interest, with full access to information and relevant scholarship. 
However, these universities do not discuss in written agreements or other policies if Internet access on campus is subject to Chinese government censorship. Moreover, through our visits to universities in China, we found that one of the universities that include language suggesting uncensored Internet access would be provided did not have such access on campus. A few universities’ documents include language that indicates possible Internet constraints. For example, one university’s student handbook outlines student responsibilities such as appropriate use of the Internet according to the regulations of the institution in China, further stating that browsing illegal websites is forbidden. In addition, a few universities include language that prohibits the use of technology and resources for activities prohibited by law. Such provisions are a reminder that university students may face difficulty when conducting academic research in China due to government censorship on search engines, news outlets, and social media websites. About half of U.S. universities address at least one of the freedoms of speech, assembly, and religion or worship at their institutions in China. Written agreements and policies for about half of the universities we reviewed include language that suggests a protection of at least one of the freedoms—speech, assembly, and religion or worship—though the number of universities addressing each freedom varies. One other university includes language that suggests a possible restriction on speech. Table 2 shows examples of statements included in written agreements or other policies to illustrate either protections or restrictions of these freedoms. Regarding freedom of speech, student and faculty handbooks at a few of these universities contain language indicating that students have the ability to discuss sensitive topics. Regarding freedom of assembly, a few U.S. 
universities state in policy documents that faculty or students may form unions or other groups, but one of these universities specifies that the student union will coordinate with, or be administered by, the Chinese partner university. Regarding freedom of religion or worship, none of the universities' agreements or policies contains language indicating a restriction on individuals' ability to practice their religion. Moreover, several of the universities include language in their policy documents indicating that religious practices will be protected. For example, one university's student handbook states that the institution in China recognizes the importance of spiritual life for members of the community and will assist members in locating a place of worship off campus. In contrast, one university's faculty handbook notes that faculty should proceed carefully when broaching the subject of religion in the classroom. Faculty, students, and administrators we interviewed generally indicated that they experienced academic freedom at U.S. universities' institutions in China, but they also indicated that Internet censorship, self-censorship, and other factors presented constraints. The institutions' legal status may be correlated with greater academic and other freedoms experienced on campus. Universities have indicated they are monitoring a new Chinese law regulating foreign nongovernmental organizations and have outlined varying approaches to address possible infringements on academic freedom at the institutions. The more than 130 faculty and students we interviewed from seven universities' institutions in China generally reported that academic freedom has not been restricted (see app. I for more information on the numbers and types of faculty and students we interviewed). Faculty we interviewed told us they did not face academic restrictions and could teach or study whatever they chose. 
For example, several faculty members asserted that neither they nor their colleagues would tolerate any academic restrictions, and one faculty member told us he and his colleagues intentionally introduced class discussions on politically sensitive topics to test whether this would trigger any complaints or attempted censorship. Other faculty members told us that they had never been told to teach or avoid certain subjects or that their experiences of teaching in China and the United States were comparable. Several faculty members who had also taught at Chinese universities not affiliated with a U.S. university noted that students and teachers could not talk as freely at the Chinese university, with one faculty member noting he had specifically been told not to discuss certain subjects while at the Chinese university. Students also generally indicated that they experienced academic freedom and could study or discuss any topic. Similar to faculty we interviewed, some students who had also studied or knew others who studied at Chinese universities contrasted their experiences. For example, students noted that they could have interactive dialogue with faculty, discuss sensitive topics, and freely access information at the U.S. institution in China but not at a Chinese university. In addition, potentially sensitive topics seemed to be freely discussed at some of the institutions we visited, based on our meetings with students and faculty. The topics included Tiananmen Square, protests in Hong Kong, Taiwan, abortion, prostitution in China, and legalization of drugs. We also observed classes at one institution where students and teachers discussed ethnic minorities in Chinese society, U.S.-China relations, the U.S. military presence in the South China Sea, and China’s increasing use of ideological and information controls. 
Through interviews and responses to our questionnaire, university administrators reported that academic freedom was integral to their institutions in China. Administrators at several universities told us that academic freedom was nonnegotiable, while others noted that the same curriculum used in the United States also applied to their institution in China. Most universities reported that academic freedom was not at all restricted for faculty or students. Several, however, reported that they either did not know the extent of academic freedom at their institution in China or that it was slightly restricted. For example, administrators from one university, which reported that students’ academic freedom was slightly restricted and freedoms of speech, assembly, and religion or worship were moderately restricted, noted that its students were Chinese citizens and subject to all applicable rules and regulations intended for Chinese students. In addition, U.S. universities reported that they generally controlled curriculum development and led or influenced faculty hiring. All 12 universities we reviewed reported that they led or played a leading role in curriculum development, with most reporting that they effectively controlled this process. Several universities noted that the curriculum used for their institution in China was the same as the curriculum used on their U.S. campus, while several others noted that they designed and developed a new curriculum or modified their existing curriculum specifically for the institution in China. At several universities we visited, selected courses addressed the U.S. Constitution’s relevance to China, comparative Chinese-American legal cultures, American foreign policy in Asia, and the Cultural Revolution. 
With regard to faculty hiring, most universities indicated that they either exerted more authority than their Chinese partner over faculty hiring, including in some cases recruiting, vetting, and recommending candidates for hire, or played a collaborative role in the process. For example, administrators from one university told us that faculty candidates were interviewed first by a committee at the university’s U.S. campus and again at the institution in China, while administrators from another university told us that they entirely controlled faculty recruitment. Administrators from several universities noted, however, that while they controlled or influenced processes related to faculty hiring, an official from their Chinese partner university technically had final faculty-hiring authority. More broadly, administrators identified various goals related to establishing their institution in China. For example, administrators from at least half of the universities we reviewed reported that goals included providing U.S. students with an international education experience, providing Chinese students with an American education experience, enhancing U.S.-Chinese research collaboration and knowledge exchange, strengthening U.S.-Chinese relations, attracting Chinese students to further studies at the U.S. university’s U.S. campus, and providing faculty additional locations for teaching and research. Five of the 12 U.S. universities in China that we reviewed reported uncensored Internet access, generally through use of a virtual private network. As figure 5 shows, the remaining universities reported that they do not have complete access to uncensored Internet content in China. We visited universities that had uncensored Internet access and universities that did not. 
Correspondingly, as is shown in figure 6, we observed university members accessing search engines, newspapers, and social media sites that have been blocked in China—such as the New York Times, Google, and Facebook—at some universities but not others. Administrators at the three universities we visited that have uncensored Internet access told us that uncensored access was available to all university members throughout campus and was an integral aspect of their institution in China. An administrator at one of these universities, however, told us that the university is required by the Chinese government to track and maintain records for several months of faculty, student, and staff Internet usage, including the Internet sites visited by faculty and staff. The administrator added that, to date, no Chinese government official had asked for these records. Administrators at other universities we visited told us that they were either not required to track Internet usage of faculty or students or that they were unaware of any such requirement.

Internet Search Results in China May Be Filtered by Language
Search results can also be filtered depending on the language used. An English language search of "Tiananmen Square" on one search engine references what the State Department has characterized as the Chinese government's violent suppression of protests in and around Tiananmen Square in 1989, as shown above; however, an image search in Chinese on the same search engine instead provides mostly tourist images of Tiananmen Square.

At several universities that lacked access to uncensored Internet content, students and faculty told us that, as a result, they sometimes face challenges teaching, conducting research, and completing coursework. For example, one faculty member told us that she sometimes asks others outside of mainland China to conduct Internet research for her because they can access information she cannot.
A student at one university told us she needed to access a certain scholarly database typically blocked in China, while several students at another university told us their ability to conduct academic research was constrained by the Internet limitations. Students at one university told us that the educational software used in some classes relied on tools developed by a search engine provider blocked in China and that this software would therefore sometimes not function. Individuals at several other universities noted that faculty had to adapt to the Internet restrictions, for example, by accessing websites comparable to those censored in China, such as sites comparable to YouTube for sharing videos or to Gmail for sending email. Administrators, faculty, and students at several of these universities told us that individuals often used virtual private networks to mitigate Internet restrictions, but some students and faculty told us that these networks had limitations. For example, students and faculty at one university in China told us that the U.S. university provided access to its virtual private network but that it was not always reliable. Some students and faculty noted that they had purchased access to their own virtual private networks. At the time of our visit, however, some students told us that some of these commercial networks were operating poorly, causing them to revert to using the university's network. Several universities we reviewed not only faced Internet censorship but also experienced restricted service. For example, at one university Internet service is unavailable in certain buildings, and some students cannot use the Internet in dormitories after 11 p.m., according to university administrators.
In addition, several individuals told us that Internet access is sometimes blocked or significantly slowed at night, speculating that this was due to the university network's bandwidth being taxed by the number of students playing video games during those hours. University library services may offset Internet restrictions to some degree. All five universities we visited provided university members access to the university's main online library, which included access to research journals and other publications that may otherwise have been blocked in China. Universities' on-campus libraries varied in size and offerings. Two universities we visited featured libraries that were recently built or renovated and that enabled students to browse or select books directly from the shelves. As figure 8 shows, one of these libraries contained books on topics such as Taiwan, Tibet, and Tiananmen Square, which may be banned or difficult to obtain in China. The other library had more than 120,000 books, including titles in both English and Chinese. Administrators at both of these universities told us that no book had ever been removed from the library by Chinese government officials, though one of them noted that, in the past, Chinese Customs officials confiscated some books intended for the library. To compensate, faculty traveling from the United States to China had occasionally brought books for the library in their personal luggage. In contrast, a former student at another university we visited told us the only library available was that of the Chinese university and said a study room intended for U.S. students contained a limited number of English-language titles. We found that several factors can create obstacles to learning at universities we reviewed, including self-censorship, constraints specific to Chinese students, and restrictions beyond campus borders.
Self-censorship: While we were told of examples of self-censorship—choosing not to express an idea or thought that may offend others or cause other problems—at universities we reviewed, it is difficult to assess the extent to which this occurs. Several faculty members we interviewed noted that it is ultimately not possible to know whether, when, or the extent to which self-censorship takes place. For example, individuals may self-censor unconsciously or may knowingly self-censor but not acknowledge doing so. Moreover, self-censorship can occur in any number of settings and with different motivations. Nonetheless, administrators, faculty, and students representing more than half of the universities we reviewed gave examples of self-censorship, including some cases where individuals were advised by their teachers or others in positions of authority to avoid certain topics. For example, an administrator at one university noted that he believed it was advisable, as a guest of China, to refrain from insulting China, while an administrator at another university noted that the university advises teachers to avoid discussing sensitive subjects in class. Several professors at one university told us that they avoid certain political topics or topics that may make others uncomfortable. At another university, faculty told us they try to be respectful of the host country in treating certain academic subjects, and one professor told us he believes he should not discuss Tiananmen Square. One professor told us he advised students to avoid presentations on sensitive topics, while a few students from a few universities told us they had been advised to avoid certain sensitive topics, such as Tiananmen Square or China's relationship with Taiwan. Other students at several universities told us they avoided certain topics for various reasons—for example, to avoid starting arguments with their Chinese classmates or out of concern that raising certain topics may cause other problems.
Several other students reported to us that they specifically avoided discussing religion or political topics because they thought it might be inappropriate or cause trouble. In addition, several faculty members noted that self-censorship may affect research efforts given that publishing articles on certain topics may jeopardize the researcher’s ability to obtain a visa to visit or work in China. Constraints specific to Chinese students: Some conditions specifically affecting Chinese students may constrain their academic experience. Faculty and students from various universities observed that in general Chinese students participated in classroom discussions less often than students from other countries. Some suggested that Chinese students may be uncomfortable with Western teaching methods or inhibited by language limitations; however, some noted that Chinese students may know or suspect that their Chinese classmates are government or Communist Party monitors and will report on whatever the students say. Moreover, an administrator at one university told us that he assumes there are Chinese students and faculty in the institution who report to the government or the Communist Party about the activities of other Chinese students. Faculty members at several universities also told us that they understood there were Chinese students in class who intended to report on the speech of faculty or Chinese students. Several faculty members told us they had adopted various teaching approaches to circumvent these constraints and encourage greater participation among Chinese students. One professor told of constructing classroom debates in which students were required to argue both sides of a sensitive political issue regardless of their nationality or belief; he believed that this enabled Chinese students to speak more freely about these topics. We found other examples of certain conditions Chinese students face that could constrain their academic experience. 
For example, one university provides only non-Chinese students access to its main university web portal, which provides uncensored Internet access. In addition, according to university administrators, only Chinese students must complete military training and take certain courses, such as on Chinese political thought. A student at one university told us Chinese students had a curfew and Internet restrictions while international students did not. Restrictions off campus: Administrators, faculty, and students at several universities emphasized that they have certain freedoms on campus, such as the ability to teach or discuss whatever they want, but not off campus. They offered examples of how some typical activities in an American college setting are not possible in China. For example, a faculty member at one university told us she once brought her class to a coffee shop adjacent to campus, but the public location stifled discussion. Students at another university told us that they ended a classroom discussion that had been continued on a social media site because some classmates believed the discussion was inappropriate. One student noted that one Chinese student considered reporting the discussion to Chinese censors. The three universities we reviewed that are approved by the Chinese Ministry of Education as having independent legal status share characteristics that may be correlated with greater academic and other freedoms on campus. We found that these universities had campuses built specifically for the joint institution that were located relatively far away from their Chinese university partner’s campus, generally controlled their own day-to-day operations, had uncensored Internet access, offered extensive campus and student life programs, and sought to engage with Chinese entities beyond campus. In contrast, as table 3 shows, the other nine universities we reviewed did not consistently share these characteristics. 
Moreover, we found various examples from these other universities indicating that Chinese entities exert a greater degree of influence over their institutions than over institutions with independent legal status. University administrators told us they are aware of, and monitoring, a new Chinese law regulating foreign non-governmental organizations. After the law's passage, the U.S. Secretary of State issued a statement noting that, while the final version of the law included improvements from prior drafts, it could nonetheless negatively impact foreign non-profit non-governmental organizations and their Chinese partners. We asked universities we reviewed to comment on the law and its potential impact on their institutions in China. The six universities that responded indicated either that they believed the law would not have an impact or that it was unknown or too early to tell, while several noted that they would continue to monitor the law's implementation. According to a State official, it is too soon to tell what, if any, impact the law may have on universities. Universities may take different actions if Chinese law or other factors create infringements to academic or other freedoms. University administrators and faculty members have outlined ways individuals could raise concerns to respond to potential infringements on academic and other freedoms, such as by contacting university administrators based in the United States or China, speaking directly with institution directors, or raising concerns through the student government or academic senate. Administrators at several universities asserted that they would discontinue their institution and leave China if they encountered problems related to academic freedom that they were unable to resolve.
Academics have made other recommendations to protect academic freedom, including that universities make public their agreements with Chinese partners, ensure these agreements state that cooperative education institutions will be terminated if academic freedom is compromised, and establish an office to investigate and report on academic freedom infringements. In recent years, a growing number of U.S. universities have established degree-granting institutions with Chinese partners in an environment, as characterized by the Department of State, of worsening human rights and academic freedom conditions in China. We found that universities generally emphasize academic freedom at their institutions in China and, in most cases, include language seeking to protect these or other freedoms in written agreements and other documents. Nonetheless, the environment in which these universities operate presents both tangible and intangible challenges. In particular, Internet censorship presents challenges to teaching, conducting research, and completing coursework. However, it is much more difficult for universities to know the degree to which faculty or students self-censor or how this may affect academic freedom. Moreover, given that motivations to self-censor can be deeply rooted in individual concerns and shaped by long-established conditions in China, universities have limited ability to prevent self-censorship in the classroom or on campus. Members of universities we reviewed indicated they have freedoms on campus that do not exist beyond it, suggesting that they operate within a protected sphere in China. But the universities clearly vary in this regard, with a few seeming to be less subject to influence from Chinese entities than others. As Department of State officials have noted, it is too soon to tell whether the recent passage of a Chinese law regulating foreign nongovernmental organizations could signal tightening restrictions on universities. 
We are not making recommendations in this report. We provided a draft of this report to the Departments of Education and of State for comment. The agencies responded that they had no comments. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report’s date. At that time, we will send copies to the appropriate congressional committees and to the Secretaries of Education and State. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at [email protected] or 202-512-3149. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. In this report, we reviewed (1) funding and other support provided by the U.S. government and other sources to U.S. universities to operate in China; (2) the treatment of academic and other key freedoms in arrangements between U.S. universities and their Chinese partners; and (3) the experience of academic and other key freedoms by faculty, students, and staff at selected U.S. universities in China. To address these objectives, we reviewed the 12 U.S. universities that we identified as having partnered with Chinese universities to establish degree-granting institutions in China: Carnegie Mellon University, Duke University, Fort Hays State University, Johns Hopkins University, Kean University, Missouri State University, New York Institute of Technology, New York University, Northwood University, Rutgers University, the University of Michigan, and the University of Pittsburgh. To identify universities, we reviewed several information sources, including the Department of Education’s list of U.S. universities with locations in China and the Chinese Ministry of Education’s data on U.S. 
universities approved to operate cooperative education institutions and cooperative education programs in China. (See app. II for these Ministry of Education lists.) We decided not to review the more than 130 U.S. universities that have established individual education programs in China, focusing instead on the universities that, at the time of our review, were approved to operate cooperative institutions. The results we report are therefore not necessarily generalizable to all U.S. universities that have partnered with Chinese universities to establish cooperative education programs. During our review, two other universities were also approved to operate cooperative education institutions in China. Because our review of the 12 universities was already substantially under way, we did not include these two universities in our review. In addition, we reviewed documents of independent organizations that list, review, and outline standards for higher education partnerships abroad and interviewed officials from several of these organizations. One of these organizations maintains a list of "international branch campuses." We reviewed this list, discussed it with its authors, and found that it was generally consistent with the Chinese Ministry of Education's list of approved cooperative education institutions. In addition, we interviewed university administrators to better understand the characteristics of their institutions in China. As a result of these interviews, we decided to include in our review one additional university that was not included on the Chinese Ministry of Education's list of approved cooperative education institutions.
We also interviewed officials from several organizations associated with international higher education to better understand types of overseas higher education programs, including those in China; trends in U.S.-Chinese educational cooperation; standards for international higher education; and academic freedom protections at such programs, among other topics. In reporting information about the 12 universities, we did not attribute information or statements to them by name. We used the following terms to report the results of our review of these universities: "most" represents 8 to 11; "about half" represents 5 to 7; and "several" or "a few" represents 2 to 4. We sent a questionnaire to administrators of all 12 universities in our sample asking about a variety of topics. As part of the questionnaire development, we submitted the questionnaire for review by a GAO survey specialist. To minimize errors that might occur from respondents interpreting our questions differently than we intended, we pretested our questionnaire with administrators from three universities. During the pretests, conducted by telephone, we asked the administrators to read the instructions and each question aloud and to tell us how they interpreted the question. We then discussed the instructions and questions with them to determine whether (1) the instructions and questions were clear and unambiguous, (2) the terms we used were accurate, (3) the questionnaire was unbiased, (4) the questionnaire did not place an undue burden on the officials completing it, and (5) the identification of potential solutions to any problems detected was possible. We noted any potential problems. We modified the questionnaire based on feedback from the pretests and internal GAO review as appropriate. We sent the Microsoft Word form questionnaire and a cover email to the universities on February 26, 2016, and asked them to complete the questionnaire and email it back to us within 2 weeks.
We closed the questionnaire on June 3, 2016. Eleven universities provided detailed responses on the questionnaire form; one university provided useful narrative responses but did not answer the questionnaire itself. Therefore, the overall response rate for the questionnaire was 92 percent. Some universities declined to answer some questions, especially about financial information, so the item-level response rate varies by question. Because we are not trying to generalize the results of the questionnaire to other universities outside that sample, there was no questionnaire sampling error. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, how the responses were processed and analyzed, or the types of people who do not respond can influence the accuracy of the questionnaire results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors and help ensure the accuracy of the answers that were obtained. For example, a social science survey specialist designed the questionnaire in collaboration with GAO staff with subject matter expertise. Then, as noted earlier, the draft questionnaire was pretested to ensure that questions were relevant, clearly stated, and easy to comprehend. The questionnaire was also reviewed by an additional GAO survey specialist, as mentioned above. Data were manually entered from the Word questionnaires into an Excel spreadsheet that was then imported into a statistical program for analyses. All data entry was checked and any errors corrected. 
We examined the questionnaire results and performed computer analyses to identify missing data, inconsistencies, and other indications of error and addressed such issues as necessary, including through follow-up communications with the universities. Quantitative data analyses were conducted by a GAO survey specialist using statistical software, and a review of open-ended responses was conducted by the GAO staff with subject matter expertise. An independent GAO data analyst checked the statistical computer programs for accuracy. To identify funding and other support the U.S. government and other sources have provided to U.S. universities to operate in China, we analyzed responses to the questionnaire we sent to administrators, interviewed administrators from all 12 universities, and reviewed university documents. The questionnaire included questions about the sources of funding and nonmaterial assistance used to establish and operate the institutions in China, about financial aid to students, and about the financial relationship between the U.S. university and the institution. We also interviewed university administrators about the funding and other support their institutions in China received. Because of differences in the ways universities tracked and reported on their funding sources, we were not able to report funding amounts for each university or to calculate the percentage of each university’s institution budget funded by Chinese government entities, private donors, and other sources. However, by combining information we obtained from the questionnaire, interviews, and university documents, we were able to identify the types of support provided. We also obtained information and interviewed officials from the Department of Education (Education) and reviewed relevant federal laws and regulations, including those related to financial aid under Title IV of the Higher Education Act of 1965, as amended. 
To determine the treatment of academic and other key freedoms — specifically freedoms of speech, information, assembly, and religion or worship—in arrangements between U.S. universities and their Chinese partners, we reviewed written agreements and university policies submitted by the U.S. universities. University policies include faculty and student handbooks as well as other planning documents for the institutions in China. Of the 12 U.S. universities that participated in our review, 9 provided either all or a part of the written agreement with their Chinese partner universities, and 8 provided university policies. In total, 11 of the 12 universities we reviewed submitted either their written agreement or other university policies pertaining to their programs in China. We conducted a content analysis of the written agreements and other policies to identify instances in which the U.S. universities address academic freedom and other key freedoms. To define academic freedom, we derived our definition from the American Association of University Professors’ 1940 Statement of Principles on Academic Freedom and Tenure, to which hundreds of U.S. universities adhere. We also identified freedoms of information, speech, assembly, and religion or worship as other key freedoms relating to universities operating in China given the significance of these freedoms to universities in the United States and reported restrictions related to these freedoms in China. We derived our definition for freedom of information from the United Nations’ 2011 Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, which designates Internet freedom as a basic human right. In addition, we derived definitions for the freedoms of speech, assembly, and religion or worship from the United Nations’ 1948 Universal Declaration of Human Rights, which outlines these freedoms as basic human rights. 
In addition, in our questionnaire we asked university administrators to identify the extent to which their universities' written agreements or other policies address academic freedom and other key freedoms. We also interviewed administrators from all 12 U.S. universities that participated in our review to learn how these universities developed written agreements with their Chinese partner universities. For the content analysis, we compared the definitions of each freedom with all the agreements, handbooks, and other official publications provided by the universities to assess whether and how each freedom was referenced in those documents. One GAO analyst conducted this analysis, coding the information based on how universities referenced each freedom and entering it into a spreadsheet, and a second GAO analyst checked the coding for agreement. The analysts discussed and reconciled any initial disagreements in the coding. To learn how faculty, students, and administrators at selected U.S. universities in China experienced academic freedom and other key freedoms, we interviewed administrators from all 12 universities. We also visited five universities in China, where we met with administrators, faculty, and students. In addition, we interviewed faculty and students who had previously studied or taught at six of the universities and were currently living elsewhere. All interviews were conducted in English. Overall, we interviewed more than 190 administrators, faculty, and students, including the following:

More than 70 administrators from 12 universities, including university presidents and other executive officials as well as staff from various offices such as those supporting student life and other student services, libraries, and information technology.

More than 35 faculty members from seven universities, including more than 30 U.S. citizens, several Chinese citizens, and one citizen of another country. These faculty members included both those sent from the U.S. campus to teach at the institution in China on a temporary basis and those hired specifically to teach at the institution in China.

More than 95 students from six universities, including roughly an equal mix of U.S. and Chinese citizens on campuses in China, as well as several students from other countries.

Our interviews with students included a mix of one-on-one interviews and discussion groups. In addition, nearly 40 of the students we interviewed at two universities also completed a written questionnaire. We asked them to complete the questionnaire because we believed some students might be more willing to answer candidly on an anonymous questionnaire than during an oral interview. To maintain their anonymity and to encourage candid responses, students were instructed not to write their names on the questionnaire. The questionnaire was in English, and students were asked to respond in English, as they were enrolled in courses taught in English. The questionnaire addressed the same general topics that guided our student interviews and discussion groups, and we analyzed responses to the written questionnaires alongside our analysis of student interviews and discussion groups. In selecting universities to visit, we included both public and private universities; universities with institutions in different locations within China; universities that established institutions in China at different points in time; and universities with institutions having varying student demographics, including several with predominantly Chinese student bodies and several with a mixture of students from the United States, China, and other countries.
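The double-coding step described above, in which one analyst codes each document, a second analyst checks the coding, and disagreements are reconciled, amounts to a standard inter-rater agreement check. The following is a purely illustrative sketch of that check; the helper name, category labels, and document IDs are hypothetical and do not represent GAO's actual procedure or tooling:

```python
# Illustrative sketch of a double-coding agreement check: two coders'
# labels are compared document by document, and disagreements are
# flagged for reconciliation. All labels and IDs here are hypothetical.

def agreement_report(coder_a, coder_b):
    """Compare two coders' labels keyed by document ID."""
    shared = sorted(set(coder_a) & set(coder_b))
    disagreements = [doc for doc in shared if coder_a[doc] != coder_b[doc]]
    agreed = len(shared) - len(disagreements)
    rate = agreed / len(shared) if shared else 0.0
    return rate, disagreements

# Hypothetical coding of three documents for one freedom.
coder_a = {"agreement_1": "protects", "handbook_1": "silent", "policy_1": "US-standards"}
coder_b = {"agreement_1": "protects", "handbook_1": "protects", "policy_1": "US-standards"}

rate, to_reconcile = agreement_report(coder_a, coder_b)
print(round(rate, 2))   # 0.67
print(to_reconcile)     # ['handbook_1']
```

In practice the flagged items (here, the handbook the two coders labeled differently) are the ones the analysts would discuss and reconcile.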
We also planned to visit two additional institutions, but our visits were declined in one case by the Chinese university partner and in the other by the provincial ministry of education with jurisdiction over the institution. In addition to conducting interviews during our visits to these universities, we also reviewed campus facilities, including classrooms, libraries, cafeterias, and dormitories. In our discussions with faculty members and students, we addressed topics such as the reasons they chose to work or study at the programs in China, the extent to which they may be constrained from teaching or studying certain topics, their experience of Internet access in China, campus life and student activities, any differences they have perceived or experienced between U.S. and Chinese faculty and students, and other topics relating to their experience at the program, particularly as it relates to academic and other freedoms. To mitigate possible limitations of testimonial evidence from individuals in China regarding their experience of academic and other freedoms, we interviewed both U.S. and Chinese students and faculty; interviewed faculty and students currently in China as well as faculty and students currently in the United States who had formerly taught or studied at the university in China; requested that faculty and students participate in our interviews on a voluntary basis; and offered students the option of completing an anonymous written questionnaire. We also analyzed university administrators' responses to our questionnaire applicable to their university in China, including questions relating to curriculum and faculty hiring; faculty and student experiences of academic freedom, freedom of information, freedom of speech, freedom of assembly, or freedom of religion or worship; and how, if at all, standards and protections for these freedoms differ between U.S. and Chinese students or in comparison with those at the university's U.S.
campus, among others. The information on foreign law in this report is not the product of GAO’s original analysis but is derived from interviews and secondary sources. We conducted this performance audit from September 2015 to August 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Chinese Ministry of Education has approved 13 U.S. universities to operate 16 cooperative education institutions, according to the ministry. Table 4 provides a full list of these 13 U.S. universities and their institutions. As the table indicates, two universities operate more than one cooperative education institution. We did not review the cooperative education institutions approved for the University of Illinois and the University of Miami, as these two institutions were approved subsequent to the start of our review. Although we reviewed Johns Hopkins University’s institution in China, it is not included in table 4 because it was established before the Chinese Ministry of Education established its approval process for Sino-Foreign cooperative institutions and programs. According to the Chinese Ministry of Education, 133 U.S. universities have been approved by the ministry to operate 225 cooperative education programs in partnership with Chinese universities. Such cooperative education programs can take various forms, such as joint or dual degree programs. Table 5 provides a full list of these programs. This list is based on information taken directly from the Chinese Ministry of Education’s website. We did not review these programs or verify that all of them are currently operating. David Gootnick, (202) 512-3149, or [email protected]. 
In addition to the contact named above, Melissa Emrey-Arras (Director), Jason Bair (Assistant Director), Meeta Engle (Assistant Director), Joe Carney, Marissa Jones, Sean Manzano, James Bennett, Jessica Botsford, Mark Dowling, Mary Moutsos, Reid Lowe, and Michael Silver made key contributions to this report.
In its Country Reports on Human Rights Practices for 2015, the Department of State (State) concluded that academic freedom, a longstanding concern in China, had recently worsened. At the same time, the number of U.S. universities establishing degree-granting institutions in partnership with Chinese universities—teaching predominantly Chinese students—has increased. While universities have noted that these institutions offer benefits, some academics and others have raised questions as to whether faculty, students, and staff may face restricted academic freedom and other constraints. This report reviews (1) funding and other support provided to U.S. universities to operate in China; (2) the treatment of academic and other key freedoms in arrangements between U.S. universities and their Chinese partners; and (3) the experience of academic and other key freedoms by faculty, students, and staff at selected U.S. universities in China. GAO reviewed 12 U.S. universities that have established degree-granting institutions in partnership with Chinese universities; interviewed university administrators and obtained university documents and questionnaire responses; interviewed faculty and students; and visited the campuses of 5 institutions selected on the basis of their location, student demographics, date of establishment, and other factors. GAO also interviewed officials and obtained information from the Departments of Education (Education) and State. GAO makes no recommendations in this report. Education and State had no comments on a draft of this report. The 12 U.S. universities GAO reviewed generally reported receiving support for their institutions in China from Chinese government entities and universities, with limited funding from U.S. government agencies and other donors. Universities reported contributions from Chinese provincial and local governments and from partner universities for land, building construction, and use of campus facilities.
Fewer than half of the universities reported receiving federal funding. Almost all of the U.S. universities said their programs in China generated net revenue for the university or had a neutral impact on its budget. Universities' agreements with their Chinese partners or other policies that GAO reviewed generally include language protecting academic freedom or indicating that their institution in China would adhere to U.S. standards. About half of the universities GAO reviewed address access to information, such as providing faculty and students with access to physical or online libraries, though few universities' agreements and policies include language protecting Internet access. About half of the universities' policies include language indicating protection of at least one other key freedom—speech, assembly, or religion. University members generally indicated that they experienced academic freedom, but they also indicated that Internet censorship and other factors presented constraints. Administrators said they generally controlled curriculum content, and faculty and students said they could teach or study what they chose. However, fewer than half of the universities GAO reviewed have uncensored Internet access. At several universities that lacked uncensored Internet access, students and faculty told GAO that, as a result, they sometimes faced challenges teaching, conducting research, and completing coursework. Administrators, faculty, and students also cited examples of self-censorship, where certain sensitive political topics—such as Tiananmen Square or China's relationship with Taiwan—were avoided in class, and of constraints faced by Chinese students in particular. Universities approved by the Chinese Ministry of Education as having independent legal status share characteristics—such as campuses located away from their Chinese university partner's campus and extensive student life programs—that may be correlated with greater academic freedom and other key freedoms.
An agency within the U.S. Department of Health and Human Services (HHS), FDA is responsible for promoting and protecting the public health by ensuring the safety, efficacy, and security of human and veterinary drugs, biological products, and medical devices, and ensuring the safety and security of our nation's food supply, cosmetics, and products that emit radiation. The agency is also responsible for ensuring the proper labeling of foods, drugs, medical devices, tobacco, and cosmetics. Its work also includes advancing public health by facilitating innovations and promoting public access to science-based information on medicines, devices, and foods. The agency does not regulate meat, poultry, or certain egg products, which are regulated by the U.S. Department of Agriculture. FDA performs regulatory activities that include reviewing and approving new drugs and certain medical products; inspecting manufacturing facilities for compliance with regulations and good manufacturing practices; conducting postmarket surveillance of food, drug, and medical products to ensure that products are safe; tracking and identifying the source of outbreaks of foodborne illnesses; and issuing recall notices and safety alerts for products that threaten the public health. FDA exercises its core functions through four directorates: the Offices of Medical Products and Tobacco; Foods; Global Regulatory Operations and Policy; and Operations. These offices, along with the Office of the Chief Scientist, report to the FDA Commissioner and carry out their missions through seven centers and through FDA's ORA.

Office of Medical Products and Tobacco:

Center for Biologics Evaluation and Research. Regulates and evaluates the safety and effectiveness of biological products, such as blood and blood products, vaccines and allergenic products, and protein-based drugs.

Center for Drug Evaluation and Research.
Promotes and protects the public health by ensuring that all prescription and over-the-counter drugs are safe, as well as by reviewing and regulating clinical research.

Center for Devices and Radiological Health. Promotes and protects the public health by ensuring the safety and effectiveness of medical devices and preventing unnecessary human exposure to radiation from radiation-emitting products.

Center for Tobacco Products. Oversees tobacco product performance standards, reviews premarket applications for new and modified risk tobacco products and new warning labels, and establishes and enforces advertising and promotion restrictions.

Center for Food Safety and Applied Nutrition. In conjunction with FDA's field staff, promotes and protects the public health, in part by ensuring the safety of the food supply and that foods are properly labeled, and ensures that cosmetics are safe and properly labeled.

Center for Veterinary Medicine. Promotes and protects the public health and animal health by helping to ensure that animal food products are safe and by evaluating the safety and effectiveness of drugs to treat companion animals and those used for food-producing animals.

Office of the Commissioner:

National Center for Toxicological Research. Conducts peer-reviewed scientific research and provides expert technical advice and training to support FDA's science-based regulatory decisions.

Office of Global Regulatory Operations and Policy:

Office of Regulatory Affairs. Leads FDA field activities and provides FDA leadership on imports, inspections, and enforcement policy. ORA supports the FDA product centers by inspecting regulated products and manufacturers, conducting sample analysis on regulated products, and reviewing imported products offered for entry into the United States. The office also develops FDA-wide policy on compliance and enforcement and executes FDA's Import Strategy and Food Protection Plans.
FDA relies extensively on IT to fulfill its mission and to support related administrative needs. The agency has systems dedicated to supporting the following major mission activities:

Reviewing and evaluating new product applications, such as for prescription drugs, medical devices, and food additives. These systems are intended to help FDA determine whether a product is safe before it enters the market. For example, the Document Archiving, Reporting, and Regulatory Tracking System is intended to manage the drug and therapeutics review process.

Tracking and evaluating firms to ensure that products comply with regulatory requirements. For example, the Field Accomplishments and Compliance Tracking System (FACTS) supports inspections, investigations, and compliance activities.

Monitoring the safety of products on the market by collecting and assessing adverse reactions to FDA-regulated products, such as illnesses due to food or negative reactions to drugs. For example, the Vaccine Adverse Event Reporting System accepts reports of adverse events that may be associated with U.S.-licensed vaccines from health care providers, manufacturers, and the public.

In addition, FDA relies on various systems that support its administrative processes, such as payroll administration and personnel systems. All of the agency's systems are supported by an IT infrastructure that includes network components, critical servers, and multiple data centers. The information that FDA receives is growing in volume and complexity. According to the agency, from 2001 to 2011, the number of import shipments that it reviewed for admission into the United States increased from about 7 million annually to over 22.6 million. Additionally, in 2011, the agency estimated that 15 percent of the U.S. food supply was imported, including 60 percent of fresh fruits and vegetables and 80 percent of seafood.
Advances in science and the increase in imports are factors affecting the complexity of information that FDA receives. The ability of the agency’s IT systems and infrastructure to accommodate this growth is crucial to FDA’s ability to accomplish its mission effectively. Compounding these challenges, reports and studies, both by FDA and others, have noted limitations in a number of key aspects of FDA’s IT environment, including data availability and quality, IT infrastructure, the agency’s ability to use technology to improve regulatory effectiveness, and IT management. In 2007, the FDA Science Board issued a report, FDA Science and Mission at Risk, which provided a broad assessment of challenges facing the agency. Specifically, this study found that the agency’s IT infrastructure was outdated and unstable, and it lacked sufficient controls to ensure continuity of operations or to provide effective disaster recovery services. The Science Board also stated that the agency did not have sufficient IT staff with skills in such areas as capital planning/investment control and enterprise architecture; that processes for recruitment and retention of IT staff were inadequate; and that the agency did not invest sufficiently in professional development. Further, the Science Board found that information was not easily and immediately accessible throughout the agency (including critical clinical trial data that were available only in paper form), hampering FDA’s ability to regulate products. Data and information exchange was impeded because information resided in different systems that were not integrated. According to the Science Board, FDA lacked sufficient standards for data exchanges, both within the agency and between the agency and external parties, reducing its capability to manage the complex data and information challenges associated with rapid innovation, such as new data types, data models, and analytic methods. 
Also in 2007, FDA commissioned Deloitte Consulting, LLP, to examine ways in which the agency could better meet increased demand for information and make decisions more quickly and easily. Deloitte's study stated that FDA needed to develop both a common enterprise information management architecture and an IT architecture to facilitate both short-term operational gains, such as improved information access, and long-term gains in strategic flexibility. Deloitte noted that FDA's former decentralized approach to IT, in which the centers developed their own systems, had led to duplicative work efforts, tools, and information. We also have previously reported on FDA's systems and modernization efforts and noted deficiencies in its IT management. For example, in a June 2009 report on the agency's plans for modernizing its IT systems, we noted that FDA lacked a comprehensive IT strategic plan that included results-oriented goals and performance measures to guide the agency's modernization projects and activities. We also pointed out that FDA had made mixed progress in establishing important IT management capabilities that are essential in helping ensure a successful modernization. These capabilities included investment management, information security, enterprise architecture development, and human capital management. To help ensure the success of the agency's modernization efforts, we recommended that it expeditiously develop a comprehensive IT strategic plan, give priority to architecture development, and complete key elements of its IT human capital planning. FDA agreed with our recommendations and identified actions initiated or planned to address them. In addition, we have previously identified problems with FDA's Operational and Administrative System for Import Support (OASIS) import-screening system.
Specifically, we reported in 2008 that OASIS had an inaccurate count of foreign establishments manufacturing drugs because unreliable manufacturer identification numbers were generated by customs brokers. FDA officials said these errors resulted in the creation of multiple records for a single establishment, which led to inflated counts of establishments offering drugs for import into the U.S. market. While FDA officials acknowledged this problem, they were unable to provide us with an estimate of the extent of these errors. In addition, the agency did not have a process for systematically identifying and correcting these errors. Accordingly, we made recommendations aimed at correcting these deficiencies; however, FDA did not comment on these recommendations. In September 2010, we reported that OASIS still provided an inaccurate count of foreign establishments manufacturing drugs offered for import into the United States. Further, in September 2009, we reported that Customs and Border Protection's import screening system did not notify OASIS when imported food shipments arrived at U.S. ports. We pointed out that, without access to time-of-arrival information, FDA did not know when shipments that require examinations or reinspections arrive at the port, which could increase the risk that unsafe food may enter U.S. commerce. We therefore recommended that Customs and Border Protection ensure that its new screening system communicates time-of-arrival information to FDA, and the agency agreed with this recommendation. In May 2010, we testified that, according to FDA officials, Customs and Border Protection had modified its software to notify FDA of a shipment's time of arrival.
Further, in February 2009, we reported that Customs and Border Protection, the National Marine Fisheries Service, and FDA each collected information on seafood products to meet their respective responsibilities, but did not effectively share information that could be used to detect and prevent inaccurate labeling of seafood. As a result, we recommended that the three agencies develop goals, strategies, and mechanisms for interagency information sharing, with which the agencies generally agreed. Finally, in May 2010, we testified that the lack of a unique identifier for firms exporting food products may have allowed contaminated food to evade FDA's review, and that the agency did not always share information on food distribution lists with states. We pointed out that this impeded states' efforts to remove contaminated products from grocery stores and warehouses. Driven in part by the various studies of the agency's IT environment, in May 2008 FDA transitioned to an enterprisewide approach to IT management. Prior to this transition, the agency's IT management was decentralized, with each center having its own Office of Technology. According to FDA officials, this led to an environment in which systems did not interoperate and were often redundant, and investment in IT infrastructure and systems development was inadequate. In moving to an enterprisewide approach, the agency transferred responsibility for managing IT from individual components (centers and ORA) to a new centralized Office of Information Management (OIM). OIM resides within FDA's Office of Operations and is headed by the Chief Information Officer (CIO). The CIO reports to the agency's Chief Operating Officer. As head of OIM, the CIO is responsible for managing IT, creating a foundation to enhance the interoperability of systems, and managing more than 400 staff assigned to this office.
OIM is composed of five divisions: Business Partnership and Support, Systems Management, Infrastructure Operations, Technology, and Chief Information Officer Support. It is responsible for managing IT and other related services enterprisewide. This includes developing the architecture, standards, policies, governance, best practices, and technology road map that support the business priorities of the agency, including managing IT infrastructure, telecommunications, security, business continuity and disaster recovery, strategic planning, capital planning and investment control, enterprise architecture, and applications development and management; advising and providing assistance to the FDA Commissioner and senior management officials on IT resources and programs; establishing and overseeing implementation of agency IT policy and governance, procedures, and processes for conformance with the Clinger-Cohen Act and the Paperwork Reduction Act; and working with FDA business areas to develop and communicate the overall vision for the agency's IT program. In early March 2012, the CIO began developing a new Project Management Office. A Governance Board is expected to perform investment evaluations and project assessments. FDA's senior executive team, which comprises the Deputy Commissioners, the Associate Commissioner for Regulatory Affairs, Center Directors, and the CIO, is responsible for governance of all IT investments. FDA received about $418 million in IT funding for fiscal year 2012. For fiscal year 2011, the agency's IT budget was approximately $439 million, as illustrated in figure 1. As illustrated in figure 2, about 60 percent of FDA's reported IT costs in fiscal year 2011 supported IT operations and infrastructure, such as network servers, telecommunications, and computers, with the remaining 40 percent supporting the development and modernization of IT systems.
Federal guidance calls for agencies to prepare and maintain a comprehensive list of their IT systems. Specifically, OMB Circular No. A-130 guidance calls for a complete inventory of agency information, to include identifying and describing information services, such as systems and databases, used throughout the agency. In addition, GAO's IT investment management framework stresses that a foundational practice for effectively managing an organization's investments is having an up-to-date and complete collection of information on its assets, including systems, software applications and tools, and licensing agreements. According to the framework, to make good investment decisions, an organization should maintain pertinent information about each investment and store that information in a retrievable format, such as a central repository, to be used in future investment decisions. Such a repository is to include, among other things, the current life cycle phase of the system; the responsible organizational unit; the costs to date and anticipated future costs; and the interfaces and dependencies with other systems. The framework also notes that the inventory should contain information used to measure the progress and value of the investments, such as benefits to the mission, schedule, risk assessments, and performance metrics. Without a complete inventory of IT information, an organization cannot develop an adequate investment control process and, consequently, will lack the foundation for demonstrating the impact of alternative investment strategies and funding levels for the agency's inventory of information resources. Although FDA reported spending approximately $439 million for IT investments in fiscal year 2011, the agency does not have a comprehensive list of IT systems identifying and providing key information about the systems that it currently uses or is developing.
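The repository fields the framework describes (life cycle phase, responsible unit, costs, interfaces and dependencies, and performance information) can be pictured as a simple record. The following is a hypothetical sketch only; the field names and sample values are illustrative and are not FDA's or GAO's actual schema:

```python
# Hypothetical sketch of one entry in the kind of central investment
# repository GAO's IT investment management framework describes.
# Field names and sample values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class InvestmentRecord:
    name: str
    lifecycle_phase: str            # e.g., "development" or "operations and maintenance"
    responsible_unit: str           # organizational owner of the system
    cost_to_date: float             # dollars spent so far
    anticipated_future_cost: float  # projected remaining cost
    interfaces: list = field(default_factory=list)           # interfacing/dependent systems
    performance_metrics: dict = field(default_factory=dict)  # e.g., schedule, risk

# Illustrative entry; the costs and interface are placeholders.
record = InvestmentRecord(
    name="OASIS",
    lifecycle_phase="operations and maintenance",
    responsible_unit="ORA",
    cost_to_date=0.0,
    anticipated_future_cost=0.0,
    interfaces=["MARCS"],
    performance_metrics={"risk": "medium"},
)
print(record.name, record.lifecycle_phase)
```

A collection of such records, kept current, is what would let an agency query its portfolio by life cycle phase, owner, or cost when weighing alternative investment strategies.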
In response to our request for an inventory of systems, FDA officials pointed to two sources that partly identified key elements of the agency's systems: information contained in key budget and planning documents it prepares annually for submission to OMB, and a list of 21 mission-critical systems (see app. III for the list of 21 systems and modernization initiatives). However, while these sources identified certain key investments with varying levels of detail as to cost, purpose, and status, the CIO and agency officials responsible for developing an inventory acknowledged that the information was not comprehensive and lacked critical details about systems that would be essential to effectively managing the agency's IT investments. Specifically, OMB requires federal departments and agencies, including the Department of Health and Human Services—of which FDA is a component—to annually provide information related to their IT investment portfolios (called exhibit 53s) and capital asset plans and business cases for major investments (called exhibit 300s). The purpose of the exhibit 53 is to identify all IT investments—both major and nonmajor—and their associated costs for which funding is being sought in a particular fiscal year. The exhibit 300s provide a business case for each major IT investment, and agencies are required to provide information on each major investment's cost, schedule, and performance. For fiscal year 2011, FDA's exhibit 53 identified development and operations and maintenance costs for 44 IT investments. (See app. IV for a list of the 44 IT investments.) For example, one of the 44 line items in the exhibit 53 identified an investment for FDA's Information and Computing Technologies for the 21st Century (ICT21), with about $68 million in funding for fiscal year 2011. In addition, FDA submitted an exhibit 300 for eight major investments.
Among these investments were ICT21 and the Automated Laboratory Management project, which is to facilitate communication between FDA labs by creating an electronic environment based on a standardized format. However, while these documents contain key IT information, such as costs of the investments, they did not present a comprehensive list of FDA's systems with the detailed information that would be essential to managing the agency's portfolio. For example, the exhibit 53 provides investment cost information for the previous year, current year, and budget year, but does not include any information on the performance of the investments. Further, while exhibit 300s provide information on the major investments, they do not provide comprehensive detailed information on the systems that comprise these investments. For example, exhibit 300s may not include detailed information on the systems' interfaces, dependencies, or performance. In addition to the OMB budget documents, the agency's list of 21 mission-critical systems and modernization initiatives did not fully identify FDA's IT systems. Agency officials acknowledged that this list was partly derived from a list of enterprisewide systems discussed in our prior (June 2009) report and did not include all systems. For example, while the list did include some of the regulatory systems critical to CFSAN's mission, such as MARCS, the FDA Unified Registration and Listing System, and the Low-Acid Canned Foods system, the list did not include other systems identified by the centers as critical to their missions. Among these, the list did not include information on two of three mission-critical systems belonging to the Center for Drug Evaluation and Research: the Document Archiving, Reporting and Regulatory Tracking System, which tracks drug applications; and the Electronic Drug Registration and Listing System, which automates drug firm registrations and implements unique identifiers for all firms.
Further, FDA’s list did not include the key regulatory and administrative systems used by CFSAN—the CFSAN Adverse Events Reporting System and the Food Applications Regulatory Management system—both of which were identified on the exhibit 53 to OMB. According to FDA’s CIO, the agency is in the process of reviewing IT projects of over $5 million and identifying potential improvements in its capital planning and investment control process to increase insight into the IT portfolio. However, the CIO and a senior technical advisor could not say when the comprehensive list of systems would be finalized. Until the agency has a comprehensive inventory of its IT assets, it will lack the information needed to ensure that it is identifying the appropriate mix of investments that best meet its needs and priorities. Further, lacking such an inventory, the agency substantially diminishes its ability to provide a full picture of the current state of its investments, its vision of the future, and its plan for getting there. FDA has completed several projects aimed at, among other things, modernizing its IT infrastructure and administrative processes. These projects include a data center migration and consolidation effort and efforts aimed at standardizing data across systems. The agency has also nearly completed one major mission-critical system modernization project that provides capabilities supporting its regulatory mission. Nevertheless, much work remains on FDA’s largest mission-critical system modernization project, MARCS, and a lack of adequate planning, among other things, makes it uncertain when or if it will meet its goals of replacing eight key legacy systems and providing needed functionality. In addition, FDA has not yet fully implemented key IT management capabilities to guide and support its modernization effort, such as IT strategic planning, enterprise architecture development and implementation, and IT human capital planning. 
FDA has completed a major effort to modernize its IT operations and infrastructure by consolidating its data centers. Specifically, the ICT21 data center modernization and migration effort replaced the agency’s aging data center infrastructure with modern equipment and consolidated its data centers. The effort began in 2008 and was completed in 2011. According to FDA, this effort provided the foundation for modern, networked information and shared data resources and positioned the agency to tackle the challenges of building the next generation of application systems and software tools. FDA officials further noted that the new data centers provide users with greater access to information, having decreased unscheduled system downtime, and that the centers have formalized and standardized the agency’s development, test, and production environments to improve operations. FDA has also nearly completed one of its major enterprisewide mission-critical systems modernization efforts—Medwatch Plus—which is estimated to cost about $56 million. Medwatch Plus is to provide a reporting portal for the public to submit adverse event reports as well as the capability to create reports to inform the public of safety problems. FDA receives more than 600,000 voluntary postmarketing adverse event reports annually from manufacturers, health care professionals, and consumers for all FDA-regulated products, many of which are submitted as paper reports. According to the agency, the portal provides a user-friendly electronic submission capability, encouraging the reporting of information in a quality and uniform manner. In May 2010, FDA reported that the agency had deployed the Electronic Safety Reporting Portal. This website can be used to report safety problems related to foods, including animal feed and animal drugs, as well as adverse events occurring on human gene transfer trials.
According to officials, the project was in operations and maintenance, and the agency’s project documentation reported that the project will be enhanced to reflect recent legislation. Another part of the Medwatch Plus project, the FDA Adverse Event Reporting System, is to provide tools for the analysis of adverse events and safety report information. According to FDA, the system will enable the agency to improve the timeliness, accuracy, and usability of its product safety surveillance data by significantly reducing delays and errors associated with manual data entry and coding of paper reports. The system is initially being developed for the analysis of drug and biologic products. FDA estimates that the FDA Adverse Event Reporting System will be deployed in 2012. While FDA has made important progress toward completing ICT21 and Medwatch Plus, considerable work remains to complete the MARCS program. Initiated in 2002, the program is one of the agency’s largest and costliest system efforts, receiving $37 million of FDA’s 2011 modernization and operations funding and having a total estimated cost of $280 million. The need for MARCS arose from problems experienced with FDA’s critical compliance systems, such as OASIS. According to the Program Manager, these and other ORA systems were developed in a stove-piped manner, and thus did not easily interface with other FDA systems in place or being developed. Specifically, the Program Manager noted that, while it is not impossible, it is expensive and difficult to develop these interfaces. As a result, FDA employees did not have immediate access to needed information and often had to make time-consuming efforts to locate the information manually or in other systems. The MARCS program is intended to support ORA’s critical work of safeguarding food, drugs, medical devices, biologics, and veterinary products that the agency regulates.
By enhancing existing applications and developing new systems, it is to provide information to headquarters and field users to perform inspections, compliance activities, and laboratory operations. Specifically, it is to automate the workflow and help track and manage information about firm compliance with FDA’s regulations. In addition, the program is intended to be used by other federal, state, and industry users to help support FDA’s public health mission. For example, the program is expected to provide improvements in interfacing and exchanging data with U.S. Customs and Border Protection to inspect products imported into the United States. Further, the program is intended to eliminate FDA’s existing stove-piped databases to provide automated data sharing across domestic and foreign inspection activities. In this regard, FDA plans to update and replace eight key ORA systems that facilitate FDA’s compliance activities. However, despite its importance to FDA’s overall modernization efforts, much of the planned functionality has not been delivered, and FDA has yet to retire the legacy systems MARCS was intended to replace. A series of rebaselines and changes to accommodate short-term needs resulted in repeated shifts in the approach and revisions to the target dates for completing the program: Since 2002, when the program was initiated, requirements were changed and broadened to include the replacement of six additional legacy systems beyond the two originally planned. In 2005, development was put on hold, and efforts and funding were redirected toward FDA’s data center modernization effort and toward providing web-enabled versions of the two original legacy systems, OASIS and FACTS. The program was rebaselined in 2006, 2007, and 2009 to accommodate additional cost or functionality and the replacement of additional legacy systems. According to FDA, in 2010, the agency updated and revalidated MARCS requirements.
In August 2011, FDA again rebaselined the MARCS program estimates to account for new legislative and resulting regulatory requirements based on the FDA Food Safety Modernization Act. It estimated that the total life-cycle cost would be $282.7 million and planned to deploy a significant portion of MARCS and retire its legacy systems by July 2014. (For a history of MARCS see app. V.) Nonetheless, as of February 2012, FDA still had considerable work to accomplish on MARCS. While the agency deployed a tool—the Predictive Risk-based Evaluation for Dynamic Import Compliance Targeting (PREDICT)—to improve the efficiency of the inspection process through targeting high-risk imports, FDA had not yet been able to retire any of the eight legacy systems MARCS was intended to replace. Further, of the approximately 30 planned service components, or processes, of the program, only 8 were in the implementation or operations and maintenance phases, while the remaining 22 were in earlier phases, such as requirements analysis. Of these 22, FDA had yet to begin work on 12 components. Figure 3 shows the life-cycle phases of the components as provided by FDA. While FDA noted that there are 37 components, for the purpose of reporting status, the agency grouped 6 components into the Field Work Manager component and 3 into Work Assignment and Accomplishment Management Services, resulting in 30 total components. FDA follows HHS’s Enterprise Performance Life Cycle Framework, in which projects pass through 10 life-cycle phases: initiation, concept, planning, requirements analysis, design, development, test, implementation, operations and maintenance, and disposition. One critical management tool for effectively determining the work remaining on complex systems that involve the integration of a number of components is a reliable IMS that is used to monitor all of the program’s work activities, how long the activities will take, and how the activities are related to one another.
The IMS is a top-level schedule that is linked to lower-level schedules that define all of the tasks necessary to complete the project, including work to be performed both by the government and contractors, and that includes all tasks for the life cycle of the project. As such, the IMS provides both a roadmap for systematic execution of a program and a means by which to gauge progress. It is a critical tool for determining what work remains and the expected cost to complete it and for identifying and addressing potential problems. While the Program Manager provided a fiscal year 2011 schedule and multiple 2012 subproject schedules, these documents lacked key information that is required in an IMS. Specifically, the fiscal year 2011 schedule does not identify all current and future tasks for the program, and does not reflect the work to be performed by the government as well as the contractor. The schedule reflects activities through fiscal year 2012, but lacks key information on the program’s milestones and schedules for the rest of the project, which runs beyond fiscal year 2014. Consequently, FDA is only projecting work through the current fiscal year, which does not identify the full scope of the project. Further, the schedule is based on tasks and lower-level schedules of the integration contractor and does not include tasks to be performed by the government. As a result, it does not have the key capability to provide a summary of progress on all lower-level tasks or of the effects of changes to lower-level schedules and tasks on the overall project. Thus, it cannot be used to gauge progress on the entire project and evaluate the effect of changes to individual tasks on the project as a whole. Instead of an IMS, the MARCS contractor program manager noted that FDA and the contractor are using separate schedules to manage the work and are coordinating their schedules at biweekly meetings.
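The core property of an IMS described above, a single top-level schedule that links all lower-level government and contractor tasks so that overall program progress can be gauged and slips in any one task surface at the program level, can be illustrated with a minimal sketch. All task names, owners, durations, and percentages below are hypothetical and are not drawn from the MARCS program:

```python
# Minimal sketch of an IMS-style progress rollup (hypothetical data).
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    owner: str              # "government" or "contractor" -- an IMS covers both
    duration_days: int
    percent_complete: float

def rollup(tasks):
    """Weight each task's progress by its duration to gauge overall progress."""
    total = sum(t.duration_days for t in tasks)
    done = sum(t.duration_days * t.percent_complete for t in tasks)
    return done / total if total else 0.0

# Illustrative lower-level tasks drawn from both parties' schedules.
tasks = [
    Task("Requirements analysis", "government", 30, 1.0),
    Task("Component development", "contractor", 90, 0.5),
    Task("Integration testing",   "government", 30, 0.0),
]
print(f"Program progress: {rollup(tasks):.0%}")  # prints "Program progress: 50%"
```

Because the rollup spans both parties' tasks, a change in any lower-level schedule immediately changes the program-level figure; separate, manually coordinated schedules provide no such single point of visibility.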
FDA officials also told us that they had not developed a detailed schedule of future tasks because there are many unknowns, including funding availability and changes to functionality needed as a result of legislation such as the FDA Food Safety Modernization Act. While our cost estimating guide (GAO-09-3SP) says that a comprehensive schedule should reflect all activities for a project, it recognizes that there can be uncertainties and unknown factors in schedule estimates due to, among other things, limited data. In response to such uncertainties and unknowns, the guidance discusses the need to perform a schedule risk analysis to determine the level of uncertainty and to help identify and mitigate the risks. Without a comprehensive schedule for the rescoped version of the program, FDA increases the risk that it will be unable to successfully execute all activities needed to complete the program, resulting in additional delays in delivering improved functionality and retiring legacy systems. An agency’s chance of success in modernizing its IT systems, particularly for large and costly programs such as MARCS, is improved if it institutes key IT management capabilities. However, FDA has not fully established key IT management capabilities, including IT strategic planning, enterprise architecture, and IT human capital planning. As the agency undertakes its modernization initiatives, an IT strategic plan should serve as the agency’s vision or roadmap and help align its information resources with its business strategies and investment decisions. Further, an enterprise architecture can provide a blueprint for the modernization effort by defining models that describe how an organization operates today (the “as-is” state), and how it intends to operate in the future (the “to-be” state), along with a plan for transitioning to the future state.
In addition, strategic human capital planning is essential to ensuring that an organization has the right number of people with the right mix of knowledge and skills to achieve current and future program results. Until FDA establishes these capabilities, successful completion of its modernization efforts is in jeopardy. As we have previously reported, IT strategic plans serve as an agency’s vision or roadmap and help align its information resources with its business strategies and investment decisions. Further, such a plan is an important asset to document the agency’s vision for the future in key areas of IT management, including enterprise architecture development and human capital planning. Among other things, the plan might include the mission of the agency, key business processes, IT challenges, and guiding principles. Further, a strategic plan is important to enable an agency to consider the resources, including human, infrastructure, and funding, that are needed to manage, support, and pay for projects. For example, a strategic plan that identifies what an agency intends to accomplish during a given period helps ensure that the necessary infrastructure is put in place for new or improved capabilities. In addition, a strategic plan that identifies interdependencies within and across individual IT systems modernization projects helps ensure that the interdependencies are understood and managed, so that projects—and thus system solutions—are effectively integrated. FDA does not have an actionable IT strategic plan that identifies specific goals and corresponding tasks to guide its overall modernization efforts, although our June 2009 report recommended that it develop one. While the agency drafted an IT strategic plan in May 2010, this plan has not been completed or approved by agency executives. A senior technical advisor stated that the plan was not sufficiently detailed or actionable and the agency is revising and updating the plan.
However, the official was unable to provide details on when it would be finalized or available for review. In January 2012, FDA’s CIO stated that the agency was undertaking an extensive effort to collect feedback to inform a strategic direction. Our prior report (GAO-09-523) recommended that FDA develop an IT strategic plan that includes results-oriented goals, strategies, milestones, and performance measures and use this plan to guide and coordinate its modernization projects and activities. Until FDA implements this recommendation, it will lack a comprehensive picture of the goals of its efforts and the strategies that will be used to meet them. Consequently, FDA risks proceeding with IT modernization efforts that are not well planned and coordinated, that are not sufficiently aligned with the agency’s strategic goals, and that include dependent projects that are not synchronized. A key practice in managing an enterprise architecture program is developing a management plan describing in detail the steps to be taken and tasks to be performed in managing the enterprise architecture program, including a detailed work breakdown and estimates for funding and staffing. When planning IT modernization, a to-be enterprise architecture provides a view of what is planned for the agency’s performance, business, data, services, technology, and security architectures, and is supplemented with a plan for transitioning from the as-is to the to-be state. This is critical in order to coordinate the concurrent development of IT systems in a manner that increases the likelihood that systems will be able to interoperate and that they will be able to use the IT infrastructure that is planned going forward. In addition, organizations can develop an architecture in segments—referred to as a segment architecture—that correspond to business areas or domains in order to divide the development process into manageable sections.
According to the Federal Enterprise Architecture Practice Guidance, prioritizing segments should precede building them, and developing the segment architecture should take place before an agency executes its IT projects for a segment. Attempting to define and build major IT systems without first completing either an enterprisewide architecture or, where appropriate, the relevant segment architectures, is risky. We reported in 2009 that FDA had made mixed progress in establishing its enterprise architecture and that the agency did not yet have an architecture that could be used to efficiently and effectively guide its modernization efforts. Since then, the agency’s enterprise architecture has remained incomplete. Specifically, the agency has developed a draft enterprise architecture management plan; however, according to FDA’s Chief Enterprise Architect, the plan needs to be rewritten to reflect recent guidance from OMB and HHS, as well as the new CIO’s vision. In addition, the plan does not address all the elements called for by GAO’s enterprise architecture management maturity framework, such as identifying needed funding and staff resources. The Chief Enterprise Architect estimated that the revised enterprise architecture management plan would be completed in April 2012. Further, FDA has not completed its as-is architecture, particularly in describing its current environment in terms of technology, performance, and security; nor has FDA completed its to-be architecture by describing, for example, desired end-to-end business information flows, or developed an enterprise architecture transition plan. FDA has developed architecture products that describe aspects of the as-is enterprise architecture in terms of business processes, information, and IT systems. 
For example, it has drafted a graphical high-level view of FDA’s business process hierarchy, which shows the core mission processes, mission-enabling processes, and IT capabilities; and has produced a report of current FDA information exchange packages and identified data standards. However, FDA’s architecture products do not adequately describe its as-is environment in terms of technology, performance, and security. For example, although FDA has defined a high-level technical standards review process and identified certain as-is technology products, it has not described enterprise-level as-is technology infrastructure assets, such as common application servers and communications networks that currently support enterprise application systems and services; and FDA’s architecture products do not describe enterprise-level as-is performance issues and security concerns. These descriptions are important since they provide a basis for making decisions on enterprise investments and developing an enterprise transition roadmap. FDA has developed an initial draft of its target enterprise architecture that describes aspects of its to-be environment. The target enterprise architecture is defined in terms of business needs, information, services, technology, and security. For example, it identifies business functions (e.g., facility inspection) performed by FDA, the classes of data (e.g., facility inspection data) used by the business functions (e.g., product review and approval), and the types of technology infrastructure (e.g., enterprise service bus) used across FDA. The target enterprise architecture also includes a technical reference architecture diagram that identifies logical groupings of services and a services integration framework. Nonetheless, the target architecture does not adequately describe FDA’s to-be environment. 
For example, the target architecture does not include to-be end-to-end business information flows that identify the information used by FDA in its business processes, where the information is needed, and how the information is shared to support mission functions. These artifacts are necessary to help FDA identify process gaps and information-sharing requirements among its business functions, data centers, and systems; across business segments; and with external business partners (e.g., life sciences companies and food companies). Moreover, it does not identify enterprise policies for the way information is acquired, accessed, shared, and used within FDA and by its business partners. Further, it does not describe common application components and reusable services expected to be leveraged by all segments and identify as-is cross-agency applications that are expected to be part of the target environment. In addition, the FDA target architecture does not include performance measures that focus on the long-term performance of the entire agency and performance targets established for all key business processes and agency services. This information is important since it establishes a basis for defining the expected performance of related segments and the technical performance of the supporting application systems and services. Moreover, FDA has not adequately described its to-be environment in terms of technology. For example, although the Chief Enterprise Architect indicated that cloud computing services and solutions would be adopted for sharing information internally and externally, the architecture does not yet provide the timelines for transitioning to cloud computing and identify what databases, services, and platforms are to take advantage of cloud-based services. 
Further, FDA has completed only 1 of 12 architecture segments that will make up its enterprise architecture, and continues to conduct modernization and system development efforts for segments it has not completed. Finally, FDA has not developed plans that address the risk of proceeding with modernization projects in the absence of a complete architecture. We previously recommended that FDA accelerate development of its segment and enterprise architecture, including the as-is and to-be architectures and the associated transition plan. As long as its enterprise architecture and segment architectures lag behind its modernization projects, FDA increases the risk that its modernization projects will not conform to its planned environment and that the IT solutions that it pursues will not be defined, developed, and deployed in a way that promotes sharing and interoperability, maximizes shared reuse, and minimizes overlap and duplication. Finally, without a plan to address risks associated with an incomplete target architecture and transition plan, there is no assurance that appropriate actions will be taken, including risk identification and prioritization, risk response, and risk monitoring and control. The success or failure of federal programs, like those of other organizations, depends on having the right number of people with the right mix of knowledge and skills. In our prior work, we have found that strategic human capital management is essential to the success of any organization. Strategic human capital management focuses on two principles that are critical in a modern, results-oriented management environment: People are assets whose value can be enhanced through investment. An organization’s human capital approaches must be aligned to support the mission, vision for the future, core values, goals and objectives, and strategies by which the organization has defined its direction. 
For example, our prior work has shown negative cost and schedule implications for complex services acquisitions at the Department of Homeland Security that did not have adequate staff (see GAO, Department of Homeland Security: Better Planning and Assessment Needed to Improve Outcomes for Complex Service Acquisitions, GAO-08-263 (Washington, D.C.: Apr. 22, 2008)). Key steps in strategic human capital planning include inventorying the skills of the current workforce, determining the skills needed to meet future needs (including long-term goals), analyzing the gaps between current skills and future needs, and developing strategies for filling gaps. However, FDA has not adequately planned for its human capital needs, although our June 2009 report recommended that it do so. Our prior review found that the agency had not inventoried the skills of its IT workforce, determined present or future skills needs, or analyzed gaps. Since our prior review, the agency has made limited progress in assessing its IT human capital needs. In March 2010, FDA reported the results of its workforce assessment of OIM’s Division of Systems. The report documented current workforce characteristics based on a survey of Division of Systems employees and recommended steps for the division to better align its functions and responsibilities with the needs of the centers. However, the survey was limited to only one of OIM’s five divisions (Division of Systems Management), and did not consider work performed by contractors. Further, while the assessment identified staff concerns with their ability to perform current and future tasks, it only provided a snapshot of current capabilities, and did not include an estimate of skills and resources needed to perform future work or an assessment of whether the skills and abilities of the current workforce are sufficient to meet future needs. In August 2011, the agency reported on a more comprehensive study of IT staff skills and resource allocations. This study was also, in part, based on a survey of OIM’s IT staff, and it included all five of OIM’s divisions.
However, the study was focused on current workload information and included staff’s self-reported estimates of calendar year 2010 hours and a prediction of 2011 hours for IT functional areas. The study was not based on an assessment of needs to achieve future IT plans. Further, the study did not include a gap analysis based on future IT plans. Thus, FDA has yet to conduct a full assessment of future needs and develop a plan to address them. When asked about additional plans to address the gaps in its IT human capital planning, the Acting Chief Operating Officer said that further IT human capital assessments and planning would not occur until the new CIO could be briefed on the assessments that have been performed to date and the findings. The CIO stated that workforce modernization is one of the most critical needs for FDA to effectively meet its future IT goals. According to the CIO, each of FDA’s operating divisions was in the process of identifying the skill sets needed to replace OIM staff that departed the agency. The CIO cited shortages in staff that have experience building clinical data warehouses—a critical agency need. The CIO also stated that the agency’s IT staff skills have been limited by inadequate training and added that FDA plans to fill the agency’s human capital gaps through obtaining external expertise and internal development. However, without a human capital plan to guide these efforts, FDA risks not obtaining the right number of people with the right mix of skills to meet its goals. Moreover, beyond deficiencies in its staff skill sets and inadequate training, the agency’s ability to manage IT has also been hindered by changes in leadership. Since 2008, the agency has had five CIOs, potentially hampering its ability to plan and effectively implement a long-range IT strategy. For example, the agency had two acting CIOs during 2011, with a permanent CIO only being selected recently (in October 2011).
According to the former Acting CIO, FDA filled positions with acting officials in order to address specific goals. For example, in March 2011, he was moved from his position as OIM Director of IT Infrastructure to the acting CIO position because FDA considered his expertise essential to completing the data center consolidation effort. However, without a CIO with a broad view of IT strategic goals, the agency was unable to focus on its longer-term objectives. Further, this has led to planning delays in key areas such as IT strategic planning, enterprise architecture development, and human capital management. In September 2011, for example, the agency’s Chief Operating Officer said that IT human capital plans were on hold until the new CIO was in place. We noted previously that one element that influences the likely success of an agency CIO is the length of time the individual in the position has to implement change. For example, our prior work has noted that it can take 5 to 7 years to fully implement major change initiatives in large public and private sector organizations and to transform related cultures in a sustainable manner. In our previous review of FDA’s modernization efforts, we recommended that the agency develop a human capital plan that assesses current skills, determines needs, and analyzes gaps. Until the agency does so and maintains stable leadership to guide its efforts, the agency risks not having adequate management and staff in key areas necessary to effectively manage its IT modernization efforts. Data sharing is critical for FDA to effectively carry out its mission. As previously noted, the agency needs timely access to data to be able to support its product review and approval process, its inspection of imports and manufacturing facilities, and its postmarket surveillance activities.
Further, the agency needs to collect data from and share them with a wide array of partners, including public health organizations, importers, and other federal entities, as well as the general public. Specifically, it needs standardized data to effectively compare information from thousands of drug studies and clinical trials. Both we and the HHS Inspector General have previously identified challenges, such as inconsistent naming conventions, in the agency’s ability to share information, both internally and with external partners. FDA has taken some steps to improve its sharing of data, but much more remains to be done. Specifically, the agency has several initiatives under way to more effectively share its data, including adopting an enterprisewide standard for formatting data, and several projects aimed at enhancing its ability to share data, both internally and with external partners. However, these projects have made mixed progress, and more significant work remains for FDA to fully implement standardized data sharing across the agency. Data standardization includes ensuring that information is submitted and stored in a consistent format using consistent terminology. Developing systems based on the use and enforcement of data standards helps ensure that information collected is complete and consistent and that users of the data exchanged have a common understanding. The ultimate benefit of standardizing data is to make it easier to collect, compare, maintain, and analyze. FDA has made progress in one significant initiative aimed at achieving more effective sharing of data: its adoption of an enterprisewide data standard that can be applied to food, drugs, and medical devices. Specifically, it has adopted an HL7 international health care informatics interoperability standard as its enterprisewide data model.
The standard that the agency has adopted—Reference Information Model, HL7 version 3—provides a set of rules that allow information to be shared and processed in a uniform and consistent manner. For example, it specifies formats for presenting the names of firms or products, descriptions of disease symptoms, or the gender of a patient (e.g., “M” or “Male”). This standardization of data formats should help ensure consistency in how information on products is submitted to FDA; it also should facilitate analysis of the data by making it easier to compare information across products or to identify patterns in large volumes of data (i.e., data mining). As such, it should provide the foundations for FDA’s efforts to standardize data enterprisewide. FDA is applying this standard to multiple categories of products, including food, drugs, and medical devices, in order to facilitate the input, reading, and comparison of information on applicable products submitted to the agency for approval. For example, it has established an Electronic Submissions Gateway, which provides a virtual “mailbox” that accepts submissions of drug studies and other information. In addition, the gateway has an HL7 screening capability that reviews submissions to ensure that they meet FDA’s data standards. This could help drug companies ensure that the data they submit are consistent with the required standard. However, according to the agency, currently only about 60 percent of clinical trial data is being submitted electronically, with the remainder being submitted on paper. The amount of paper submissions hinders the agency’s development and implementation of standardized data for electronic submission. The adoption of electronic submission continues to be limited because its use is voluntary, in that submitters can choose to use the older paper format that does not conform to the data standards.
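A minimal sketch can illustrate the kind of conformance screening described above: checking that submitted fields use a standard's controlled terminology before the record is accepted. The field names and allowed code sets below are invented for illustration and are not the actual HL7 version 3 definitions:

```python
# Hypothetical conformance screen: reject values outside a controlled code set.
ALLOWED = {
    "gender": {"M", "F", "U"},               # illustrative code set, not HL7 v3
    "report_type": {"initial", "followup"},  # illustrative code set
}

def screen(submission: dict) -> list[str]:
    """Return a list of conformance errors; an empty list means the record passes."""
    errors = []
    for field, allowed in ALLOWED.items():
        value = submission.get(field)
        if value not in allowed:
            errors.append(f"{field}: {value!r} not in allowed code set")
    return errors

# A free-text value like "Male" fails, even though a human reads it the same
# way as "M" -- which is exactly why uncontrolled data is hard to compare.
print(screen({"gender": "Male", "report_type": "initial"}))
```

The payoff of such screening comes downstream: once every record is known to use the same codes, comparing or mining records across products becomes a simple equality check rather than a fuzzy-matching problem.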
FDA officials said they are promoting electronic submission of applications and reports by educating submitters on the benefits of electronic submissions. In addition to its adoption of an enterprisewide data standard, FDA has developed an approach to standardizing firm registration data that it receives in a nonstandard format. While these efforts help ensure consistency in data on firms, agency officials acknowledged that there is considerable work remaining to implement data standardization across the agency. Moreover, these officials stated that acquiring staff with needed expertise in areas such as data modeling remains a challenge. For example, FDA is developing a wide array of standards in collaboration with industry representatives to evaluate and reach agreement on how these standards will be implemented and adopted. In addition to its adoption of the HL7 data standard, FDA has several initiatives that are intended to enhance the sharing of data throughout the agency. Of four such initiatives, two have made mixed progress in development, one is in an early stage of development, and the other is on hold pending a reevaluation. Table 2 shows the progress these projects have made since 2009. The Firms Master List Services standardizes and validates the facility name and address data received from imports, registration and listing systems, and inspections. The Firms Master List Services is used by MARCS and Automated Laboratory Management. Janus was intended to provide FDA with a comprehensive clinical-trial and population-health-data warehouse and analytical tools to enable reviewers to search, model, and analyze data, improving FDA’s management of structured scientific data. However, since 2009, this project has only progressed from the planning to the requirements phase. According to the CIO, the project’s requirements became too extensive and limited progress was being made in developing the data warehouse.
The CIO further noted that FDA did not have the needed expertise for a project this size and scope, and further work has been stopped pending reevaluation. Further, the CIO said that when the project is restarted, the agency will use an Agile development approach to provide added capabilities incrementally over shorter time frames to more effectively manage the project. OMB and the Federal CIO Council guidance state that agencies should analyze their business and information environments to determine information-sharing requirements and identify improvement opportunities. The agency’s enterprise architecture should demonstrate information sharing within the agency and with other government agencies. Further, OMB guidance requires federal agencies to analyze the information used in their business processes to indicate where the information is needed and how it is shared to support mission functions. Documenting information flows is an initial step in developing systems and databases that are organized efficiently, are easier to maintain, and meet users’ needs. However, we have previously identified deficiencies in CFSAN’s ability to effectively share information, such as information on recalls of contaminated foods. In particular, CFSAN has 21 different databases and systems that contain information critical to its mission. (See app. VI for details on the center’s systems.) These databases and systems contain information on adverse events; seafood inspection; milk shippers; shellfish shippers; retail food safety inspections; toxicological effects of food ingredients and additives; and FDA research on food, animal feed, veterinary medicine, and cosmetics, among others. The center now has data-sharing initiatives under way, but it has not performed a comprehensive review to identify opportunities for improved data sharing within the center. CFSAN has conducted some work to improve the sharing of data among these systems and databases.
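The data-sharing problem CFSAN faces can be sketched in miniature. The example below is purely illustrative—a hypothetical synonym table and two invented record collections, not CFSAN’s actual systems—showing how a thin search layer that normalizes terms to a shared vocabulary can query disparate collections at once.

```python
# Hypothetical sketch of enterprisewide search over disparate
# collections via a shared vocabulary. The synonym table and the
# record collections are invented for illustration only.

SYNONYMS = {"salmonella enterica": "salmonella", "s. enterica": "salmonella"}

adverse_events = [{"id": 1, "term": "salmonella", "product": "peanut butter"}]
inspections = [{"id": 7, "term": "listeria", "site": "plant A"}]

def normalize(term: str) -> str:
    """Map a free-text term to its canonical vocabulary entry."""
    t = term.strip().lower()
    return SYNONYMS.get(t, t)

def search_all(term: str):
    """Query every collection using the normalized term."""
    t = normalize(term)
    hits = []
    for source, records in [("adverse_events", adverse_events),
                            ("inspections", inspections)]:
        hits += [(source, r) for r in records if normalize(r["term"]) == t]
    return hits

# A synonym query still finds the record stored under the canonical term.
print(search_all("S. Enterica"))
```

Without the shared vocabulary, a search for “S. Enterica” would miss the record stored as “salmonella”—the kind of missed connection that a comprehensive assessment of information-sharing needs is meant to surface.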
For example, according to the agency, the center has plans for a web-based application designed to standardize vocabularies across systems and enable enterprisewide searching of its disparate data collections. Nonetheless, the center has not comprehensively assessed its information-sharing needs and capabilities to identify further opportunities for data sharing and system integration. Such an assessment would examine how information moves between business processes and identify efficiencies that could be gained by grouping related information into corresponding databases. Instead, the center has identified opportunities for data sharing on an ad hoc basis, relying primarily on the expertise of its staff. CFSAN officials acknowledged that integration among its databases could be improved to more effectively share data and streamline processes. For example, certain firms are currently required to access two separate databases to complete the low-acid canned foods registration process. Further, officials noted that the center’s systems were generally created in response to a specific need or legislation and are thus stove-piped, with little overlap of information. However, without identifying opportunities for greater and more efficient information sharing, FDA and CFSAN risk continuing to maintain an IT environment that requires greater effort to access needed information. While FDA has taken several important steps toward modernizing its IT environment, much remains to be done, and these efforts have not been guided by key foundational IT management practices, exposing them to significant risk. Specifically, because FDA does not have a comprehensive list of its systems, it cannot ensure that it is investing in the mix of projects that will best support its mission and that it is managing them appropriately.
Further, while FDA has taken foundational steps for IT modernization—including consolidating and updating its data centers and completing modernization projects for some IT systems—it has experienced ongoing delays and changes of direction in the MARCS program, one of its largest systems modernization efforts. This state of flux is exacerbated by the lack of an IMS for the program, resulting in uncertainty about when, or if, the planned functionality will be delivered and the ORA legacy systems retired. Compounding these concerns, FDA has yet to establish key IT planning and management disciplines that remain essential for carrying out a successful modernization effort. Without an actionable IT strategic plan, a complete enterprise architecture, and attention to its IT human capital needs, FDA will continue to be challenged in completing its modernization efforts. If implemented, our previous recommendations to establish these IT capabilities could help FDA successfully carry out these efforts. Finally, while FDA has taken important steps to improve its sharing of mission-critical data, until CFSAN conducts a full assessment of its data-sharing needs, it may be missing opportunities for increased efficiencies and a reduction in duplication and unnecessary effort. While the agency’s new CIO is reassessing several aspects of FDA’s modernization program, it remains crucial that any future efforts be guided by rigorous and disciplined planning and management. To help ensure the success of FDA’s modernization efforts, we are recommending that the Commissioner of FDA direct the CIO to take the following four actions:
 Take immediate steps to identify all of FDA’s IT systems and develop an inventory that includes information describing each system, such as costs, system function or purpose, and status information, and incorporate use of the system portfolio into the agency’s IT investment management process.
 In completing the assessment of MARCS, develop an IMS that identifies which legacy systems will be replaced and when; identifies all current and future tasks to be performed by contractors and FDA; and defines and incorporates information reflecting resources and critical dependencies.
 Monitor progress of MARCS against the IMS.
 Assess the information-sharing needs and capabilities of CFSAN to identify potential improvements needed to achieve more efficient information sharing among databases, and develop a plan for implementing these improvements.
HHS provided written comments on a draft of this report, signed by the Assistant Secretary for Legislation (the comments are reproduced in app. II). In its comments, the department neither agreed nor disagreed with our recommendations but stated that FDA has taken actions to address many of the issues in our report. HHS stated that FDA’s initiative to modernize its IT infrastructure comprises multiple phases. The first phase includes the data center modernization effort, which the department stated has provided FDA with an advanced computing infrastructure and a production data center with a secure computing environment. According to HHS, this infrastructure modernization and consolidation effort serves as the foundation for all other transition activities and positions FDA to move forward with the second phase: implementing data center operation management and service contract efficiencies while working to modernize and consolidate software systems with similar business processes and to expedite the retirement of legacy systems. Our report recognizes the progress that FDA has made in modernizing its data center infrastructure, and we agree that this effort is a key component of the agency’s overall modernization initiative.
However, as we also noted, over the last decade—and concurrent with its data center modernization effort—FDA has spent tens of millions of dollars on software systems modernization projects that were intended to provide updated functionality and enable the retirement of legacy systems. In particular, FDA spent approximately $160 million from fiscal year 2002 to fiscal year 2011 on MARCS, yet it has repeatedly delayed milestones for delivering capabilities and retiring legacy systems. Moreover, this spending on system development and modernization has occurred in the absence of fully implemented IT management capabilities such as an IT strategic plan, a complete enterprise architecture, and a strategic approach to IT human capital, as well as an IMS for MARCS. HHS also identified several recent efforts that it stated will address issues we raised in our report: FDA’s senior executive team (which includes the CIO) has committed to governing the agency’s IT portfolio. As part of these responsibilities, the team has conducted sessions to identify the top 5 to 10 capabilities that are needed for the agency to meet the challenges of operating in a globalized regulatory environment. Further, to assist in the management of IT investments, FDA’s Office of Information Management is in the process of establishing a new Project Management Office to provide effective services aligned with the agency’s strategic priorities. FDA has initiated several large program or project reviews to identify areas for improvement, potential for streamlining, and projects that should be stopped, continued, or started. Specifically, FDA has evaluated, and halted, the Janus project, and is conducting a detailed review of MARCS. The agency is also revising its draft IT strategic plan and working to define and implement its enterprise architecture. 
FDA is assessing its IT workforce in its Office of Information Management divisions to identify skill-set gaps, develop staff training plans, and identify resource needs. The agency stated that it has set aside training dollars and approved staff training plans, but acknowledged that workforce development activities must be a recurring process in order to ensure that staff skills keep pace with evolving technologies and methodologies. Further, the agency stated that it is committed to placing permanent leadership in all remaining acting positions that report directly to the CIO. Specifically, FDA has posted and closed job vacancy announcements for these positions and is evaluating applicants. As noted in our report, we recognize and support these efforts, many of which have been initiated by the recently hired CIO. The success of these efforts could be enhanced by FDA’s full implementation of the recommendations that we have made in this report and in our 2009 report. Finally, with regard to our recommendation that FDA develop an IT systems inventory that includes information describing each system—such as costs, system function or purpose, and status information—and incorporate use of the system portfolio into the agency’s IT investment management process, FDA provided an inventory of systems after we sent the draft report for review. This inventory included information on 282 IT systems, but did not provide all key information, such as cost and status. Moreover, agency officials stated that the inventory had not yet been validated for completeness and accuracy. HHS also provided technical comments on the report, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Commissioner of the Food and Drug Administration, appropriate congressional committees, and other interested parties.
In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions on matters discussed in this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix VII. Our objectives were to (1) assess the Food and Drug Administration’s (FDA) current portfolio of information technology (IT) systems, including the number of systems in use and under development, and their purpose and costs; (2) assess the status and effectiveness of FDA’s efforts to modernize the mission-critical systems that support its regulatory programs; and (3) examine the agency’s progress in effectively integrating and sharing data among key systems. To assess FDA’s portfolio of IT systems, we reviewed documentation identifying key systems and major modernization initiatives, the Office of Management and Budget’s (OMB) exhibit 300s and exhibit 53s, and a list of FDA’s mission-critical systems. We evaluated FDA’s list of IT systems and modernization initiatives and assessed it against OMB guidance and GAO’s IT investment management framework. We reviewed the agency’s budget submissions and the investments listed in its fiscal year 2011 exhibits 53 and 300 and compared them to other agency documentation providing system descriptions. We interviewed agency officials responsible for developing a portfolio of IT systems and the Chief Information Officer (CIO) to assess the agency’s plans for improving its process of identifying and overseeing a comprehensive IT portfolio. We also reviewed the Department of Health and Human Services’ Enterprise Performance Life Cycle Framework (Washington, D.C.: September 2011) and data reflected on the agency’s federal IT Dashboard.
Further, because Mission Accomplishments and Regulatory Compliance Services (MARCS) was one of the agency’s largest and costliest mission-critical modernization efforts and was considered essential to the Office of Regulatory Affairs’ (ORA) compliance activities, we evaluated the project’s status and whether the effort was following best practices. Specifically, we assessed the program’s documentation, including agency plans, schedules, contractor statements of work, and various components, and interviewed relevant project managers and technical specialists. We compared FDA’s schedules with best practices for developing an integrated master schedule to plan and manage the effort. We also evaluated FDA’s progress in addressing our prior recommendations related to FDA’s implementation of key IT management practices: IT strategic planning, enterprise architecture, and IT human capital planning. To do so, we examined whether policies or processes were in place for IT investment management, human capital, and enterprise architecture. We based our analysis on three frameworks: our IT investment management framework, our framework for strategic human capital management, and our enterprise architecture management maturity framework. The IT investment management framework provides a rigorous, standardized tool for evaluating an agency’s IT investment management processes and a roadmap agencies can use for improving their investment management processes. The framework for strategic human capital management lays out principles for managing human capital. We evaluated FDA’s policies and procedures against this framework. The enterprise architecture management maturity framework describes stages of maturity in managing enterprise architecture. Each stage includes core elements, which are descriptions of a practice or condition that is needed for effective enterprise architecture management.
We evaluated FDA’s implementation of four core elements from stage 2 (Creating the Management Foundation for Enterprise Architecture Development and Use). We did not perform a complete enterprise architecture management maturity framework assessment, and we did not audit specific IT projects to analyze how well the policies and procedures were implemented. To supplement the framework criteria, we used criteria from the Federal Enterprise Architecture Practice Guidance issued by OMB and compared FDA’s progress on its architecture with these criteria. To determine the agency’s progress in effectively integrating and sharing data among key systems, we reviewed project plans, schedules, and other documents describing FDA’s efforts to implement Health Level Seven (HL7) data standardization for the exchange and analysis of information. We also assessed the progress of modernization initiatives aimed at improving standards and data sharing. Specifically, we assessed FDA’s modernization initiatives by comparing the Enterprise Performance Life Cycle stage of the projects in 2009 with the project stages in 2012. We selected FDA’s Center for Food Safety and Applied Nutrition (CFSAN) to assess sharing across databases supporting FDA’s regulatory mission because of previously identified deficiencies in specific functions, such as sharing information on recalls of contaminated foods. We analyzed the number of CFSAN databases, their purposes, and the corresponding IT systems used, and assessed the efforts and methodology used by the center to improve information sharing and exchange between databases against OMB and Federal CIO Council enterprise architecture guidance.
We supplemented our analysis with interviews of the agency’s CIO, Chief Technology Officer, Chief Enterprise Architect, Senior Technical Advisor, and other relevant IT managers regarding management of FDA’s IT portfolio, the status of and plans to modernize key systems such as MARCS, shortfalls in mission-related systems, IT strategic and human capital planning, status of enterprise architecture development, and efforts to improve interoperability of systems that support FDA’s regulatory mission. In addition, we visited FDA facilities at the Port of Baltimore in Baltimore, Maryland, to observe a demonstration of new capabilities to screen imports. We requested and received documentation from FDA on its agencywide modernization projects, including descriptions of their purpose and project summary status reports showing their expected completion dates and other milestones. We conducted this performance audit primarily at FDA’s headquarters in White Oak, Maryland, from March 2011 to March 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. FDA provided us with the following list of 21 mission-critical systems and modernization initiatives in response to our request for the agency’s IT portfolio. The following table provides details on FDA’s IT investments, as described in the agency’s fiscal year 2013 exhibit 53 submission. FDA began the MARCS effort in 2002, and since that time has made several shifts in its approach. 
At that time, ORA envisioned that the program would replace its two key legacy systems, the Operational Administrative System for Import Support (OASIS) and the Field Accomplishments and Compliance Tracking System (FACTS). Since 2002, the program’s requirements have been changed and broadened to include the replacement of six additional legacy systems. In April 2005, FDA developed a design that envisioned a set of integrated service components intended to provide the applications and tools to support the agency’s import operations, field operations, compliance operations, firm management, workload management, and selected aspects of laboratory operations. The agency estimated that development would cost about $75 million and be completed in 2008. However, later in 2005, a decision was made to put this vision for the program on hold and instead implement web-enabled versions of OASIS and FACTS. According to an Office of Information Management (OIM) supervisory IT specialist, the migration to web-enabled systems allowed the agency to implement single sign-on and enabled the legacy systems to integrate more easily with new functionality. According to the Program Manager and contract officials, the decision to implement web-enabled versions was also motivated by vendor plans to halt support for the existing OASIS and FACTS platform and uncertainty about funding for the program. In April 2006, FDA rebaselined the program estimate to include development costs and maintenance costs for the entire program life cycle. FDA estimated that the total life-cycle cost would be $221.4 million and that the investment would end in August 2019. It estimated that development would cost $113.8 million and that most development would be complete by November 2012. According to the Program Manager and contract officials, between 2006 and 2009, FDA’s work included the following:
 In 2006, migration of OASIS and FACTS to a web-enabled version was completed.
 In May 2007, the program was rebaselined again with a slight increase in development costs, to $115 million.
 In 2008, migration to a new operating system, UNIX, was completed.
 In late 2008, the agency began development of the Predictive Risk-based Evaluation for Dynamic Import Compliance Targeting (PREDICT), intended to replace the automated import admissibility screening module of OASIS, which relied on direct inputs of rules, by providing risk ranking, automated database lookups, and warnings in the case of data anomalies or likely violations.
 During this time, additional legacy systems were planned for inclusion in the program, and the agency also developed some of the envisioned support services, such as firm management and a document repository.
In 2009, the collection of legacy systems planned for the program was based on a wide variety of disparate technologies with redundant and inconsistent data. According to officials, the program received multiyear funding to resume development of the system based on the 2005 design. In May 2009, the agency rebaselined the program to accelerate delivery of functionality and include PREDICT. FDA’s rebaselined estimate for the life-cycle cost was $253.6 million, with development costs of $143.3 million, based on completing most development in September 2014. FDA awarded a master integrator contract in late 2009 for incremental development of MARCS by a single integrator. According to FDA, in 2010, the agency updated and revalidated the program’s requirements. According to OMB exhibit 53s from 2004 to 2013, FDA spent approximately $160 million from fiscal year 2002 to fiscal year 2011 on MARCS. Figure 4 shows these expenditures, as well as enacted spending for fiscal year 2012. In August 2011, FDA again rebaselined the program estimates to account for new legislative and regulatory requirements based on the FDA Food Safety Modernization Act.
It estimated that the total life-cycle cost would be $282.7 million and planned to deploy a significant portion of MARCS and retire its legacy systems by July 2014. Table 5 provides details on the program estimates over time. To fulfill its regulatory mission, FDA’s CFSAN relies on various information systems. According to FDA documentation and interviews with agency officials, the center funds 21 databases and their associated systems. These systems fall into seven major categories, such as registration, regulatory management, and adverse events. The following table provides details on the seven categories and a brief description of the systems in each. In addition to the contact named above, key contributions were made to this report by Christie Motley, Assistant Director; Neil Doherty; Anh Le; Jason Lee; J. Chris Martin; Lee McCracken; Umesh Thakkar; Daniel Wexler; Merry Woo; and Charles Youman.
The Food and Drug Administration (FDA), an agency within the Department of Health and Human Services (HHS), relies heavily on information technology (IT) to carry out its mission of ensuring the safety and effectiveness of regulated consumer products. Specifically, IT systems are critical to FDA’s product review, adverse event reporting, and compliance activities. Recognizing limitations in its IT capabilities, the agency has undertaken various initiatives to modernize its systems. GAO was asked to (1) assess FDA’s current portfolio of IT systems, including the number of systems in use and under development, and their purpose and costs; (2) assess the status and effectiveness of FDA's efforts to modernize the mission-critical systems that support its regulatory programs; and (3) examine the agency's progress in effectively integrating and sharing data among key systems. To do this, GAO reviewed information on key FDA systems and interviewed agency officials to determine the status of systems and the effectiveness of key IT management practices, as well as data sharing among key systems. While FDA has taken several important steps toward modernizing its IT environment, much remains to be done. FDA reported spending about $400 million for IT investments in fiscal year 2011; however, the agency currently lacks a comprehensive IT inventory that identifies and provides key information about the systems it uses and is developing. Office of Management and Budget (OMB) and GAO guidance call for federal agencies to maintain such an inventory in order to monitor and manage their IT investments. This inventory should include information on each system, such as costs, functionality or purpose, and status. However, FDA does not have such a comprehensive list of its systems. Instead, the agency points to budget documents required by OMB, which included information on 44 IT investments for fiscal year 2011. 
The agency also provided a partial list of 21 mission-critical systems and modernization initiatives. Nonetheless, agency officials acknowledged that these documents do not identify all FDA’s systems or the complete costs, purpose, or status of each system. Until the agency has a complete and comprehensive inventory, it will lack critical information needed to effectively assess its IT portfolio. Much work remains on FDA’s largest and costliest system modernization effort—the Mission Accomplishments and Regulatory Compliance Services program. This program is estimated to cost about $280 million and is intended to enhance existing applications and develop new systems that provide information for inspections, compliance activities, and laboratory operations. However, much of the planned functionality has not been delivered and its completion is uncertain. Moreover, the program lacks an integrated master schedule identifying all the work activities that need to be performed and their interdependencies. FDA’s Chief Information Officer (CIO) stated that the agency is reevaluating the scope of the initiative. As a result, it is uncertain when or if FDA will meet its goals of replacing key legacy systems and providing modernized functionality to support its mission. In addition, FDA has not yet fully implemented key IT management capabilities essential for successful modernization, as previously recommended by GAO. These include developing an actionable IT strategic plan, developing an enterprise architecture to guide its modernization effort, and assessing its IT human capital needs. This is due in part to the fact that FDA’s IT management structure has been in flux. Since 2008, the agency has had five CIOs, hampering its ability to plan and effectively implement a long-range IT strategy. While the agency recently hired a CIO, without stable leadership and capabilities, the success of FDA’s modernization efforts is in jeopardy. 
The agency currently has initiatives under way to improve its data sharing with internal and external partners, including adoption of an enterprisewide standard for formatting data and several projects aimed at enhancing its ability to share data. Effective data sharing is essential to its review and approval process, inspection of imports and manufacturing facilities, and tracking of contaminated products. However, these projects have made mixed progress, and significant work remains for FDA to fully implement standardized data sharing. Further, FDA’s Center for Food Safety and Applied Nutrition has not comprehensively assessed information-sharing needs to ensure that its systems and databases are organized for effective information sharing. This is needed to help ensure more efficient access to and sharing of key information supporting its mission. GAO is recommending that FDA develop a comprehensive inventory of its IT systems, develop an integrated master schedule for a major modernization effort, and assess information needs to identify opportunities for greater sharing. In commenting on a draft of this report, HHS neither agreed nor disagreed with the recommendations but stated that FDA has taken actions to address many of the issues in the report.
To carry out NNSA’s nuclear weapons and nonproliferation missions, contractors at the eight NNSA sites conduct research, manufacturing, and testing at facilities located at those sites. (See fig. 1.) DOE establishes safety and security requirements based on a categorization of site-specific risks, including hazardous operations and the presence of special nuclear material—material that can be used in producing nuclear weapons. Federal regulations define three categories of nuclear facilities based on the potential significance of radiological consequences in the event of a nuclear accident. These categories range from Hazard category 1 nuclear facilities, with the potential for off-site radiological consequences; to Hazard category 2 nuclear facilities, with the potential for significant on-site radiological consequences beyond the facility that would be contained within the site; to Hazard category 3 nuclear facilities, with the potential for significant radiological consequences in only the immediate area of the facility. In terms of security, DOE’s security orders establish levels of security protection according to a site’s types and quantities of special nuclear material. Special nuclear material is classified according to four levels, from Category I (highest risk) to Category IV (lowest risk). Accordingly, DOE’s sites with Category I nuclear materials—including specified quantities and forms of special nuclear material such as nuclear weapons and nuclear weapons components—require the highest level of security, since the risks may include the theft of a nuclear weapon or the creation of an improvised nuclear device capable of producing a nuclear explosion. As discussed earlier, work activities to support NNSA’s national security missions are largely carried out by M&O contractors. This arrangement has historical roots.
Since the Manhattan Project produced the first atomic bomb during World War II, DOE, NNSA, and predecessor agencies have depended on the expertise of private firms, universities, and others to carry out research and development work and operate the facilities necessary for the nation’s nuclear defense. Currently, DOE spends 90 percent of its annual budget on M&O contracts, making it the largest non-Department of Defense contracting agency in the government. NNSA’s M&O contractors are largely limited liability companies consisting of multiple member companies. Contractors at only two NNSA sites—the KCP and Sandia National Laboratories—are owned by a sole parent corporation. NNSA requires its contractors to adhere to federal laws, departmental regulations, and DOE and NNSA requirements that are provided in the department’s system of directives, including policies, orders, guides, and manuals. The agency incorporates directives into contracts and holds contractors accountable for meeting the associated requirements. Contractors, NNSA, DOE, and other organizations manage and oversee operations through a multitiered approach. First, contractors manage operations, conduct self-assessments, and perform corrective actions to maintain compliance with government expectations. Second, NNSA headquarters organizations (1) set processes and corporate expectations for contractors managing the sites, (2) have primary responsibility for ensuring contractors are performing and adhering to contract requirements, and (3) evaluate contractor performance. Third, NNSA’s field offices oversee the contractors on a daily basis. This includes on-site monitoring and evaluating contractor work activities. Fourth, entities outside of NNSA provide independent oversight of contractor performance. 
In particular, DOE’s Office of Health, Safety, and Security (HSS) is responsible for, among other things, developing the department’s safety and security policy, providing independent oversight of contractor compliance with DOE’s safety and security regulations and directives, and conducting enforcement activities. The Defense Nuclear Facilities Safety Board also provides oversight of nuclear safety that is independent of NNSA and DOE. In implementing its new management and oversight approach in 2007, KCP pursued reforms that sought to (1) streamline operating requirements, (2) refocus federal oversight, and (3) provide clear contractor goals and meaningful incentives. KCP reported that these actions produced a number of benefits, including cost reductions at the site. According to the KCP Field Office and contractor, KCP undertook the following actions: Streamlined operating requirements. The KCP Field Office sought to streamline operating requirements and limit the imposition of new DOE requirements in the future. These changes included eliminating, where possible, some DOE directives and replacing others with industry or site-specific standards, such as quality assurance requirements and emergency management requirements. The contractor remained obligated to meet all applicable federal laws and regulations. According to the contractor, by 2009, the site had reduced 160 operating requirements from specific DOE orders, regulations, and other standards to 71 site operating requirements. For example, the site replaced requirements from DOE’s quality assurance order with quality assurance processes outlined in the International Organization for Standardization’s ISO 9001:2008, an international standard used in private industry to ensure that quality and continuous improvement are built into all work processes. 
KCP was also able to eliminate its on-site fire department by relying on municipal firefighting services to fulfill a DOE requirement for site fire protection capability. To help limit future growth of requirements, KCP implemented a directives change control board. This group, with joint federal Field Office and contractor staff membership, reviews new or revised directives to determine their applicability to the contract, rejecting those requirements not deemed to be relevant. According to a KCP Field Office official, since 2007, the board has rejected 235 of 370 new directives issued by DOE and other sources. The KCP Field Office official noted that directives were rejected either because they did not apply to a nonnuclear site or because their requirements were already covered in KCP’s site-specific standards. Refocused federal oversight. The KCP Field Office sought to refocus federal oversight by (1) changing its approach from reviewing compliance with requirements to monitoring contractor assurance systems for lower-risk activities; (2) exerting greater control over audit findings at the field office level; and (3) increasing its use of external reviewers. First, the field office changed its oversight approach from reviewing compliance of all contractor activities to allowing the contractor to assume responsibility for ensuring performance in lower-risk activities, allowing federal staff to concentrate resources on monitoring high-risk activities such as safety and security. In this approach, the field office moved from traditional “transactional” oversight—in which performance is determined by federal oversight staff checking compliance against requirements—to “systems-based” oversight—in which performance on lower-risk activities is ensured by monitoring the contractor’s systems, processes, and data, including its systems of self-assessment and actions to correct problems. 
According to KCP Field Office officials, federal oversight staff assumed the role of reviewing the contractor’s management and oversight systems, as well as reviewing selected data provided by these systems, to ensure adequate processes were in place to identify and correct problems. Second, the field office exerted greater control over audit findings from external reviews by determining which findings would need to be addressed by the contractor. According to KCP Field Office and contractor officials, this ability to accept or reject audit findings from external reviews enabled the field office to prevent implementation of new requirements that would not be applicable at the site. According to another KCP Field Office official, although the field office had this authority under the reforms, it had not rejected any audit findings. Finally, to revise its oversight approach, the KCP Field Office relied more on third-party assessments or certifications of contractor performance in place of federal oversight reviews, according to a field office official. Such assessments included those by the contractor’s parent corporation, as well as external groups, such as the Excellence in Missouri Foundation, which administers the Missouri Quality Award to promote quality in business in the state. Clear contractor goals and meaningful incentives. KCP Field Office officials noted that, under the reforms, the Field Office and the contractor agreed on five outcome areas for contractor performance, and performance award fees were linked to these outcome areas. This differed from the previous approach, under which performance award fees were linked to meeting headquarters expectations and directive requirements. KCP Field Office officials noted this allowed them to focus performance award fee on “what” a contractor does, rather than on “how” it meets requirements. 
Under the reforms, the five outcome areas on which performance would be evaluated included: (1) meeting product schedule; (2) meeting product specification; (3) managing cost; (4) managing assets and resources, including facilities, inventory, and staff; and (5) meeting contract standards. Under the reforms, each year, the KCP Field Office highlighted performance areas of major importance to encourage the contractor to focus resources on those areas, rather than expending resources on what the field office and contractor agree are less important goals and requirements. In this framework, the contractor is eligible to earn the majority of associated fees as long as adequate performance is achieved. According to the KCP Field Office implementing plan, this differed from the previous approach, under which the contractor needed to exceed performance expectations to earn more than 60 percent of an award fee. KCP Field Office officials noted a key to effective contract management under the reforms was the ability of the field office to hold the contractor accountable by focusing fee on desired outcomes. In implementing the reforms, the site reported it was able to reduce costs in its initial year of implementation, some of which was achieved by decreasing oversight staff. A January 2008 review commissioned by the KCP Field Office to assess cost savings resulting from implementing the reforms reported the Field Office achieved a cost reduction of $936,000 in fiscal year 2007 by eliminating, through attrition, eight full-time staff positions. The total savings this review reported was nearly $14 million (fiscal year 2006 dollars), which comprised cost reductions that had been achieved in fiscal year 2007 directly or indirectly by implementing the KCP reforms. This reported $14 million in cost reductions was about 3 percent of the site’s overall fiscal year 2007 budget of about $434 million. 
According to a KCP Field Office official, no further analyses of cost savings have been conducted since that time. Reviews of the reforms, as well as NNSA and KCP Field Office and contractor officials, cited several important factors that assisted with implementation of the reforms at the site. Key factors included having (1) high-level support from leadership for reforms, (2) site-specific conditions and operations, and (3) a cooperative federal-contractor partnership. High-level support from NNSA and field office leadership and key stakeholders. According to a 2008 KCP Field Office review of lessons learned from implementing the reforms, gaining and maintaining the support of the NNSA Administrator and buy-in from some of the KCP federal staff for changes was critical to their implementation. With the support of the NNSA Administrator, the KCP Field Office Manager was given clear authority and responsibility to make the changes necessary to implement the reforms. According to the 2008 review, implementation required getting support from federal staff at the site, whose oversight activities were likely to change because of the reforms. The KCP Field Office Assistant Manager told us field office staff involved in oversight at the site were initially reluctant to make the necessary changes to their oversight activities—such as moving away from a compliance-type oversight approach to relying on reviews of contractor assurance systems—but they ultimately agreed to the changes. The 2008 KCP Field Office review of lessons learned noted that acceptance by stakeholders was more easily obtained for reforms such as applying industry standards because of the unique operations at KCP, which included lower-risk, nonnuclear activities. These stakeholders included program offices within NNSA. Other stakeholders were more qualified in their support. 
For example, DOE’s HSS reported in a March 2008 review of the KCP reforms that, overall, the reform framework had the potential for providing sufficient federal oversight at reduced cost for the site. The report also found, however, that some weaknesses existed in implementing the reforms, such as the field office not being able to complete a significant percentage of scheduled security oversight reviews and observations in fiscal year 2007 due to staffing shortages and not having adequate reviews of site-specific standards for safeguards and security. Unique site conditions and operations. In selecting KCP to implement the reforms in 2006, the NNSA Administrator noted that, in comparison to NNSA’s other sites, unique conditions existed at the site that enabled implementation of the proposed reforms. These conditions included (1) KCP operations, which are largely manufacturing, were comparable to those of commercial industry, most notably the aerospace industry; (2) activities at the site were largely lower-risk, nonnuclear, and generally did not involve or potentially affect nuclear safety and security; and (3) the site contractor was owned by a single corporate parent—Honeywell—that has, according to a Field Office official, well-developed corporate management systems and a commitment to quality. In addition, the implementation of reforms at KCP was undertaken at a time of broader operational changes at KCP. More specifically, NNSA was in the process of modernizing KCP operations to lower operations and maintenance costs. This included building and relocating to a new modernized production facility and increasing the use of external suppliers for nonnuclear components rather than producing the components in-house. According to a KCP Field Office official, as of April 2014, more than 70 percent of operations had been moved to the new facility. A cooperative federal-contractor partnership. 
The KCP Field Office noted in its April 2008 review of lessons learned from implementing the reforms that development of the reforms was enabled because of a cooperative relationship between the field office and the contractor. According to the review, a steering committee with members from both the KCP Field Office and the contractor managed the implementation of the reforms. These members agreed to the overall objectives and key elements of the reforms early in the process and worked together to develop those key reforms. According to this field office review, this cooperative relationship not only eased implementation of the reforms but assisted in gaining approval for the reforms from NNSA and DOE headquarters officials. The January 2008 study assessing cost reductions resulting from implementing the reforms found that this cooperation between site federal and contractor officials had developed over a period of years. In addition, KCP Field Office officials told us that having the leadership and involvement of the contractor’s parent corporation resulted in greater accountability. According to a 2009 review commissioned by NNSA to assess the reforms, the parent corporation was responsible for setting core processes and policies, determining best practices to be implemented, and ensuring the field office maintained transparency in how the site was managed. This was a change from the previous approach, whereby the contractor adhered to NNSA-set expectations and requirements. In addition, under the reforms, the contractor was allowed to leverage corporate management systems, in place of DOE-required systems, to manage work and performance. KCP Field Office officials noted that, although the contractor was held responsible for the agreed-upon mission performance outcomes, it fell to both the contractor and the parent company to fix any problems. 
According to the 2009 review, allowing the contractor to use corporate management systems encouraged the parent company to take a more active part in providing oversight. Since the 2007 implementation of reforms at KCP, NNSA has taken steps to extend some elements of the site’s reforms to other NNSA sites and to integrate the reforms into subsequent agency-wide initiatives to improve contractor performance and accountability. However, NNSA is revisiting the reforms following a July 2012 security breach at one of its sites, and NNSA’s future plans to continue extending KCP-like reforms at its other sites are currently uncertain. After KCP undertook implementation of its reforms in 2007, NNSA began to implement similar reforms at selected sites and subsequently incorporated elements of the reforms into agency-wide initiatives to improve oversight and management of M&O contractors. At the site level, in 2009, the NNSA Administrator formed an internal team to look at ways of accelerating efforts to implement KCP-like reforms at other NNSA sites, where appropriate. In addition, in February 2010, the NNSA Administrator tasked officials at the Sandia National Laboratories and Nevada Test Site with implementing reforms similar to those implemented at KCP for nonnuclear activities. These two sites were to, among other things, (1) streamline operating requirements by identifying opportunities to eliminate some agency requirements and make greater use of industry standards; (2) refocus federal oversight by, among other things, making greater use of the contractor’s management system; and (3) set clear contractor goals and meaningful incentives following the KCP approach. The two sites were tasked with identifying cost efficiencies associated with implementing these reforms. 
In 2010, NNSA issued two Policy Letters that sought to streamline security requirements for the control of classified information, such as classified documents and electronic media, and for the physical protection of facilities, property, personnel, and national security interests, such as special nuclear material. These two policy letters were included in NNSA’s M&O contracts in place of the corresponding DOE directives. Subsequently, in 2011, NNSA issued a new policy for all of its sites that outlined basic requirements for a new oversight and management approach that had roots in the KCP reforms. This new policy—called “transformational governance”—directed, for example, site oversight staff to focus greater efforts on assessing contractor performance in higher-risk activities, such as security, and, for lower-risk activities, to rely more heavily on monitoring contractor assurance systems. More broadly, DOE was undertaking similar reforms during this period. Specifically, in March 2010, the Deputy Secretary of Energy announced an initiative to revise DOE’s safety and security directives by streamlining or eliminating duplicative requirements, revising federal oversight, and encouraging greater use of industry standards. As we reported in 2012, DOE’s effort resulted in reducing the overall number of directives. For example, DOE reduced its number of safety directives from 80 to 42. However, according to NNSA officials, since the July 2012 security breach at NNSA’s Y-12 National Security Complex in Oak Ridge, Tennessee, some of NNSA’s efforts to extend KCP-like reforms to other sites have been placed on hold or are being revised, and NNSA’s plans on how to further implement KCP-like reforms are still being determined. DOE and NNSA reviews of the security breach indicated that its underlying causes may have been related to implementation of reforms similar to some of those implemented at KCP. 
For example, a 2012 review of the security breach by DOE’s Office of Inspector General noted that a breakdown in oversight, specifically one based on monitoring the contractor’s systems instead of compliance with requirements, did not alert site officials to conditions that led to the breach. In the aftermath of the security breach, NNSA and DOE have moved cautiously to reevaluate or revise reforms, and agency officials told us the agency is still determining how reforms will be implemented in the future. NNSA is currently reevaluating how to implement some of the principal aspects of the KCP reforms identified earlier in this report—streamlining requirements, refocusing federal oversight, and establishing clear contractor goals, including: Streamlining operating requirements. Since the July 2012 Y-12 security breach, NNSA has been reassessing the need for some NNSA-specific policies. For example, NNSA initiated actions to rescind certain NNSA security policies and reinstate DOE’s security directives. NNSA initiated these actions in response to a recommendation made in 2012 by the NNSA Security Task Force—a task force established by the NNSA Administrator in August 2012 to assess NNSA’s security organization and oversight in the wake of the Y-12 security breach. As of March 2014, according to NNSA officials, NNSA sites were in varying stages of incorporating the DOE directives into their contracts and implementing the associated requirements. Refocusing federal oversight. Since the July 2012 Y-12 security breach, NNSA has been reviewing the use of contractor assurance systems in its oversight model and for evaluating contractor performance. According to a February 2013 report by the Office of Inspector General, the July 2012 Y-12 security breach highlighted the negative outcomes that may result when contractor assurance systems are too heavily relied on for federal oversight. 
The February 2013 report noted that the Y-12 contractor’s assurance system did not identify or correct major security problems that led to the security breach, and that while federal oversight staff knew of some security problems, they believed that the agency’s oversight approach of relying on the contractor assurance system prevented them from intervening in contractor activities to correct problems. In reevaluating NNSA’s oversight approaches, according to the Associate Principal Deputy Administrator, the agency is continuing to work on establishing contractor assurance systems but is moving toward using these systems to enable, rather than replace, federal oversight. In addition, according to the official, NNSA has recommitted to strengthening oversight, both by working to ensure sufficient oversight staff are in place in field offices and by leveraging independent oversight by DOE’s HSS. According to NNSA’s Acting Assistant Administrator for Infrastructure and Operations, as of February 2014, the agency was looking at opportunities to evaluate how best to use contractor assurance systems and data in federal oversight of contractor performance and was currently revising its oversight policy. Setting clear contractor goals and meaningful incentives. Prior to the July 2012 Y-12 security breach, NNSA had been reassessing how it evaluated contractor performance and held contractors responsible for meeting agency goals. In fiscal year 2013, NNSA introduced its Strategic Performance Evaluation Plan, which lays out broad, common goals to which each site must contribute to achieve the overall agency mission. According to an NNSA headquarters official, the plan streamlines NNSA evaluation of contractor performance by focusing on each site’s contribution to the common set of desired agency outcomes—such as its nuclear weapons mission, and science and technology objectives. 
The official indicated that NNSA will evaluate each site using a standardized set of ratings as defined in regulation to replace the previous system of unique site-office-developed and site-office-evaluated performance ratings. According to the NNSA official, the Strategic Performance Evaluation Plan should help ensure consistent performance evaluation across the enterprise. Although some opportunities may exist for implementing KCP-like reforms at other NNSA sites, since the Y-12 security breach, NNSA officials and studies we reviewed noted that key factors enabling implementation of the reforms at KCP may not be present across the nuclear security enterprise. As noted above, these factors include having (1) high-level support for such reforms at NNSA headquarters; (2) specific site conditions to enable implementation, such as having a contractor with a single parent corporation and work activities that are solely nonnuclear in nature; and (3) a cooperative federal-contractor relationship. First, regarding high-level headquarters support for extending the KCP reforms, NNSA’s Acting Assistant Administrator for Infrastructure and Operations told us, in February 2014, that critical organizational issues, such as clarifying headquarters’ organization and establishing field office roles and responsibilities for overseeing contractors, were still being discussed within NNSA and need to be settled before moving forward on KCP-like reforms. Second, most NNSA sites differ considerably from KCP (see table 1). For example, reports we reviewed noted that, because most NNSA sites are managed and operated by limited-liability companies made up of multiple member companies, instead of by a single parent corporation, adopting the reforms elsewhere would be challenging. 
According to the January 2008 study commissioned by the KCP Field Office to assess cost reductions from implementing the reforms, having multiple corporate partners could limit successful implementation of KCP-like reforms at other NNSA sites. Specifically, the study notes that a single corporate parent can more easily use existing corporate systems to oversee and manage its subsidiary M&O entity, whereas this model may not work with an M&O contractor having multiple member organizations. In addition, as noted above, the KCP M&O contractor’s parent company was a Fortune 100 company with, according to a KCP Field Office official, a strong commitment to quality. In addition, an April 2008 KCP Field Office review of the reforms noted that implementation was enabled at KCP because the site activities were considered low-risk and nonnuclear. The review stated that it was not clear how to apply similar reforms to other NNSA sites, most of which have some nuclear operations, nuclear or other high-risk materials, or nuclear waste requiring disposition. Further, the March 2008 review by the department’s HSS noted that KCP is a unique operation within NNSA and that careful analysis would need to be done if consideration is given to applying the reforms to other sites, particularly where hazards are more complex or where the contractor’s ability to self-identify and correct program weaknesses is not mature. Third, the January 2008 cost reductions study noted that having a single parent company governing the KCP M&O contractor for decades resulted in establishing a cooperative relationship between the federal government and its contractor. More specifically, the study noted that successful implementation of reforms at KCP resulted, in part, from the mutual trust built between the field office and contractor staff. 
However, a February 2012 National Research Council report that examined NNSA’s management of its three national security laboratories found there had been an erosion of trust between NNSA and its laboratories, and it recommended the agency work toward rebuilding positive relationships with its laboratories. Diminished trust between NNSA and its sites was also highlighted in a recently issued report by a congressional advisory panel, which described the relationship as “dysfunctional.” During the course of our work, in December 2013, the National Defense Authorization Act for Fiscal Year 2014 was enacted (Pub. L. No. 113-66, 127 Stat. 672). The act required the NNSA Administrator to develop a feasibility study and plan for implementing the principles of the KCP pilot at additional facilities in the national security enterprise by June 2014. We agree that further study of the applicability, costs, and benefits of the KCP reforms is warranted, and, in light of the congressional direction to NNSA, we are not making recommendations at this time. We provided a draft of this report to NNSA for its review and comment. In written comments, reproduced in appendix I, NNSA generally concurred with the overall findings of the report. The agency noted that it continues to study the appropriateness of further expansion of the Kansas City Pilot oversight reforms to other sites and implementation of NNSA’s governance policy. NNSA also provided technical comments that we incorporated, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made key contributions to this report are listed in appendix II. In addition to the individual named above, Jonathan Gill, Assistant Director; Nancy Kintner-Meyer; Cynthia Norris; and Kiki Theodoropoulos made key contributions to this report.
NNSA, a separately organized agency within DOE, has had long-standing problems managing its contracts and projects, which GAO has identified as being at high risk for fraud, waste, abuse, and mismanagement. Both DOE and, specifically, NNSA undertook initiatives in 2002 and 2003 to improve contractor performance through revised federal oversight and greater contractor accountability. In 2006, concerned that efforts were moving too slowly, the NNSA Administrator tasked its KCP Field Office and contractor with implementing reforms at that site. House Report 113-102, accompanying H.R. 1960, an early version of the National Defense Authorization Act for Fiscal Year 2014, mandated GAO to review the KCP reforms and issues with extending them to other NNSA sites. This report, among other things, (1) identifies key reforms implemented at KCP and reported benefits; (2) describes key factors NNSA and others identified as helping the site implement reforms; and (3) provides information on how NNSA has implemented and plans to implement similar reforms at other sites. GAO reviewed relevant documents prepared by NNSA, DOE, contractors, and others; visited KCP; and discussed the reforms with cognizant federal officials and contractor staff. During GAO's review, Congress required NNSA to develop a study and plan for implementing the principles of the Kansas City reforms at its other sites. In light of the congressional requirement, GAO is not making additional recommendations at this time. NNSA generally agreed with the findings of this report. 
Key reforms at the National Nuclear Security Administration's (NNSA) Kansas City Plant (KCP)—a site in Missouri that manufactures electronic and other nonnuclear components of nuclear weapons—included (1) streamlining operating requirements by replacing Department of Energy (DOE) requirements with industry standards, where appropriate; (2) refocusing federal oversight to rely on contractor performance data for lower-risk activities; and (3) establishing clear contractor goals and incentives. A 2008 review of the reforms reported that nearly $14 million in cost reductions was achieved at the site by implementing these reforms. NNSA and KCP federal and contractor staff identified key factors that facilitated implementation of reforms at KCP, including the following: High-level support from NNSA and field office leadership. Gaining and maintaining the support of the NNSA Administrator and buy-in of some KCP Field Office staff for changes from the reforms was critical. Unique site conditions and operations. Conditions at KCP enabled implementation of the proposed reforms, including (1) the comparability of the site's activities and operations to those of commercial industry; (2) the site's relatively low-risk, nonnuclear activities generally did not involve or potentially affect nuclear safety and security; and (3) the site was managed by a contractor owned by a single corporate parent with a reputation for quality. A cooperative federal-contractor partnership. A cooperative relationship between the KCP Field Office and the contractor facilitated implementation of the reforms. NNSA has extended to other sites some elements of the reforms, including (1) encouraging greater use of industry standards, where appropriate; (2) directing field office oversight staff to rely more on contractor self-assessment of performance for lower-risk activities; and (3) setting clearer contractor goals by revising how the agency evaluates annual contractor performance. 
However, NNSA and DOE are re-evaluating implementation of some of these reforms after a July 2012 security breach at an NNSA site, where overreliance on contractor self-assessments was identified by reviews of the event as a contributing factor. Moreover, NNSA officials and other studies noted that key factors enabling implementation of reforms at KCP may not exist at NNSA's other sites. For example, most NNSA sites conduct high-hazard activities, which may involve nuclear materials and require higher safety and security standards than KCP. NNSA is evaluating further implementation of such reforms and expects to report to Congress its findings later in 2014.
In November 2003, Congress authorized a new performance-based pay system for members of the SES. According to OPM’s interim regulations, under the new pay system SES members no longer receive annual across-the-board or locality pay adjustments. Agencies are to base pay adjustments for SES members on individual performance and contributions to the agency’s performance by considering such things as the unique skills, qualifications, or competencies of the individual and their significance to the agency’s mission and performance, as well as the individual’s current responsibilities. Specifically, the revised pay system, which took effect in January 2004, replaces the six SES pay levels with a single, open-range pay band and raises the pay cap for all SES members to $145,600 in 2004 (Level III of the Executive Schedule), with a senior executive’s total compensation not to exceed $175,700 in 2004 (Level I of the Executive Schedule). If OPM certifies, and OMB concurs, that the agency’s performance management system, as designed and applied, makes meaningful distinctions based on relative performance, an agency can raise the SES pay cap to $158,100 in 2004 (Level II of the Executive Schedule), with a senior executive’s total compensation not to exceed $203,000 in 2004 (the total annual compensation payable to the Vice President). In an earlier step, to help agencies hold senior executives accountable for organizational results, OPM amended its regulations for senior executive performance management in October 2000.
These amended regulations governing performance appraisals for senior executives require agencies to establish performance management systems that (1) hold senior executives accountable for their individual and organizational performance by linking performance management with the results-oriented goals of the Government Performance and Results Act of 1993 (GPRA); (2) evaluate senior executive performance using measures that balance organizational results with customer satisfaction, employee perspectives, and any other measures agencies decide are appropriate; and (3) use performance results as a basis for pay, awards, and other personnel decisions. Agencies were to establish these performance management systems by their 2001 senior executive performance appraisal cycles. High-performing organizations have recognized that their performance management systems are strategic tools to help them manage on a day-to-day basis and achieve organizational goals. While Education, HHS, and NASA have undertaken important and valuable efforts to link their career senior executive performance management systems to their organizations’ success, senior executives’ perceptions indicate that these three agencies have opportunities to use their career senior executive performance management systems more strategically to strengthen that link. Based on our survey of career senior executives, we estimate that generally less than half of the senior executives at Education, HHS, and NASA feel that their agencies are fully using their performance management systems as a tool to manage the organization or to achieve organizational goals, as shown in figure 1. Further, effective performance management systems are not merely used for once- or twice-yearly individual expectation setting and rating processes. These systems facilitate two-way communication throughout the year so that discussions about individual and organizational performance are integrated and ongoing.
Effective performance management systems work to achieve three key objectives: (1) they strive to provide candid and constructive feedback to help individuals maximize their contribution and potential in understanding and realizing the goals and objectives of the organization, (2) they seek to provide management with the objective and fact-based information it needs to reward top performers, and (3) they provide the necessary information and documentation to deal with poor performers. In this regard as well, generally less than half of the senior executives felt that their agencies are fully using their performance management systems to achieve these objectives, as shown in figure 2. High-performing organizations have recognized that a critical success factor in fostering a results-oriented culture is a performance management system that creates a “line of sight” showing how unit and individual performance can contribute to overall organizational goals and helping individuals understand the connection between their daily activities and the organization’s success. Further, our prior work has identified nine key practices public sector organizations both here and abroad have used that collectively create this line of sight to develop effective performance management systems. To this end, while Education, HHS, and NASA have begun to apply the key practices to develop effective performance management systems for their career senior executives, they have opportunities to strengthen the link between their senior executives’ performance and organizations’ success. An explicit alignment of daily activities with broader results is one of the defining features of effective performance management systems in high-performing organizations.
These organizations use their performance management systems to improve performance by helping individuals see the connection between their daily activities and organizational goals and encouraging individuals to focus on their roles and responsibilities to help achieve these goals. Education, HHS, and NASA require their senior executives to align individual performance with organizational goals in order to hold them accountable for organizational results. Our review of the senior executives’ performance plans showed that all the plans at each agency identified individual performance expectations that aligned with organizational goals. In addition, nearly all of the senior executives at each agency have reported that they communicate their performance expectations to at least a small extent to those whom they supervise. Cascading performance expectations in this way helps individuals understand how they contribute to organizational goals. Still, while most senior executives at each agency indicated that they see a connection between their daily activities and organizational goals to a very great or great extent, fewer of these senior executives felt that their agency’s SES performance management system holds them accountable for their contributions to organizational results to a very great or great extent, as shown in figure 3. These responses are generally consistent with our governmentwide surveys on the implementation of GPRA. In particular, governmentwide, senior executives have consistently reported that they are held accountable for results. Most recently, we reported in March 2004 that 61 percent of senior executives governmentwide feel they are held accountable for achieving their agencies’ strategic goals to a very great or great extent. To reinforce the accountability for achieving results-oriented goals, we have reported that more progress is needed in explicitly linking senior executives' performance expectations to the achievement of these goals. 
Setting specific levels of performance that are linked to organizational goals can help senior executives see how they directly contribute to organizational results. While most senior executives at HHS have set specific levels of performance in their individual performance plans, few senior executives in Education and NASA have identified specific levels. HHS requires its senior executives to set measurable performance expectations in their individual performance plans that align with organizational priorities, such as the department’s “One-HHS” objectives and strategic goals and their operating divisions’ annual performance goals or other priorities. We found that about 80 percent of senior executives’ performance plans identified specific levels of performance linked to organizational goals. For example, a senior executive in CDC set an expectation to “reduce the percentage of youth (grade 9-12) who smoke to 26.5%,” which contributes to CDC’s annual performance goal to “reduce cigarette smoking among youth” and the One-HHS program objective to “emphasize preventive health measures (preventing disease and illness).” However, specifying levels of performance varies across operating divisions. We found that approximately 63 percent of senior executives at FDA versus 80 percent at CDC identified specific levels of performance linked to organizational goals in their individual performance plans. Education requires its senior executives to include critical elements, each with specific performance requirements, in their individual performance plans that align with the department’s goals and priorities, including the President’s Management Agenda, the Secretary’s strategic plan, the Blueprint for Management Excellence, and the Culture of Accountability. We found that approximately 5 percent of senior executives’ performance plans identified specific levels of performance linked to organizational goals. 
NASA requires its senior executives to include seven critical elements, each with specific performance requirements that focus on the achievement of organizational goals and priorities, in their individual performance plans. For example, senior executives’ performance plans include the critical element “meets and advances established agency program objectives and achieves high-quality results,” and specifically “meets appropriate GPRA/NASA Strategic Plan goals and objectives.” Senior executives may modify the performance requirements by making them more measurable or specific to their jobs; however, only about 23 percent of senior executives added performance requirements that are specific to their positions in their individual performance plans. Also, about 1 percent of senior executives have performance expectations with specific levels of performance that are related to organizational goals in their individual plans. As public sector organizations shift their focus of accountability from outputs to results, they have recognized that the activities needed to achieve those results often transcend specific organizational boundaries. Consequently, collaboration, interaction, and teamwork across organizational boundaries are increasingly critical to achieving results. In a recent GAO forum, participants agreed that delivering high performance and achieving goals requires agencies to establish partnerships with a broad range of federal, state, and local government agencies as well as other relevant organizations. High-performing organizations use their performance management systems to strengthen accountability for results, specifically by placing greater emphasis on collaboration to achieve results.
While most senior executives in each agency indicated that they collaborate with others to achieve crosscutting goals, fewer of these senior executives felt that their contributions to crosscutting goals are recognized through their agency’s system, as shown in figure 4. We reported that more progress is needed to foster the necessary collaboration both within and across organizational boundaries to achieve results. As a first step, agencies could have senior executives identify specific programmatic crosscutting goals that would require collaboration to achieve in their individual performance plans. As a next step, agencies could have senior executives name the relevant internal or external organizations with which they would collaborate to reinforce a focus across organizational boundaries. HHS, Education, and NASA are connecting performance expectations to crosscutting goals to varying degrees. While HHS does not require executives to identify programmatic crosscutting goals specific to the individuals in their performance plans, according to an agency official, it holds all senior executives accountable for the crosscutting One-HHS program objectives, such as to increase access to health care. We found that about 67 percent of senior executives’ performance plans identified a programmatic crosscutting goal that would require collaboration to achieve, as shown in figure 5. The extent to which the senior executives’ performance plans identified crosscutting goals varied across operating divisions. For example, 60 percent of the senior executives’ plans in FDA identified crosscutting goals compared with 50 percent of the plans in CDC. Few HHS senior executives clearly identified the specific organization(s) either internal or external with which they would collaborate. 
Positive examples of senior executives’ plans at HHS that included crosscutting goals, as well as either the internal or external organizations with which they would collaborate to achieve these goals, include the following:
A senior executive in the National Institutes of Health set an expectation to work with FDA and other agencies and organizations to accelerate drug development by specifically working on the National Cancer Institute/FDA task force to eliminate barriers and speed development.
A senior executive in the Substance Abuse and Mental Health Services Administration set an expectation to work collaboratively with the Office of National Drug Control Policy, the Department of Energy, and the Office of Juvenile Justice and Delinquency Prevention to increase the use of the National Registry of Effective Programs in other federal agencies to identify and provide for early intervention for persons with or who are at risk for mental health or substance abuse problems.
As required by Education, all senior executives’ performance plans included the general performance expectation: “promotes collaboration and teamwork, including effective union-management relations, where appropriate.” However, only about 32 percent of senior executives’ performance plans identified programmatic crosscutting goals on which they would collaborate and few executives clearly identified the specific organizations with which they would collaborate, as shown in figure 6. As required by NASA, all senior executives’ performance plans included a general expectation: “integrates One-NASA approach to problem-solving, program/project management, and decision making. Leads by example by reaching out to other organizations and NASA centers to collaborate on work products; seeks input and expertise from a broad spectrum ….” This expectation is designed to contribute to achieving NASA’s mission.
Only about 1 percent of the executives clearly identified specific centers in NASA and none of the executives clearly identified specific organizations outside of NASA that they need to collaborate with to achieve crosscutting goals. High-performing organizations provide objective performance information to executives to show progress in achieving organizational results and other priorities, such as customer satisfaction and employee perspectives, and help them manage during the year, identify performance gaps, and pinpoint improvement opportunities. We reported that disaggregating performance information in a useful format could help executives track their performance against organizational goals and compare their performance to that of the organization. HHS, NASA, and Education took different approaches to providing performance information to their senior executives in order to show progress toward organizational goals or priorities. While all three agencies give their components the flexibility to collect and provide performance information to their senior executives, Education also provides performance information agencywide. Of the senior executives in Education, HHS, and NASA who reported that their agency provided performance information to track their work unit’s performance, generally less than half found the performance information to be useful for making improvements, available when needed, or both to a very great or great extent, as shown in figure 7. Education provides various types of performance information to senior executives intended to help them see how they are meeting the performance expectations in their individual performance plans. A tracking system monitors how Education is making progress toward its annual performance goals and supporting action steps. Each action step has milestones that are tracked and reported each month to the officials who developed and have “ownership” for them. 
Education also collects performance information on customer service and employee perspectives. For example, Education uses an automated performance feedback process, whereby customers, coworkers, and employees provide feedback at midcycle and the end of the performance appraisal cycle on how the senior executives are meeting their individual performance expectations and areas for improvement. HHS conducts an annual departmentwide quality of work life survey and disaggregates the survey results for executives and other employees to use. HHS compares the overall high or low results of its survey for HHS as a whole to each operating division and to the component organizations within operating divisions. In the 2003 survey, HHS added questions about the President’s Management Agenda, and each operating division had the opportunity to add specific questions focusing on issues that it believed were important to its employees, such as flexible work schedules or knowledge management issues. In addition, HHS gives operating divisions the flexibility to use other means of collecting and providing performance information, and in turn, FDA and CDC give their centers and offices the flexibility to collect and provide performance information. For example, according to a CDC official, senior executives receive frequent reports, such as the weekly situation reports, to identify priorities and help communicate these priorities among senior executives. In addition, CDC conducts a “pulse check” survey to gather feedback on employees’ satisfaction with the agency and disaggregates the results to the center level. According to an agency official, CDC plans to conduct this survey quarterly. 
An official at NASA indicated that while NASA does not systematically provide performance information to its senior executives on a NASA-wide basis, the centers have the flexibility to collect and provide performance information to their senior executives on programs’ goals and measures and customer and employee satisfaction. This official indicated that NASA uses the results of the OPM Human Capital survey to help identify areas for improvement throughout NASA and its centers. NASA provides the OPM Human Capital survey data to its centers, showing NASA-wide and center-specific results, to help centers conduct their own analyses and identify areas for improvement and best practices. High-performing organizations require individuals to take follow-up actions based on the performance information available to them. By requiring and tracking such follow-up actions on performance gaps, these organizations underscore the importance of holding individuals accountable for making progress on their priorities. Within Education, only the senior executives who developed the action steps for the annual performance goals are to incorporate expectations to demonstrate progress toward the goal(s) in their individual plans. HHS and NASA do not require senior executives to take follow-up actions agencywide, but they encourage their components to have executives take follow-up actions to show progress toward the organizational priorities. Of the senior executives at each agency who indicated that they took follow-up actions on areas of improvement, generally less than two-thirds felt they were recognized through their performance management systems for such actions, as shown in figure 8.
At Education, senior executives who developed the action steps for Education’s annual goals are required to set milestones that are tracked each month using a red, yellow, or green scoring system; assess how they are progressing toward the action steps and annual goals; and revise future milestones, if necessary. According to agency officials, these senior executives are to incorporate these action steps when developing their individual performance plans. Senior executives are also to address the feedback that their supervisors provide about their progress in achieving their performance expectations. HHS as a whole does not require senior executives to take follow-up actions, for example, on the quality of work life survey results, or incorporate the performance information results into their individual performance plans. In addition, FDA and CDC do not require their senior executives agencywide to take any type of follow-up actions. However, FDA centers have the flexibility to require their senior executives to identify areas for improvement based on the survey results or other types of performance information. Similarly, CDC encourages its executives to incorporate relevant performance measures in their individual performance plans. For example, those senior executives within each CDC center responsible for issues identified at emerging issues meetings are required to identify when the issues will be resolved, identify the steps they will take to resolve the issues in action plans, and give updates at future meetings with the CDC Director and other senior officials. NASA does not require its senior executives to take follow-up actions agencywide on the OPM Human Capital Survey data or other types of performance information; rather, it encourages its centers to have their executives take follow-up action on any identified areas of improvement.
However, an agency official stated that NASA uses the results of the survey to identify areas for improvement and that managers are ultimately accountable for ensuring the implementation of the improvement initiatives. High-performing organizations use competencies to examine individual contributions to organizational results. Competencies, which define the skills and supporting behaviors that individuals are expected to demonstrate to carry out their work effectively, can provide a fuller picture of individuals’ performance in the different areas in which they are appraised, such as organizational results, employee perspectives, and customer satisfaction. We have reported that core competencies applied organizationwide can help reinforce behaviors and actions that support the organization’s mission, goals, and values and can provide a consistent message about how employees are expected to achieve results. Education and NASA identified competencies that all senior executives in the agency must include in their performance plans, while HHS gave its operating divisions the flexibility to have senior executives identify competencies in their performance plans. Most of the senior executives in each agency indicated that the competencies they demonstrate help them contribute to the organization’s goals to a very great or great extent. However, fewer of these executives felt that they were recognized through their performance management system for demonstrating these competencies, as shown in figure 9. Education requires all senior executives to include a set of competencies in their individual performance plans. Based on our review of Education’s senior executives’ performance plans, we found that all of the plans, unless otherwise noted, included the following examples of competencies. 
Organizational results—“sets and meets challenging objectives to achieve the Department’s strategic goals.”
Employee perspectives—“fosters improved workforce productivity and effective development and recognition of employees.”
Customer satisfaction—“anticipates and responds to customer needs in a professional, effective, and timely manner.”
NASA requires all senior executives to include a set of competency-based critical elements in their individual performance plans. Based on our review of NASA’s senior executives’ performance plans, we found that all of the plans included the following examples of competencies.
Organizational results—Understands the principles of the President’s Management Agenda and actively applies them; capitalizes on opportunities to integrate human capital issues in planning and performance and to expand e-government and competitive sourcing; and pursues other opportunities to reduce costs and improve service to customers.
Employee perspectives—Demonstrates a commitment to equal opportunity and diversity by proactively implementing programs that positively impact the workplace and NASA’s external stakeholders and through voluntary compliance with equal opportunity laws, regulations, policies, and practices.
Customer satisfaction—Provides the appropriate level of high-quality support to peers and other organizations to enable the achievement of the NASA mission; results demonstrate support of One-NASA and that stakeholder and customer issues were taken into account.
According to an HHS official, the HHS senior executive performance management system, while not competency based, is becoming more outcome oriented. However, operating divisions may require senior executives to include competencies. For example, senior executives in FDA and CDC include specific competencies related to organizational results, employee perspectives, and customer satisfaction in their individual performance plans.
Based on our review of HHS’s senior executives’ performance plans, we found that all of the plans at FDA and CDC and nearly all across HHS identified competencies that executives are expected to demonstrate.
Organizational results—About 94 percent of HHS senior executives’ plans identified a competency related to organizational results. For example, all senior executives’ plans in FDA included a competency to “demonstrate prudence and the highest ethical standards when executing fiduciary responsibilities.”
Employee perspectives—About 89 percent of HHS senior executives’ plans identified a competency related to employee perspectives. For example, senior executives in CDC are required to include a competency to exercise leadership and management actions that reflect the principles of workforce diversity in management and operations in such areas as recruitment and staffing, employee development, and communications.
Customer satisfaction—About 61 percent of HHS senior executives’ plans identified a competency related to customer satisfaction. For example, all senior executives’ plans in FDA included a competency to “lead in a proactive, customer-responsive manner consistent with agency vision and values, effectively communicating program issues to external audiences.”
High-performing organizations seek to create pay, incentive, and reward systems that clearly link employee knowledge, skills, and contributions to organizational results. These organizations recognize that valid, reliable, and transparent performance management systems with reasonable safeguards for employees are the precondition to such an approach. To this end, Education’s, HHS’s, and NASA’s performance management systems are designed to appraise and reward senior executive performance based on each executive’s achievement toward organizational goals as outlined in the executive’s performance plan.
Overall, the majority of senior executives at each agency either strongly agreed or agreed that they are rewarded for accomplishing the performance expectations in their individual performance plan or helping their agency accomplish its goals, as shown in figure 10. These responses are similar to those from our governmentwide survey on the implementation of GPRA. We reported that about half of senior executives governmentwide perceive to a very great or great extent that employees in their agencies received positive recognition for helping their agencies accomplish their strategic goals (GAO-04-38). Safeguards will become especially important under the new performance-based pay system for the SES. Education, HHS, and NASA have built the following safeguards required by OPM into their senior executive performance management policies. Each agency must establish one or more performance review boards (PRB) to review senior executives’ initial summary performance ratings and other relevant documents and to make written recommendations to the agency head on the performance of the agency’s senior executives. The PRBs are to have members who are appointed by the agency head in a way that assures consistency, stability, and objectivity in senior executive performance appraisals. For example, HHS specifically states that each operating division will have one or more PRBs with members appointed by the operating division head. HHS’s PRB members may include all types of federal executives, including noncareer appointees, military officers, and career appointees from within and outside the department. In addition, NASA’s PRB is to evaluate the effectiveness of the senior executive performance management system and report its findings and any appropriate recommendations for process improvement or appropriate policy changes to NASA management. For example, the PRB completed a study on NASA’s senior executive bonus system in 2003.
A senior executive may provide a written response to his or her initial summary rating that is provided to the PRB. The PRB is to consider this written response in recommending an annual summary rating to the agency head. A senior executive may ask for a higher-level review of his or her initial summary rating before the rating is provided to the PRB. The higher-level reviewer cannot change the initial summary rating, but may recommend a different rating to the PRB that is shared with the senior executive and the supervisor. Upon receiving the annual summary rating, senior executives may not appeal their performance appraisals and ratings. We have observed that a safeguard for performance management systems is to ensure reasonable transparency and appropriate accountability mechanisms in connection with the performance management process. Agencies can help create transparency in the performance management process by communicating the overall results of the performance appraisal cycle to their senior executives. Education, NASA, and HHS officials indicated that they do not make the aggregate distribution of performance ratings or bonuses available to their senior executives. In addition, agencies can communicate the criteria for making performance-based pay decisions and bonus decisions to their senior executives to enhance the transparency of the system. Generally, less than half of the senior executives at each agency reported that they understand the criteria used to award bonuses to a very great or great extent, and some senior executives at each agency reported that they do not understand the criteria at all, as shown in figure 11. High-performing organizations make meaningful distinctions between acceptable and outstanding performance of individuals and appropriately reward those who perform at the highest level.
Executive agencies can reward senior executives’ performance in a number of ways: through performance awards or bonuses, nominations for Presidential Rank Awards, or other informal or honorary awards. With the new performance-based pay system for senior executives, agencies are required to have OPM certify and OMB concur that their performance management systems are making meaningful distinctions based on relative performance in order to raise the pay for their senior executives to the highest available level. Recently, the Director of OPM stated that agencies’ SES performance management systems should rely on credible and rigorous performance measurements to make meaningful distinctions based on relative performance in order for the new SES performance-based pay system to succeed. She also noted that while a growing number of agencies have improved in their distributions of SES ratings and awards based on agencies’ fiscal year 2002 rating and bonus data, these data also suggest that more work is needed. Specifically, see the following:
Executive branch agencies rated about 75 percent of senior executives at the highest level their systems permit in their performance ratings in fiscal year 2002, the most current year for which data are available from OPM—a decrease from about 84 percent the previous fiscal year.
When disaggregating the data by rating system, approximately 69 percent of senior executives received the highest rating under five-level systems in fiscal year 2002 compared to about 76 percent in fiscal year 2001, and almost 100 percent of senior executives received the highest rating under three-level systems in both fiscal years 2001 and 2002.
Approximately 49 percent of senior executives received bonuses in fiscal year 2002 compared to about 52 percent the previous fiscal year.
At HHS, about 86 percent of senior executives received the highest possible rating in fiscal year 2003 compared with approximately 99 percent in fiscal year 2002.
While HHS gives its operating divisions the flexibility to appraise their senior executives’ performance using a three-, four-, or five-level performance management system, most of HHS’s operating divisions, including FDA and CDC, rate their senior executives under a three-level system. Almost all of HHS’s senior executives rated under a three-level system received the highest rating of “fully successful” in fiscal years 2002 and 2003. Approximately 23 percent of senior executives rated under a five-level system received the highest rating of “outstanding” in fiscal year 2003 compared with approximately 94 percent in fiscal year 2002. According to its Chief Human Capital Officer, HHS recognizes that its rating systems do not always allow for distinctions in senior executives’ performance, and it has chosen to focus on the bonus process as the method for reflecting performance distinctions. Senior executive bonuses are to provide a mechanism for distinguishing and rewarding the contributions of top performers, specifically for circumstances in which the individual’s work has substantially improved public health and safety or citizen services. Since the fiscal year 2001 performance appraisal cycle, HHS has restricted the percentage of senior executives’ bonuses to generally no more than one-third of each operating division’s senior executives. HHS, including FDA and CDC, is making progress toward distinguishing senior executive performance through bonuses compared to the percentage of senior executives governmentwide who received bonuses, as shown in table 1. Additionally, HHS generally limited individual bonus amounts to no more than 12 percent of base pay for top performers in fiscal year 2003. Most of the senior executives who received a bonus were awarded less than a 10 percent bonus in fiscal year 2003, as shown in table 2. 
Lastly, senior executive responses to our survey indicated that they did not feel HHS was making meaningful distinctions in ratings or bonuses to a very great or great extent. Approximately 31 percent of senior executives felt that their agency makes meaningful distinctions in performance using ratings; approximately 38 percent felt that their agency makes meaningful distinctions in performance using bonuses. NASA uses a five-level system to appraise senior executive performance. More than three-fourths of the senior executives received the highest rating of “outstanding” for the 2003 performance appraisal cycle (July 2002–June 2003), as shown in figure 12. The distribution of senior executives across the rating levels was similar to the previous performance appraisal cycle. NASA’s senior executive bonus recommendations are to be based solely on exceptional performance as specified and documented in senior executives’ performance plans. While NASA established a fixed allocation of bonuses for its organizations based on the total number of senior executives, an organization can request an increase to its allocation. Sixty percent of eligible senior executives within the organization’s bonus allocation may be recommended for bonuses larger than 5 percent of base pay. For the 2003 appraisal cycle, the percentage of senior executives who received bonuses increased from the previous year, as shown in table 3. An agency official indicated that this increase resulted from a study NASA’s PRB conducted on the senior executive bonus system. The PRB reviewed NASA’s bonus system in the context of OPM’s data on senior executive bonuses across federal agencies and recommended that NASA revise its bonus system to move NASA into the upper half of the number and average amount of bonuses given across federal agencies. 
According to the PRB study, NASA made this change to meet its management’s need to reward more senior executives while recognizing that bonus decisions must be based on performance. During NASA’s 2003 appraisal cycle, the Space Shuttle Columbia accident happened. We reviewed the aggregate senior executive performance rating and bonus data for that cycle; however, we did not analyze individual senior executives’ performance appraisals or bonus recommendations or determine if those who received ratings of outstanding, bonuses, or both were involved with the Columbia mission. Lastly, senior executive responses to our survey indicated that about half of the executives felt NASA was making meaningful distinctions in ratings or bonuses to a very great or great extent. Approximately 46 percent of senior executives felt that their agency makes meaningful distinctions in performance using ratings; approximately 48 percent felt that their agency makes meaningful distinctions in performance using bonuses. Education uses a three-level rating system. About 98 percent of senior executives received the highest rating of “successful” in the 2003 performance appraisal cycle (July 2002–June 2003), a slight decrease from the previous performance appraisal cycle when all senior executives received this rating. Education’s senior executive bonus recommendations are to be based on senior executives’ demonstrated results and accomplishments toward the department’s strategic goals and organizational priorities. About 63 percent of senior executives received bonuses in the 2003 appraisal cycle, compared to approximately 60 percent in the previous appraisal cycle. The majority of the senior executives who received bonuses were awarded 5 percent bonuses in the 2003 appraisal cycle, as shown in table 4. Lastly, senior executive responses to our survey indicated that they did not feel Education was making meaningful distinctions in ratings or bonuses to a very great or great extent. 
Specifically, about 10 percent of senior executives felt that their agency makes meaningful distinctions in performance using ratings; about 33 percent felt that their agency makes meaningful distinctions in performance using bonuses. High-performing organizations have found that actively involving employees and stakeholders when developing or refining results-oriented performance management systems helps improve employees’ confidence and belief in the fairness of the system and increase their understanding and ownership of organizational goals and objectives. Further, to maximize the effectiveness of their performance management systems these organizations recognize that they must conduct frequent training for staff members at all levels of the organization. Generally, at Education, HHS, and NASA senior executives became involved in refining the performance management system or participated in formal training on those systems when provided the opportunities. Of the senior executives at each agency who reported that they have been given the opportunity to be involved in refining their agency’s performance management system to at least a small extent, most of these senior executives said they took advantage of this opportunity, as shown in figure 13. Similarly, while less than three-fourths of the senior executives at each agency said formal training on their agency’s performance management system is available to them, most of these senior executives said they participated in the training, as shown in figure 14. At all three agencies, a proportion of senior executives reported that they had no opportunity to become involved with or trained on their performance management systems. At HHS, about 38 percent of senior executives said they did not have the opportunity to be involved in refining their agency’s system, while about 24 percent of senior executives said formal training on their agency’s system was not available to them, as shown in figure 15. 
According to an HHS official, the Office of the Secretary developed the One-HHS objectives, the basis of its senior executive performance management system, with input from the leadership of all HHS staff offices and operating divisions. This official indicated that HHS conducted extensive interviews to develop and validate the goals. All career senior executives were briefed on the goals and offered training on development of outcome-oriented individual performance objectives derived from those goals. The agency official said that the operating divisions had the flexibility to involve their senior executives in customizing the new individual performance plans for their operating divisions. According to HHS’s guidance, the operating divisions are to develop and provide training on the performance management system to their senior executives on areas such as developing performance plans, conducting progress reviews, writing appraisals, and using appraisals as a key factor in making other management decisions. For example, according to an FDA official, the Human Resources Director briefed all of the senior executive directors on how to cascade the FDA Commissioner’s performance plan into their fiscal year 2002 individual plans and incorporate the One-HHS objectives. FDA does not provide regular training to the senior executives on the performance management system; rather the training is provided as needed and usually on a one-on-one basis when a new senior executive joins FDA. The agency official also stated that because few senior executives are joining the agency, regular training on the system is not as necessary. About half of NASA’s senior executives reported that they did not have the opportunity to be involved in refining their agency’s system, while about 21 percent of senior executives said formal training on their agency’s system was not available to them, as shown in figure 16. 
According to an agency official, the NASA Administrator worked with the top senior executives to develop a common set of senior executive critical elements and performance requirements that reflect his priorities and are central to ensuring a healthy and effective organization. The Administrator then instructed the senior executives to review the common critical elements and incorporate them into their individual performance plans. When incorporating the elements into their individual plans, the senior executives have the opportunity to modify the performance requirements for each element to more clearly reflect their roles and responsibilities. According to NASA’s guidance, the centers and offices are to provide training and information on the performance management system to their senior executives. In addition, an official at NASA said that most centers and offices provide training to new senior executives on aspects of the performance management system, such as developing individual performance plans. Also, NASA provides training courses for all employees on specific aspects of performance management, such as writing performance appraisals and self-assessments. Approximately half of Education’s senior executives reported that they did not have the opportunity to be involved in refining their agency’s system, while about one-fourth of the senior executives reported that formal training on their agency’s system was not available to them, as shown in figure 17. An official at Education indicated that senior executives have the opportunity to comment on changes proposed to the performance management system by the Executive Resources Board. In addition, according to Education’s guidance, training for all senior executives on the performance management system is to be provided periodically. 
An agency official said that Education provided training for all managers, including senior executives, on how to conduct performance appraisals and write performance expectations near the end of the performance appraisal cycle last year. The experience of successful cultural transformations in large public and private organizations suggests that it can often take 5 to 7 years until such initiatives are fully implemented and cultures are transformed in a substantial manner. We reported that among the key practices consistently found at the center of successful transformations is to use the performance management system to define responsibility and assure accountability for change. The average tenure of political leadership can have critical implications for the success of those initiatives. Specifically, in the federal government the frequent turnover of the political leadership has often made it difficult to obtain the sustained and inspired attention required to make needed changes. We reported that the average tenure of political appointees governmentwide for the period 1990–2001 was just under 3 years. Performance management systems help provide continuity during these times of transition by maintaining a consistent focus on a set of broad programmatic priorities. Individual performance plans can be used to clearly and concisely outline top leadership priorities during a given year and thereby serve as a convenient vehicle for new leadership to identify and maintain focus on the most pressing issues confronting the organization as it transforms. We have observed that a specific performance expectation in senior executives’ performance plans to lead and facilitate change during transitions could be critical as organizations transform themselves to succeed in an environment that is more results oriented, less hierarchical, and more integrated. 
While many senior executives at each agency reported that their agency’s senior executive performance management system helped to maintain a consistent focus on organizational goals during transitions, the majority of senior executives felt this occurred to a moderate extent or less, as shown in figure 18. According to an agency official, HHS as a whole struggles with transitions between secretaries because each change in leadership brings a change in initiatives. Approximately 25 percent of HHS senior executives’ plans identified performance expectations related to leading and facilitating change in the organization. For example, several senior executives’ plans identified actions the executives were going to take in terms of succession planning and leadership development for their organizations. Specifically, a senior executive in the National Institutes of Health set the expectation to develop a workforce plan that supports the future needs of the office, including addressing such things as succession and transition planning. About 33 percent of senior executives’ plans in FDA and 15 percent in CDC identified performance expectations related to leading and facilitating change. To help address this issue of continuity in leadership and transitions, HHS identified as part of its One-HHS objectives a goal to “implement strategic workforce plans that improve recruitment, retention, hiring and leadership succession results for mission critical positions.” Education requires all senior executives to include a general performance expectation in their performance plans related to change: “initiates new and better ways of doing things; creates real and positive change.” Approximately 98 percent of the senior executives’ plans included this expectation. Almost none of the NASA senior executives’ performance plans identified an expectation related to leading and facilitating change during transitions.
An agency official indicated that while NASA did not set a specific expectation for senior executives to include in their individual performance plans, leading and facilitating change is addressed through several of the critical elements. For example, for the “Health of NASA” critical element, senior executives are to demonstrate actions that contribute to safe and successful mission accomplishment and facilitate knowledge sharing within and between programs and projects. We have reported that NASA recognizes the importance of change management through its response to the Columbia Accident Investigation Board’s findings. NASA indicated that it would increase its focus on the human element of change management and organizational development, among other things, to improve the agency’s culture. Senior executives need to lead the way for federal agencies to transform their cultures to be more results oriented, customer focused, and collaborative in nature to meet the challenges of the 21st century. Performance management systems can help manage and direct this transformation process. Education, HHS, and NASA have undertaken important and valuable efforts, but these agencies need to continue to make substantial progress in using their senior executive performance management systems to strengthen the linkage between senior executive performance and organizational success through the key practices for effective performance management. Consistent with our findings and OPM’s reviews across the executive branch, these agencies must use their career senior executive performance management systems as strategic tools. In addition, as the administration is about to implement a performance-based pay system for the SES, valid, reliable, and transparent performance management systems with reasonable safeguards are critical. 
The experiences and progress of Education, HHS, and NASA should prove helpful to those agencies as well as provide valuable information to other agencies as they seek to use senior executive performance management as a tool to drive internal change and achieve external results. Overall, we recommend that the Secretaries of Education and HHS and the Administrator of NASA continue to build their career senior executive performance management systems along the nine key practices for effective performance management. Specifically, we recommend the following.

The Secretary of Education should reinforce these key practices by taking the following seven actions:

 Require senior executives to set specific levels of performance that are linked to organizational goals to help them see how they directly contribute to organizational goals.

 Require senior executives to identify in their individual performance plans programmatic crosscutting goals that would require collaboration to achieve and clearly identify the relevant internal or external organizations with which they would collaborate to achieve these goals.

 Provide disaggregated performance information from various sources to help facilitate senior executive decision making and progress in achieving organizational results, customer satisfaction, and employee perspectives.

 Require senior executives to take follow-up actions based on the performance information available to them in order to make programmatic improvements, and formally recognize executives for these actions.

 Build in additional safeguards when linking pay to performance by communicating the overall results of the performance management decisions.

 Make meaningful distinctions in senior executive performance through both ratings and bonuses.

 Involve senior executives in future refinements to the performance management system and offer training on the system, as appropriate.
The Secretary of HHS should reinforce these key practices by taking the following seven actions:

 Require senior executives to clearly identify in their individual performance plans the relevant internal or external organizations with which they would collaborate to achieve programmatic crosscutting goals.

 Provide disaggregated performance information from various sources to help facilitate senior executive decision making and progress in achieving organizational results, customer satisfaction, and employee perspectives.

 Require senior executives to take follow-up actions based on the performance information available to them in order to make programmatic improvements, and formally recognize executives for these actions.

 Build in additional safeguards when linking pay to performance by communicating the overall results of the performance management decisions.

 Make meaningful distinctions in senior executive performance through ratings.

 Involve senior executives in future refinements to the performance management system and offer training on the system, as appropriate.

 Set specific performance expectations for senior executives related to leading and facilitating change management initiatives during ongoing transitions throughout the organization that executives should include in their individual performance plans.

The Administrator of NASA should reinforce these key practices by taking the following eight actions:

 Require senior executives to set specific levels of performance that are linked to organizational goals to help them see how they directly contribute to organizational goals.

 Require senior executives to identify in their individual performance plans programmatic crosscutting goals that would require collaboration to achieve and clearly identify the relevant internal or external organizations with which they would collaborate to achieve these goals.

 Provide disaggregated performance information from various sources to help facilitate senior executive decision making and progress in achieving organizational results, customer satisfaction, and employee perspectives.

 Require senior executives to take follow-up actions based on the performance information available to them in order to make programmatic improvements, and formally recognize executives for these actions.

 Build in additional safeguards when linking pay to performance by communicating the overall results of the performance management decisions.

 Make meaningful distinctions in senior executive performance through both ratings and bonuses.

 Involve senior executives in future refinements to the performance management system and offer training on the system, as appropriate.

 Set specific performance expectations for senior executives related to leading and facilitating change management initiatives during ongoing transitions throughout the organization that executives should include in their individual performance plans.

We provided a draft of this report to the Secretaries of Education and HHS and the Administrator of NASA for their review and comment. We also provided a draft of the report to the Directors of OPM and OMB for their information. We received written comments from Education, HHS, and NASA, which are presented in appendixes IV, V, and VI. NASA’s Deputy Administrator stated that the draft report is generally positive and that NASA concurred with all the recommendations and plans to implement them in its next SES appraisal cycle beginning July 1, 2004. HHS’s Acting Principal Deputy Inspector General stated that HHS had no comments upon review of the draft report.
In responding to our recommendations, Education’s Assistant Secretary for Management and Chief Information Officer stated that Education plans to revise its existing senior executive performance management system dramatically given OPM’s draft regulations for the new SES pay for performance system and described specific actions Education plans to take. These actions are generally consistent with our recommendations and their successful completion will be important to achieving the intent of our recommendations. However, Education stated that it does not plan to require the specific identification of the internal/external organizations with which the executives collaborate, as we recommended. We disagree that Education does not need to implement this recommendation. Education is taking important steps by requiring senior executives to include a general performance expectation related to collaboration and teamwork in their individual performance plans, but placing greater emphasis on this expectation is especially important for Education. We reported that Education will have to help states and school districts meet the goals of congressional actions such as the No Child Left Behind Act. Consequently, Education should require senior executives to identify the crosscutting goals and relevant organizations with which they would collaborate to achieve them in order to help reinforce the necessary focus on results. Lastly, Education stated that it has fully implemented our recommendation for providing senior executives disaggregated performance information from various sources to help facilitate decision making and progress in achieving organizational priorities. We disagree that Education has fully implemented this recommendation. 
While we recognize Education’s two sources of agencywide performance information for its senior executives, we also reported that only about one-third of the senior executives who reported that the agency provided performance information felt that the performance information was useful for making improvements and available when needed to a very great or great extent. Consequently, Education should provide all of its senior executives performance information from various sources that is disaggregated in a useful format to help them track their progress toward achieving organizational results and other priorities, such as customer satisfaction and employee perspectives. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will provide copies of this report to other interested congressional parties, the Secretaries of Education and HHS, the Administrator of NASA, and the Directors of OPM and OMB. We will also make this report available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me or Lisa Shames on (202) 512-6806 or at [email protected] or [email protected]. Other contributors are acknowledged in appendix VII. To meet our objective to assess how well selected agencies are creating linkages between senior executive performance and organizational success through their performance management systems, we applied the key practices we previously identified for effective performance management. We focused on agencies’ career Senior Executive Service (SES) members, rather than all senior-level officials, because the Office of Personnel Management (OPM) collects data on senior executives across the government. In addition, career senior executives are common to all three of the selected agencies and typically manage programs and supervise staff. 
We selected the Department of Education, the Department of Health and Human Services (HHS), and the National Aeronautics and Space Administration (NASA) for our review to reflect variations in mission, size, organizational structure, and use of their performance management systems for career senior executives. Within HHS, we selected two of the operating divisions—the Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC)—to determine how HHS’s SES performance management system cascades down to the operating division level. We selected these two operating divisions after reviewing HHS’s strategic plan and its operating divisions’ annual performance plans to identify two agencies that contributed to the same HHS strategic goal(s) through their annual performance goals. We then reviewed the SES population data from OPM’s Central Personnel Data File to verify that the two operating divisions each had a relatively large number of senior executives. We collected and analyzed each agency’s senior executive performance management system policy manual; personnel policies and memorandums; strategic plan and annual performance plan; employee and customer satisfaction survey instruments and analyses, as appropriate; and aggregate trend data for senior executive performance ratings and bonus distributions. In addition, we reviewed OPM’s draft proposed regulations prescribing the criteria agencies must meet to obtain certification of their systems, which OPM provided for review and comment to the heads of departments and agencies, including GAO, on April 28, 2004.
We also assessed the reliability of the senior executive performance rating and bonus data provided by Education, HHS, NASA, and OPM to ensure that the data we used for this report were complete and accurate by (1) performing manual and electronic testing of required data elements; (2) comparing the data to published OPM data, when applicable; and (3) interviewing agency officials knowledgeable about the data. We determined that the data provided by the agencies and OPM were sufficiently reliable for the purposes of this report. We also interviewed the chief human capital officers at Education and HHS as well as officials at all three agencies responsible for managing human capital; implementing the strategic and annual performance plans; and administering agencywide employee and customer satisfaction surveys, as appropriate, and other agency officials identified as having a particular knowledge about issues related to senior executive performance management. In addition, we met with the President of the Senior Executives Association to obtain her thoughts on the new SES performance-based pay structure and performance management in general. We assessed a probability sample of SES individual performance plans at HHS and NASA and all the SES plans at Education using a data collection instrument we prepared in order to identify how senior executives were addressing certain practices—aligning individual performance expectations with organizational goals, connecting performance expectations to crosscutting goals, using competencies, and maintaining continuity during transitions—through their individual performance plans. To randomly select the plans, we collected a list of all current career senior executives as of August/September 2003 from each agency. 
Since HHS’s operating divisions develop their own SES performance plans and implement their performance management systems, we drew the sample such that it would include each operating division and be representative of all of HHS. In addition to the stratified sample for HHS overall, we reviewed all senior executives’ plans at FDA and CDC to ensure that estimates could be produced for these operating divisions. For all three agencies, we reviewed the individual performance plans most recently collected by the human resources offices. We reviewed plans from the performance appraisal cycle for HHS covering fiscal year 2003, for Education covering July 2002–June 2003, and for NASA covering July 2003–June 2004. We selected and reviewed all senior executives’ individual performance plans from Education, a simple random sample from NASA, and a stratified sample from HHS. The sample of SES performance plans allowed us to estimate characteristics of these plans for each of these three agencies. For each agency, the SES population size, number of SES plans in sample, and number of plans reviewed are shown in table 5. We excluded out-of-scope cases from our population and sample, which included senior executives who had retired or resigned, were not career senior executives, or did not have individual performance plans because they were either new executives or on detail to another agency. For HHS, excluding CDC and FDA, we do not know the number of out-of-scope SES plans in the entire senior executive population; however, there were seven out-of-scope SES plans in our sample of performance plans. For this review, we only estimate to the population of in-scope SES plans. All population estimates based on this plan review are for the target population defined as SES performance plans for the most recent year available from each of the three agencies. For Education, we report actual numbers for our review of individual performance plans since we reviewed all the plans.
For HHS and NASA, we produced estimates to the population of all SES performance plans in those agencies for the relevant year. Estimates are produced using appropriate methods for simple random sampling for NASA and for stratified random sampling for HHS. For NASA and for each stratum for HHS, we formed estimates by weighting the data by the ratio of the population size to the number of plans reviewed. For NASA, we considered the 81 plans obtained and reviewed to be a probability sample. The HHS and NASA performance plan samples are subject to sampling error. There was no sampling error for the census review of senior executives’ performance plans for FDA, CDC, and Education. The effects of sampling errors, due to the selection of a sample from a larger population, can be expressed as confidence intervals based on statistical theory. Sampling errors occur because we use a sample to draw conclusions about a larger population. As a result, the sample was only one of a large number of samples of performance plans that might have been drawn. If different samples had been taken, the results might have been different. To recognize the possibility that other samples might have yielded other results, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. The 95 percent confidence intervals are expected to include the actual results for 95 percent of samples of this type. We calculated confidence intervals for this sample using methods that are appropriate for the sample design used. For HHS estimates in this report, we are 95 percent confident that when sampling error is considered, the results we obtained are within ±9 percentage points of what we would have obtained if we had reviewed the plans of the entire study population, unless otherwise noted. For NASA, the 95 percent confidence intervals for percentage estimates are no wider than ±6 percentage points, unless otherwise noted.
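As a rough illustration of the stratified weighting described above, the sketch below combines stratum-level proportions into a population estimate by weighting each stratum by the ratio of its population size to the number of plans reviewed in it. The stratum counts shown are hypothetical, not the actual HHS or NASA figures.

```python
def stratified_proportion(strata):
    """Estimate a population proportion from stratified sample counts.

    strata: list of (N_h, n_h, x_h) tuples, where N_h is the stratum
    population size, n_h the number of plans reviewed in the stratum,
    and x_h the number of reviewed plans exhibiting the characteristic.
    Each stratum proportion x_h / n_h is weighted by its population
    size N_h, then the weighted sum is divided by the total population.
    """
    total_pop = sum(N for N, _, _ in strata)
    return sum(N * (x / n) for N, n, x in strata) / total_pop

# Hypothetical example: three strata (e.g., operating divisions).
strata = [
    (40, 20, 5),    # 40 plans in population, 20 reviewed, 5 with the trait
    (60, 30, 12),
    (100, 25, 10),
]
print(round(stratified_proportion(strata), 3))  # prints 0.37
```

With a single stratum this reduces to the simple random sampling estimate used for NASA, since the one stratum's weight cancels out.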
We administered a Web-based questionnaire to the study population of all career senior executives at Education, HHS, and NASA to obtain information on their experiences with and perceptions of their performance management systems. To identify the respondents for our survey, we collected from each agency a list of all career senior executives and their e-mail addresses as of August/September 2003. We structured the questionnaire around the key practices we identified for effective performance management and included some questions about senior executives’ overall perceptions of their performance management systems. The questions were nearly identical across the agencies, though some introductory language and terminology varied. The complete questionnaire and results are shown in appendix II. Although we surveyed all senior executives, in implementing the survey we found that some executives were out of scope because they had retired or resigned or were not career senior executives; others did not respond. Table 6 contains a summary of the survey disposition for the surveyed cases at the three agencies, and table 7 summarizes why individuals originally included in the target population by each agency were removed from the sample. For Education, we surveyed a total of 57 career senior executives and received completed questionnaires from 41 senior executives for a response rate of 72 percent. For HHS, we surveyed a total of 317 career senior executives and received completed questionnaires from 213 senior executives for a response rate of 67 percent. For NASA, we surveyed a total of 393 career senior executives and received completed questionnaires from 260 senior executives for a response rate of 66 percent. We obtained responses from across Education and from all subentities within HHS and NASA and had no reason to expect that the views of nonrespondents might differ from those of the respondents.
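The response rates quoted above follow directly from the surveyed and completed counts; a quick check (the dictionary below simply restates the counts from the text):

```python
# (surveyed, completed) counts for each agency, from the text
surveyed = {"Education": (57, 41), "HHS": (317, 213), "NASA": (393, 260)}

for agency, (sent, completed) in surveyed.items():
    rate = round(100 * completed / sent)
    print(f"{agency}: {completed}/{sent} = {rate} percent")
# Education: 41/57 = 72 percent
# HHS: 213/317 = 67 percent
# NASA: 260/393 = 66 percent
```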
Consequently, our analysis of the survey data treats the respondents as a simple random sample of the population of senior executives at each of the three agencies. We also examined whether senior executives who had served less than 1 year at an agency tended to respond differently than those with more than 1 year of experience. We found that, on certain questions, individuals who had served as senior executives for less than 1 year were more likely to answer “no basis to judge/not applicable,” and we noted these differences in the report. The estimated percentage of senior executives responding “no basis to judge/not applicable” to questions ranged from 0 to 24 percent. Since this range is relatively wide, we have reported “no basis to judge/not applicable” as a separate response category for each question in appendix II. The particular sample of senior executives (those who responded to the survey) we obtained from each agency was only one of a large number of such samples we might have obtained, and each of these different samples might have produced slightly different results. To recognize the possibility that other samples might have yielded other results, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. For Education, unless otherwise noted, the survey responses have a margin of error within ± 9 percent with a 95 percent level of confidence. For HHS and NASA, unless otherwise noted, the survey responses have a margin of error within ± 4 percent with a 95 percent level of confidence. In addition to sampling error, other potential sources of error associated with surveys, such as question misinterpretation, may be present; nonresponse may also be a source of nonsampling error. We took several steps to reduce these other sources of error.
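The reported margins of error are consistent with a standard calculation for an estimated proportion, with a finite population correction applied because a large share of each study population responded. This sketch assumes the conventional worst-case proportion p = 0.5 and z = 1.96 for a 95 percent confidence level; GAO’s exact method may differ.

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """95 percent margin of error for an estimated proportion, treating
    the n respondents as a simple random sample from a population of N
    and applying a finite population correction."""
    se = math.sqrt(p * (1 - p) / n) * math.sqrt((N - n) / (N - 1))
    return z * se

# Respondent counts (n) and study populations (N) from the text.
print(round(margin_of_error(41, 57), 3))    # Education: 0.082, within the stated +/- 9 percent
print(round(margin_of_error(213, 317), 3))  # HHS: 0.039, within +/- 4 percent
print(round(margin_of_error(260, 393), 3))  # NASA: 0.035, within +/- 4 percent
```

Without the finite population correction the Education figure would be roughly ± 15 percent; the correction matters here precisely because 41 of only 57 executives responded.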
We conducted pretests of the questionnaire with senior executives both in GAO and in the three agencies surveyed to ensure that the questionnaire (1) was clear and unambiguous, (2) did not place undue burden on individuals completing it, and (3) was independent and unbiased. We pretested a paper copy of the survey with three senior executives in GAO who did not work in the human capital area. We then had a human resources professional at each agency review the survey for agency-specific content and language. We conducted six pretests overall with senior executives in the audited agencies: one at Education, three at HHS, and two at NASA. The first four were conducted using a paper version of the questionnaire, and the final two were conducted using the Web version. To increase the response rate for each agency, we sent a reminder e-mail about the survey to those senior executives who did not complete the survey in the initial time frame and conducted follow-up telephone calls to persons who had not completed the survey following the reminder e-mail. The HHS and NASA surveys were available from October 22, 2003, through January 16, 2004, and the Education survey was available from November 3, 2003, through January 16, 2004. We performed our work in Washington, D.C., from August 2003 through March 2004 in accordance with generally accepted government auditing standards.

We administered a Web-based questionnaire to the study population of all career senior executives at Education, HHS, and NASA to obtain information on their experiences with and perceptions of their performance management systems. We structured the questionnaire around key practices we identified for effective performance management. The response rates and margins of error for each agency are as follows. For Education, we surveyed a total of 57 career senior executives and received completed questionnaires from 41 senior executives for a response rate of 72 percent.
Unless otherwise noted, the survey responses have a margin of error within ± 9 percent with a 95 percent level of confidence. For HHS, we surveyed a total of 317 career senior executives and received completed questionnaires from 213 senior executives for a response rate of 67 percent. Unless otherwise noted, the survey responses have a margin of error within ± 4 percent with a 95 percent level of confidence. For NASA, we surveyed a total of 393 career senior executives and received completed questionnaires from 260 senior executives for a response rate of 66 percent. Unless otherwise noted, the survey responses have a margin of error within ± 4 percent with a 95 percent level of confidence.

The information below shows the senior executives’ responses for each question by agency. (The response tables, which reported the percentage of executives answering in each category, are not reproduced here. Most questions used the categories “To a great extent,” “To a moderate extent,” “To a small extent,” “To no extent,” and “No basis to judge/Not applicable”; two statements used an agree/disagree scale; and the collaboration questions offered “Does not apply given my current position.”) The questions were as follows.

1. You see a connection between your daily activities and the achievement of organizational goals.
2. You communicate your performance expectations to the individuals who report to you to help them understand how they can contribute to organizational goals.
3. You see a connection between your daily activities and HHS’s priorities.
4. You collaborate with others to achieve crosscutting goals.
5. You identify strategies for collaborating with others to achieve crosscutting goals.
6. You are recognized through your performance management system for contributing to crosscutting goals.
7. Do you collaborate with other offices within Education to achieve crosscutting goals?
8. Do you collaborate with other agencies or organizations outside of Education to achieve crosscutting goals?
9. Do you collaborate with other operating divisions within HHS to achieve crosscutting goals?
10. Do you collaborate with other agencies or organizations outside of HHS to achieve crosscutting goals?
11. Do you collaborate with other centers within NASA to achieve crosscutting goals?
12. Do you collaborate with other agencies or organizations outside of NASA to achieve crosscutting goals?
13. Your agency formally provides performance information that allows you to track your work unit’s performance.
14. Your agency formally provides performance information that allows you to compare the performance of your work unit to that of other work units.
15. Your agency formally provides performance information that allows you to compare the performance of your work unit to that of your agency.
16. Your agency formally provides performance information that is available to you when you need it.
17. Your agency formally provides performance information that is useful for making improvements in your work unit’s performance.
18. You identified areas for improvement based on performance information formally provided by your agency.
19. You took action on any identified areas of improvement.
20. You documented areas for improvement in your individual performance plan.
21. You are recognized through your performance management system for taking follow-up actions.
22. The competencies you demonstrate help you contribute to the organization’s goals.
23. You are recognized through your performance management system for your demonstration of the competencies.
24. I am rewarded for accomplishing the performance expectations identified in my individual performance plan.
25. I am rewarded for helping my agency accomplish its goals.
26. You understand the criteria used to award bonuses (e.g., cash awards).
27. You understand the criteria used to award pay level adjustments (e.g., an increase from SES level 1 to level 2).
28. Pay level adjustments are dependent on an individual’s contribution to the organization’s goals.
29. Bonuses are dependent on an individual’s contribution to the organization’s goals.
30. Your agency’s SES performance management system uses performance ratings to make meaningful distinctions between acceptable and outstanding performers.
31. Your agency’s SES performance management system uses bonuses to make meaningful distinctions between acceptable and outstanding performers.
32. Your agency uses performance information and documentation to make distinctions in senior executive performance.
33. Your agency provides candid and constructive feedback that allows you to maximize your contribution to organizational goals.
34. You have been given the opportunity to be involved in refining your agency’s SES performance management system.
35. You have been involved in refining your agency’s SES performance management system.
36. Formal training on your agency’s SES performance management system is available to you.
37. You have participated in formal training on your agency’s SES performance management system.
38. Your overall involvement in the SES performance management system has increased your understanding of it.
39. Your agency’s SES performance management system is used as a tool to manage the organization.
40. Your agency’s SES performance management system is used in achieving organizational goals.
41. Your agency’s SES performance management system holds you accountable for your contributions to organizational results.
42. Your agency’s SES performance management system facilitates discussions about your performance as it relates to organizational goals during the year.
43. Your agency’s SES performance management system helps to maintain a consistent focus on organizational goals during transitions, such as changes in leadership (at any level) and change management initiatives.

Education required all of its senior executives to include three critical elements in their individual performance plans for the 2003 performance appraisal cycle (July 2002–June 2003). The critical elements and examples of the related individual and organizational performance requirements include the following.
Leadership, management, and coaching: Takes leadership in promoting and implementing the department’s mission, values, and goals; develops and communicates a clear, simple, customer-focused vision/direction for the organization and customers that is consistent with the department’s mission and strategic goals; fosters improved workforce productivity and effective development and recognition of employees; and promotes collaboration and teamwork, including effective union-management relations, where appropriate.

Work quality, productivity, and customer service: Produces or assures quality products that are useful and succinct, that identify and address problems or issues, and that reflect appropriate analysis, research, preparation, and sensitivity to department priorities and customer needs; anticipates and responds to customer needs in a professional, effective, and timely manner; initiates new and better ways of doing things; and creates real and positive change.

Job specifics: Senior executives are to include performance expectations that are applicable to their individual positions and support their principal offices’ goals as well as the department’s strategic goals and priorities, including the President’s Management Agenda, the Blueprint for Management Excellence, and the Culture of Accountability.

Education sets guidelines for its offices to follow in appraising performance and recommending senior executives for bonuses. The senior executive performance appraisals are to be based on demonstrated results related to Education’s goals and priorities, including the President’s Management Agenda, the Blueprint for Management Excellence, the Culture of Accountability, and the Secretary’s strategic plan.
In addition, the senior executive’s appraisal is to be based on both individual and organizational performance, taking into account results achieved in accordance with the department’s strategic plan and goals, which are developed in accordance with the Government Performance and Results Act of 1993 (GPRA); the effectiveness, productivity, and performance quality of the employees for whom the senior executive is responsible; and meeting equal employment opportunity and diversity goals and complying with merit systems principles. In addition, the responses of customers, coworkers, and employees obtained through the automated performance feedback process are to be considered in determining the senior executive’s performance rating. Senior executives must receive a performance rating of “successful” to be eligible for a bonus. Bonus recommendations are to be based on the senior executive’s demonstrated results and accomplishments toward the department’s strategic goals and organizational priorities; accomplishments should demonstrate how Education’s achievements would not have been possible without the senior executive’s leadership and contribution.

HHS required its senior executives to set measurable, specific performance expectations in their fiscal year 2003 individual performance plans (or performance contracts) that align with HHS’s strategic goals, the “One-HHS” management and program objectives, and their operating divisions’ annual performance goals. According to agency officials, senior executives are to choose the One-HHS objectives and strategic and annual performance goals that relate to their job responsibilities and tailor their individual performance expectations to reflect these responsibilities in their performance plans. The One-HHS objectives, which reflect the program and management priorities of the Secretary, include the following.
Management objectives: The purpose of the objectives is to better integrate HHS management functions to ensure coordinated, seamless, and results-oriented management across all operating and staff divisions of the department.

1. Implement results-oriented management.
2. Implement strategic human capital management.
3. Improve grants management operation and oversight.
4. Complete the fiscal year 2003 competitive sourcing program.
5. Improve information technology management.
6. Administrative efficiencies.
7. Continue implementation of unified financial management system.
8. Consolidate management functions.
9. Achieve efficiencies through HHS-wide procurements.
10. Conduct program evaluations and implement corrective strategies for any deficiencies identified.

Program objectives: The purpose of the objectives is to enhance the health and well-being of Americans by providing for effective health and human services and by fostering strong, sustained advances in the sciences underlying medicine, public health, and social services.

1. Increase access to health care (Closing the Gaps in Health Care).
2. Expand consumer choices in health care and human services.
3. Emphasize preventive health measures (Preventing Disease and Illness).
4. Prepare for and effectively respond to bioterrorism and other public health emergencies (Protecting Our Homeland).
5. Improve health outcomes (Preventing Disease and Illness).
6. Improve the quality of health care (21st Century Health Care).
7. Advance science and medical research (Improving Health Science).
8. Improve the well-being and safety of families and individuals, especially vulnerable populations (Leaving No Child Behind).
9. Strengthen American families (Working Toward Independence).
10. Reduce regulatory burden on providers, patients, and consumers of HHS’s services.
In addition to the annual performance goals, operating divisions may have their senior executives include specific individual performance expectations in their performance plans. According to an agency official, the senior executives in FDA have set expectations in their plans that are relevant to the work in their centers. For example, the senior executives who work on issues related to mad cow disease in the Center for Veterinary Medicine have included goals related to this type of work in their individual performance plans. HHS sets general guidance for operating divisions to follow when appraising senior executive performance and recommending senior executives for bonuses and other performance awards, such as the Presidential Rank Awards. Overall, a senior executive’s performance is to be appraised at least annually based on a comparison of actual performance with expectations in the individual performance plan. The operating divisions are to appraise senior executive performance taking into account such factors as measurable results achieved in accordance with the goals of GPRA; customer satisfaction; the effectiveness, productivity, and performance quality of the employees for whom the executive is responsible; and meeting affirmative action, equal employment opportunity, and diversity goals and complying with the merit systems principles. In recommending senior executives for bonuses, operating divisions are to consider each senior executive’s performance, including the rating and the extent of the executive’s contributions to meeting organizational goals. Senior executives who receive ratings of “fully successful” are eligible to be considered for bonuses. For fiscal year 2003, bonuses generally were to be recommended for no more than one-third of the operating division’s senior executives and awarded to only the exceptional performers. 
Operating divisions were to consider nominating only one or two of their very highest contributors for the governmentwide Presidential Rank Awards. The greatest consideration for bonuses and Presidential Rank Awards was to be given to executives in frontline management positions, with direct responsibility for HHS’s programs.

NASA requires its senior executives to include seven critical elements, which reflect the Administrator’s priorities and NASA’s core values of safety, people, excellence, and integrity, in their individual performance plans for the 2004 performance appraisal cycle (July 2003–June 2004). Senior executives may modify the related performance requirements by making them more specific to their jobs. The seven critical elements and the related performance requirements are as follows.

The President’s Management Agenda: Understands the principles of the President’s Management Agenda and actively applies them; assures maximum organizational efficiency, is customer focused, and incorporates presidential priorities in budget and performance plans; capitalizes on opportunities to integrate human capital issues in planning and performance and expand electronic government and competitive sourcing; and pursues other opportunities to reduce costs and improve service to customers.

Performance requirement: Applicable provisions of the agency human capital plan are implemented; financial reports are timely and accurate; clear measurable programmatic goals and outcomes are linked to the agency strategic plan and the GPRA performance plan; and human capital, e-government, and competitive sourcing goals are achieved.

Health of NASA: Actions contribute to safe and successful mission accomplishment and/or strengthen infrastructure of support functions; increases efficient and effective management of the agency; facilitates knowledge sharing within and between programs and projects; and displays unquestioned personal integrity and commitment to safety.
Performance requirement: Demonstrates that safety is the organization’s number one value; actively participates in safety and health activities, supports the zero lost-time injury goals, and takes action to improve workforce health and safety; meets or exceeds cost and schedule milestones and develops creative mechanisms and/or capitalizes on opportunities to facilitate knowledge sharing; and achieves maximum organizational efficiency through effective resource utilization and management.

Equal opportunity (EO) and diversity: Demonstrates a commitment to EO and diversity by proactively implementing programs that positively impact the workplace and NASA’s external stakeholders and through voluntary compliance with EO laws, regulations, policies, and practices; this includes such actions as ensuring EO in hiring by providing, if needed, reasonable accommodation(s) to an otherwise qualified individual with a disability or ensuring EO without regard to race, color, national origin, sex, sexual orientation, or religion in all personnel decisions and in the award of grants or other federal funds to stakeholder recipients.

Performance requirement: Actively supports EO/diversity efforts; consistently follows applicable EO laws, regulations, Executive Orders, and administration and NASA policies, and the principles thereof, in decision making with regard to employment actions and the award of federal grants and funds; cooperates with and provides a timely and complete response to NASA’s Discrimination Complaints Division, the U.S. Equal Employment Opportunity Commission, and the courts during the investigation, resolution, and/or litigation of allegations of illegal discrimination under applicable EO laws and regulations.
Collaboration: Integrates One-NASA approach to problem solving, program/project management, and decision making; leads by example by reaching out to other organizations and NASA centers to collaborate on work products; seeks input and expertise from a broad spectrum; and demonstrates possession of organizational and interpersonal skills.

Performance requirement: Provides the appropriate level of high-quality support to peers and other organizations to enable the achievement of the NASA mission; results demonstrate support of One-NASA and that stakeholder and customer issues were taken into account.

Professional development: Has a breadth of experience in different organizations, agencies, functional areas, and/or geographic locations; demonstrates continual learning in functional and leadership areas, for example, through advanced education/training or participating in seminars; encourages and supports development and training of assigned staff; and where feasible, seeks, accepts, and encourages opportunities for developmental assignments in other functional areas and elsewhere in NASA, with a focus on broadening agencywide perspective.

Performance requirement: Participates in training/learning experiences appropriate to position responsibilities and to broaden agencywide perspective and actively plans for and supports the participation of subordinate staff in training and development activities.

Meets program objectives: Meets and advances established agency program objectives and achieves high-quality results; demonstrates the ability to follow through on commitments; and individual fits into long-term human capital strategy and could be expected to make future contributions at a higher level or in a different capacity at the same level.

Performance requirement: Meets appropriate GPRA/NASA strategic plan goals and objectives; customers recognize results for their high quality and responsiveness to requirements/agreements.
Implements a fair and equitable performance-based system within organizational component (applicable only for supervisory positions): Implements/utilizes a fair, equitable, and merit/performance-based process/system for the evaluation of individuals for bonuses, promotions, career advancements, and general recognition.

Performance requirement: System reflects the key leadership, teamwork, and professional excellence on which decisions are based; results have credibility with supervisors, subordinates, and peers.

NASA provides guidance for the centers and offices to follow in appraising senior executive performance and recommending executives for bonuses or other performance awards, such as Presidential Rank Awards or incentive awards. The senior executive’s performance appraisal is to focus on results toward the performance requirements specified in the individual performance plan, specifically the achievements that address the agency’s goals rather than the quality of effort expended. In addition, senior executive appraisals are to be based on individual and organizational performance, taking into account such factors as results achieved in accordance with the goals of GPRA; the effectiveness, productivity, and performance of assigned employees; meeting safety and diversity goals; complying with merit system principles; a customer perspective focusing on customer needs and expectations; an employee perspective focusing on employee needs, such as training, internal processes, and tools to successfully and efficiently accomplish their tasks; and a business perspective focusing on outcomes and the social/political impacts that define the role of the agency and the business processes needed for organizational efficiency and effectiveness.
In considering customer, employee, and other stakeholder perspectives for senior executive appraisals, rating officials may use formal mechanisms, such as surveys, or less formal mechanisms, such as unsolicited customer and employee feedback and analysis of personnel data, such as turnover rates, diversity reports, grievances, and workforce awards and recognition. All senior executives with annual summary ratings of “fully successful” or higher are eligible to be considered for bonuses. Bonus recommendations are to be based solely on exceptional performance as specified and documented in the senior executive’s performance plan.

In addition to the individuals named above, Janice Lichty Latimer, Erik Hallgren, Ronald La Due Lake, Mark Ramage, Nyree M. Ryder, and Jerry Sandau made key contributions to this report.
Congress and the administration have established a new performance-based pay system for members of the Senior Executive Service (SES) that is designed to provide a clear and direct linkage between SES performance and pay. Also, GAO previously reported that significant opportunities exist for agencies to hold the SES accountable for improving organizational results. GAO assessed how well selected agencies are creating linkages between SES performance and organizational success by applying nine key practices GAO previously identified for effective performance management. GAO selected the Department of Education, the Department of Health and Human Services (HHS), and the National Aeronautics and Space Administration (NASA). Senior executives need to lead the way to transform their agencies' cultures to be more results-oriented, customer focused, and collaborative in nature. Performance management systems can help manage and direct this process. While Education, HHS, and NASA have undertaken important and valuable efforts to link their career SES performance management systems to their organizations' success, there are opportunities to use their systems more strategically. For example, as indicated by the executives themselves, the agencies can better use their performance management systems as a tool to manage the organization or to achieve organizational goals. As Congress and the administration are reforming SES pay to better link pay to performance, valid, reliable, and transparent performance management systems with reasonable safeguards are critical. Information on the experiences and knowledge of these agencies should provide valuable insights to other agencies as they seek to drive internal change and achieve external results.
IDEA is the primary federal law that addresses the special education and related service needs of children with disabilities, including children with specific learning disabilities, sensory disabilities, such as hearing and visual impairments, and other disabilities, such as emotional disturbance and speech or language impairments. The law requires states to provide eligible children with disabilities a free appropriate public education in “the least restrictive environment,” that is, in an educational setting alongside nondisabled children to the maximum extent appropriate. School districts are responsible for identifying students who may have a disability and evaluating them in all areas related to the suspected disability. In addition, they must re-evaluate children at least once every 3 years, or sooner if conditions warrant a re-evaluation, or if the child’s parents or teacher requests a re-evaluation. Under IDEA, students receive special education and related services tailored to their needs through an IEP, which is a written statement developed by a team of educational professionals, parents, and interested parties at meetings regarding the child’s educational program. If the IEP team determines the child needs extended year services, schools are required by regulations governing IDEA to provide such services beyond the normal school year. Further, the act requires that states have in place a comprehensive system of personnel development designed to ensure an adequate supply of special education, regular education, and related services personnel to provide needed services. IDEA seeks to strengthen the role of parents and ensure they have meaningful opportunities to participate in the education of their children. In particular, IDEA regulations require that parents receive prior notification of IEP meetings and that the meetings be scheduled at a mutually agreed upon time and place. 
The act affords parents other procedural safeguard protections, such as the opportunity to examine their child’s records and to present complaints relating to the identification, evaluation, educational placement of the child, or the provision of a free appropriate public education. Under IDEA, disputes between families and school districts may be resolved through due process hearings, state complaint procedures, or mediation. The Department of Education’s Office of Special Education Programs (OSEP) is responsible for administering IDEA. Education authorizes grants to states, supports research and disseminates best practices, and provides technical assistance to states in implementing the law. Education is also responsible for monitoring states’ compliance with IDEA requirements and ensuring that the law is enforced when noncompliance occurs. Education reviews states’ systems for detecting and correcting noncompliance in the state, including noncompliance at the local level. In the event of noncompliance, Education has the specific authority to employ six sanctions: (1) imposing restrictions or “special conditions” on a state’s IDEA grant award; (2) negotiating a long-term compliance agreement with a state requiring corrective action within 3 years; (3) disapproving a state’s application for funds when the application does not meet IDEA eligibility requirements; (4) obtaining a “cease and desist” order to require a state to discontinue a practice or policy that violates IDEA; (5) withholding IDEA funds in whole or in part depending on the degree of the state’s noncompliance; and (6) referring a noncompliant state to the Department of Justice for appropriate enforcement action. Education’s system for monitoring state compliance with IDEA has been evolving for more than 5 years. 
This evolution is, in part, in response to the stronger accountability and enforcement provisions in the 1997 amendments to IDEA that emphasized the importance of improving educational outcomes for disabled children, including improving high school graduation rates, increasing placement in regular education settings, increasing participation in statewide and districtwide assessment programs, and improving the outcomes of services provided to students with emotional and behavioral disorders. In 1998, Education implemented the Continuous Improvement Monitoring Process, which focused its monitoring efforts on states with the greatest risk of noncompliance and placed increased responsibility on states for identifying areas of weakness. In 2003, Education implemented the Continuous Improvement and Focused Monitoring System (CIFMS), which, among other things, added new state performance reporting requirements to its monitoring system. Officials of some special education advocacy groups with whom we spoke, including the National Association of State Directors of Special Education, commented favorably on these changes. However, the National Council on Disability, which had published a 2000 study critical of Education’s enforcement of IDEA, continued to question whether Education has taken effective actions to remedy the problems reported. Education uses a risk-based system to focus its monitoring efforts, but some data it uses are weak. Education’s monitoring system relies upon states to collect information about their special education programs, assess their own performances, and report these findings to the department annually. In addition, the department selects a limited number of states for further inspection based on a subset of measures. Because this system relies heavily on state data, the department has taken steps in recent years to ensure that states have adequate data collection systems in place. 
However, some of the data are not uniformly measured or are difficult for states to collect. Education officials acknowledged that data variability limits the usefulness of the reported information. Some officials in states we visited attributed these variations in data in part to inadequate guidance from Education and expressed a desire for more direction on how to measure and report these data. To assess their own IDEA compliance, states conduct annual special education performance reviews and report their findings to Education. To conduct these reviews, states have undertaken a variety of activities. In particular, states collect data from local districts, including local graduation rates, student placement rates, and parental involvement information, and analyze these data to identify areas of noncompliance at the local level. Additionally, states obtain input from the public about local special education programs through hearings and surveys. States also review dispute resolution processes, including state complaint systems, to determine the type of problems generating complaints and ensure that complaints are being resolved in a timely fashion. In recent years, Education has required states to include groups of stakeholders in the review process, such as parents, advocates, teachers, and administrators from the special education community. State and local officials work with these stakeholders to identify areas in which they may be out of compliance and create detailed improvement plans to remedy these problems. Several state officials we interviewed said the inclusion of stakeholders has been an improvement in the self-evaluation process. For example, officials in Texas told us that working with stakeholders has helped them better understand the severity of particular problems and subsequently has helped position the state to respond to these problems more efficiently. 
Upon completing the review process, states are required to create detailed improvement plans to address identified deficiencies, which are submitted to Education annually along with the results of their self-reviews through a uniform reporting format. Education implemented this uniform reporting format in recent years to streamline its review process, thereby improving the department’s ability to identify data gaps. Education reviews state-reported data to assess states’ improvement efforts and identify those states most in need of further monitoring and assistance. In recent years, the department has required states to report on those requirements it considers most closely associated with student results, a narrower array of issues than the department previously monitored. These data are focused on performance in five general categories: (1) the provision of educational services in the least restrictive environment, (2) state supervision of IDEA programs, (3) facilitation of parental involvement, (4) student transitions from early childhood programs, and (5) student transitions into post-secondary programs. Education has required states to supply a variety of data for each of these categories. For example, under the state supervision category, states report information regarding the resolution of formal complaints, due process safeguards for students and parents, special education personnel requirements, as well as other supervision data. Officials in 4 of the 5 states we visited said that Education’s narrowed focus has improved the monitoring process by concentrating attention on those areas most likely to affect results for children. Education evaluates the collected data for each state in several ways, including assessing how the measures have changed over time and comparing data for special education students to those for general education students. Education has identified areas of IDEA noncompliance through these screens. 
For example, based on its data review, Education can determine if states have been resolving complaints within IDEA-established guidelines or whether waiting lists have been preventing students from receiving IDEA-guaranteed services. Additionally, according to Education, the department uses selected measures, such as state-reported data on graduation rates, dropout rates, and rates of placement in various educational environments, to determine which states warrant further monitoring and intervention activities, including onsite visits. States that rank low relative to other states on these measures may be selected. In conducting site visits, Education reviews state records, makes visits to selected districts for on-site examination of student records, and assesses state special education systems, such as complaint systems and student assessment programs. Following these visits, Education issues a report of findings and, when noncompliance is found, requires states to produce a corrective action plan. Education policy requires states to implement a remedy and provide evidence of its effectiveness within 1 year of Education approving the state corrective action plan. As shown in figure 1, Education carried out monitoring visits in 31 different states between 1997 and 2002, visiting between 2 and 8 states per year. Because Education has relied heavily on state-reported data, it suspended its usual monitoring visits in 2003 in order to conduct visits to verify the reliability of state systems for collecting and reporting special education data. After reviewing selected data from all states, Education selected 24 states for onsite examination of their data collection procedures and protocols. Following the data verification visits, the department provided states with technical assistance to address any identified deficiencies. 
According to Education documents, most of the visited states had data collection systems in place, several of which were of high quality; however, some states needed to better monitor the accuracy of their data and train their data entry personnel. Education officials said that selected states will receive monitoring visits in the fall of 2004. While Education has facilitated improvements in state data collection systems, some of the data are weak. Education has allowed states flexibility in measuring and reporting some performance measures used for site visit selection, such as graduation rates and dropout rates; consequently, these data have not been calculated in a uniform manner across states. For example, special education students in Arkansas may receive a standard diploma even if they have not met regular graduation requirements, while special education students in Delaware must meet regular graduation requirements to graduate with a standard diploma. State education officials we talked with said that comparisons of these rates might not be valid because of the wide disparity in how graduation rates are measured and, therefore, should not be used by Education to make judgments about the relative performance of states. Other types of information that Education has used to evaluate states’ performance, such as student transitions and parental involvement, are weak because they have been difficult to gather or because states have been unclear about how to measure them. States have used a variety of methods to report these data; consequently, Education has not compared states’ performance in these areas. Officials in all 5 states we visited noted that student transitions data were particularly difficult to collect because several different agencies were involved in the process and it was often difficult to track students once they left school. Officials in 4 of the 5 states we visited also expressed confusion about how to report parental involvement. 
For example, officials in one state were unclear about whether they should report the percent of parents notified of meetings or the percent of parents who attended meetings, while officials from another state believed that the measures they used to report parent involvement did not adequately describe parent involvement. Officials in 2 of the 5 states we visited attributed their difficulty in collecting and reporting these measures in part to inadequate guidance from Education, and officials in 3 of the 5 states we visited expressed a desire for greater guidance from the department on how to collect and measure these areas. In our review of Education guidance, we found the direction provided to states in terms of what to measure and report to Education in these areas was vague, as Education does not specify how states should demonstrate performance. For example, Education provides states with 17 potential sources for indicators to measure student transitions into postsecondary programs but does not specify which of these indicators should be reported to Education in annual reports. Education officials with whom we spoke acknowledged difficulties with student transition and parent involvement data and said that they are taking steps to improve data quality. To help address data deficiencies, Education has funded the National Center for Special Education Accountability Monitoring, which assists states, local agencies, and the department in the development of data collection systems. In working with state special education directors, special education advocates, Education officials, and others, the center has found that reliable data sources often do not exist for several of the data elements collected by Education. Our analysis of Education monitoring reports for states visited between 1997 and 2002 showed that failures directly affecting services to children were about as common as failures involving violations of procedural requirements. 
Education identified a total of 253 noncompliance findings in 30 of the 31 states visited during this period, with an average of approximately 8 findings per state. Our analysis showed that 52 percent of the findings involved state failures to directly ensure that students were receiving required special education services. As shown in table 1, the most common finding of service noncompliance was failure to adequately provide related services intended to assist learning, such as counseling, speech pathology, and assistive technology. Another common deficiency Education cited was failure to adequately outline the activities and training planned to prepare a student for life after exiting school. Of the 12 states that were cited for not having adequate special education or related services personnel, some acknowledged that a personnel shortage had prevented them from always making timely evaluations, which could have resulted in delayed services, late placement decisions, and limited provision of extra help that would be needed to teach special education students in regular education settings. The remaining 48 percent of Education’s findings were for compliance failures that we classified as procedural in nature, that is, activities that did not directly provide or immediately facilitate a service to students. According to our analysis, Education’s most common finding of procedural noncompliance with IDEA was failure to invite some of the appropriate parties to student transition meetings where parents, school personnel, department representatives, and the students themselves determined what educational and vocational training they would need before they left school. Other procedural failures, shown in table 2, often involved the completeness of paperwork or timeliness of meeting other IDEA requirements. 
For instance, the department found that in several states, notices sent to parents regarding upcoming IEP meetings related to student transition did not include required information such as the purpose of the meeting and the list of who was invited. Similarly, some states did not produce written complaint decisions in a timely manner that outlined how complaints were resolved. When Education has identified noncompliance, it typically has offered technical assistance to states and required them to create corrective action plans; however, states have generally not resolved the noncompliance in a timely manner. Most cases of noncompliance have remained open for several years before closure, and some cases dating back as far as 1997 have not yet been completely resolved. Education’s process for correcting deficiencies consisted of several phases, each of which took a considerable amount of time to complete. For example, on average, 1 year elapsed from Education’s monitoring visit to issuance of its report findings. The department has also made limited use of sanctions to address longstanding issues with noncompliance, but in these cases, too, resolution has been protracted. Further, we found that the 1-year compliance deadlines specified by Education were often missed. State officials commented, and Education officials confirmed, that this standard 1-year timeframe for correction may not, in some cases, provide an adequate period of time in which to implement a remedy and demonstrate its effectiveness. To address noncompliance situations that are expected to take more than 1 year to correct, 3-year compliance agreements may allow states to plan their remedial steps over a longer period. To resolve the deficiencies identified in 30 of the 31 states visited from 1997-2002, Education offered technical assistance to states and required them to develop corrective action plans and submit them to the department for approval. 
The department assisted states in achieving compliance through informal guidance and, in some cases, follow-up visits to confirm states’ actions. Education officials answered questions regarding policies and best practices as well as referred states to regional resource centers and other technical assistance providers if needed. Also, Education required states to create corrective action plans and submit them to the department for review and approval. The plans were expected to include strategies to remedy deficiencies and demonstrate the effectiveness of the remedy within a year of the approval date of the plan. For example, Maryland was cited for failure to ensure that students with disabilities were educated in regular education settings to the maximum extent possible. To address this violation, one of the steps in the state’s correction plan was to create professional development activities and training materials that emphasized inclusiveness and making appropriate placement determinations. During the 1-year period of correction, states were required to submit periodic updates to document evidence of improvement for Education’s review. Although corrective action plans have a 1-year timeframe for completion according to Education policy, our analysis showed that most cases of noncompliance addressed through this method remained open for years. We found that only 7 of 30 states with findings of noncompliance visited from 1997 to 2002 had completely resolved their deficiencies as of May 2004. Closure of these cases, that is, resolution of all deficiencies, took from 2 to 7 years from the time the deficiencies were first identified during a monitoring visit. Of the remaining 23 cases, about half have been unresolved for 5 to 7 years. Education officials told us that for almost all of these outstanding cases, states have made progress toward correcting the noncompliance, and 11 states are close to completion. 
Table 3 lists the dates of Education’s monitoring visits, reports, and case closures for the 31 states monitored in the time period 1997 to 2002. In analyzing the time taken to correct noncompliance, we found that the correction process consisted of two phases, each of which frequently took a year or more to complete, as shown in figure 2. The first phase, Education’s issuance of a findings report following its monitoring visit, took a year on average, with a range of 4 to 21 months. Officials in the 3 states we visited that were monitored since 1997 expressed concern about the timeliness of Education’s monitoring reports. Officials from 2 of these states said that the reports contained outdated information that did not reflect the current environment in the state. In addition, state officials in 1 state said that they delayed the development of their corrective action plans until they received Education’s findings report. Education officials told us that staffing constraints and multiple levels of report review contributed to the delays in issuing reports, but they did not set a goal for reducing the time needed to issue reports. The second phase, from report issuance to approval of the corrective action plan, generally took an additional 1 to 2 years. During this time period, states produced an initial corrective action plan that they revised, if needed, based on review and feedback from Education. Education officials acknowledged that this approval process can be lengthy but have indicated they are working to reduce the period for corrective action plan approval to 6 months. Although most instances of noncompliance were addressed without more severe actions, Education occasionally took measures beyond technical assistance and corrective action plans by imposing sanctions on states. During the 10-year period from 1994 to 2003, Education used three types of sanctions: withholding of funds, special conditions, and compliance agreements. 
Withholding of all grant funds was attempted once by Education, but the state successfully challenged Education’s action in court and receipt of the grant was not interrupted. The most commonly used sanction was special conditions put on states’ annual grants stipulating that the problem must be resolved within 1 year. During the 1-year period of correction, the states continued to receive funds. In cases of noncompliance requiring longer time periods to correct, an additional tool available to Education was a compliance agreement, which allowed a state 3 years in which to correct the noncompliance while also continuing to receive funds. Compliance agreements were used only for the Virgin Islands and the District of Columbia. Education officials told us that compliance agreements were used infrequently because they are voluntary and states must agree to the arrangement. States that entered into compliance agreements were also required to undergo a public hearing process to demonstrate that they could not completely address their violations within 1 year. In total, Education has taken enforcement action against 33 states for noncompliance from 1994 to 2003. An action was taken against multiple states for failing to publicly report on the performance of children with disabilities on alternate assessments, as required by the 1997 reauthorization of IDEA. As a result of other compliance issues, Education has imposed 15 sanctions against 11 states in this 10-year period. Appendix II contains more details on enforcement actions taken by Education from 1994 to 2003. Education considers a number of factors in deciding to impose a sanction, including the duration, extent, and severity of the noncompliance, as well as whether a state has made a good faith effort to correct the problem. 
We found that sanctions were imposed for a variety of specific deficiencies, commonly for failing to provide related services, place students in the least restrictive environment, or have an adequate state system in place for detecting and correcting noncompliance at the local level. In New Jersey, special conditions were imposed to address long-standing noncompliance involving state oversight of local special education programs. New Jersey officials told us that the enforcement action caught the attention of senior state officials and helped the special education department obtain the resources needed to correct the problem within 2 years of the imposition of the sanction. When considering the type of sanctions to impose, Education officials told us that their primary consideration is the expected time of resolution. In cases where officials believe the problem can be addressed in 1 year, special conditions may be used. In cases where resolution is expected to take longer, 3-year compliance agreements may be pursued. In cases involving sanctions, the resolution of compliance issues was often prolonged, generally ranging from 5 to 10 years from the identification of the problem through the imposition of the sanction to closure, as shown in figure 3. In most instances, 4 to 9 years elapsed before Education imposed sanctions, and an additional 1 to 3 years generally passed following the sanction before noncompliance was closed. For example, Massachusetts received special conditions on its grant award in 2000 for noncompliance that was first identified in 1991. Once the special conditions were imposed, Massachusetts remedied the noncompliance in 1 year. Education officials indicated that the reason several years often elapsed before sanctions were used was that Education preferred to work with states instead of imposing sanctions if they demonstrated good faith efforts to correct deficiencies and followed the steps outlined in their corrective action plans. 
In addition to those cases that were closed, some ongoing cases have been even more protracted. Although states that receive special conditions attached to their grants are expected to correct problems before the next grant year, in many cases problems were not fully resolved and continued for years. In these cases, states received multiple special conditions for some of the same issues of noncompliance. For example, Pennsylvania received a special condition on its grant for 3 consecutive years beginning in 1998 before achieving compliance on all issues. At the beginning of the 1999 grant year, Pennsylvania had resolved two of the five original issues of noncompliance. Additionally, enforcement actions for California, the District of Columbia, and the U.S. Virgin Islands dating from 1997 and 1998 have not yet been completely resolved. States we reviewed often did not meet the 1-year compliance deadline prescribed by Education, and state officials said that some types of noncompliance could not be corrected within 1 year, a problem that Education officials also acknowledged. Our examination of Education’s records for a sample of 9 states with corrective action plans revealed that none had completely corrected their noncompliance within 1 year of approval of the plan, as required by Education. Likewise, states receiving special conditions on their grant usually did not completely resolve the noncompliance issue within 1 year, and some took numerous years to make the correction. For example, California received special conditions attached to its grant award in 2000 for various deficiencies. The state did not complete the correction of these deficiencies within 1 year and as a result received an additional special conditions letter in 2001. Regarding the 1-year deadline, Education officials told us that some states may not be able to correct deficiencies and demonstrate the effectiveness of the changes within the year required of them. 
In addition, they said that in many cases, a state may take corrective steps within 1 year but that demonstrating the effectiveness of the remedy may extend beyond 1 year. Officials in 3 states we visited also raised concerns that some types of noncompliance could not be corrected within 1 year. For example, Kansas officials said the state could not demonstrate compliance with a requirement to change an IEP component because IEPs are written year-round and thus every IEP could not be changed within the 1-year deadline. Also, Education officials we interviewed emphasized that some deficiencies take longer to correct than others. They commented that states often could correct certain procedural deficiencies within a year, but entrenched problems, such as personnel shortages, generally take more than 1 year to remedy. In cases of noncompliance that require longer periods of time to correct, Education may pursue 3-year compliance agreements with states that allow the states to continue to receive funds while they are correcting noncompliance. This sanction requires states to establish interim goals and engage in longer-term planning, with specific compliance benchmarks and timelines. States that enter into compliance agreements must demonstrate at a public hearing that they cannot achieve compliance within 1 year and that a 3-year time frame for correction is more appropriate. However, this option has been rarely used. One state we visited objected to a compliance agreement Education proposed. The department did not pursue the compliance agreement and, instead, imposed special conditions on the state’s grant approval each year for several years. Officials from this state said that they chose not to enter into a compliance agreement because they considered the additional reporting requirements and monitoring activities it would entail to be too burdensome. 
Education has taken steps in the right direction since 1997 in focusing its review of state support for children with disabilities on those factors that most affect educational outcomes for disabled students, such as increased parental involvement and placement in regular education settings. In recent years, Education has invested considerable effort to assist states in improving data reliability. Furthermore, by reviewing this information through the use of a uniform reporting format, Education is in a better position to make its local site visits yield improvements where they are most needed. Despite these efforts, some of the information states report about their special education programs is weak and not comparable, which limits Education’s ability to select for on-site visits those states that have the most pressing problems. Education made 31 site visits between 1997 and 2002, visiting no more than 8 states in any 1 year. Given such finite opportunities for inspection, it may be easy to miss areas where children are not receiving educational and related services. Aside from targeting the right states for visits, the lengthy resolution process has been a problem in OSEP’s monitoring system. In many instances, noncompliance with the requirements of IDEA has persisted for many years before correction. One reason for the delay is that Education has allowed considerable time to elapse in the initial phases of the correction process; specifically, the time from the first identification of a problem to the imposition of the 1-year time frame for correction. This considerable delay—sometimes taking up to 21 months between problem identification and the issuance of department findings—could result in states postponing the implementation of corrective plans. Although the initial phases of the correction process can be lengthy, Education’s 1-year deadline for states to correct deficiencies is, at times, too short for states to achieve compliance. 
Unrealistic timeframes may discourage states from focusing on achievable, albeit longer-term, plans for correction. These unrealistic timeframes may also lessen the impact of the enforcement action itself, as in the case where special conditions are imposed for infractions year after year with few consequences to the state, but potentially detrimental consequences to students with disabilities. The imposition of appropriate deadlines, including the more frequent use of compliance agreements that allow for better long-term planning and predictable consequences when these deadlines are not met, could motivate states to achieve compliance more quickly. The combined effect of such prolonged reviews—lengthy timeframes for the receipt of reports and the approval of corrective action plans—and failure to hold states more firmly to a rapid resolution could directly affect the progress of some of the nation’s most vulnerable children. Without some deliberate and specific improvements to its monitoring process, Education may face difficulties in helping the nation’s disabled students realize their full potential. We recommend that the Secretary of Education:
- develop and provide states with additional guidance for collecting and reporting three measures that Education considers key to positive outcomes for students with disabilities: early childhood transitions, post-secondary transitions, and parental involvement;
- expedite the resolution of noncompliance by improving response times throughout the monitoring process, particularly in reporting noncompliance findings to states, and track changes in response times under the new monitoring process;
- impose firm and realistic deadlines for states to remedy findings of noncompliance; and
- when correction of noncompliance is expected to take more than 1 year, make greater use of Education’s authority to initiate compliance agreement proceedings rather than imposing special conditions on grants. 
We provided a draft of this report to the Department of Education for review and comment. Education’s written comments are reproduced in appendix III. Recommended technical changes have been incorporated in the text of the report as appropriate. The department discussed, but did not explicitly agree or disagree with, two recommendations; disagreed with one recommendation; and did not directly respond to the recommendation regarding imposing firm and realistic deadlines. In response to our recommendation that Education provide states with additional guidance for collecting and reporting data on student transitions and parental involvement, the department was not explicit about its intended actions. While Education agreed with the need to provide states assistance in these areas, it did not clearly indicate whether it would develop the guidance we recommended. Education said that it is funding several centers that are assisting states in collecting data in these areas. We commend Education’s efforts to improve special education performance data. However, to maximize the usefulness of these efforts, the department should formalize the results of these activities in guidance. Therefore, we continue to recommend that Education develop and provide states with guidance on collecting and reporting student transitions and parental involvement data. Regarding our recommendation to improve the department’s response times throughout the monitoring process, Education acknowledged past problems with timeliness but indicated that it had made improvements in recent years. Education stressed that the reports we reviewed were based on its previous monitoring processes, rather than the current process, the Continuous Improvement and Focused Monitoring System (CIFMS). The department said that timeliness had improved in several areas. 
For instance, Education said that the time required to issue its data verification monitoring reports has been about 4 months and that this is a substantial improvement over the previous system. Education also said that the CIFMS is resulting in the timely receipt of and response to state improvement plans and that the department has a goal to issue responses to all plans by September 30, 2004. However, we could not determine whether, overall, Education’s new CIFMS monitoring process will result in improved timeliness. Education officials told us that the data verification visits primarily focused on accuracy of state data, not detecting noncompliance. Therefore, timeliness associated with these visits may not be an indication of overall improvement. In addition, the timeliness of the focused monitoring visits has not been established since they have not yet begun. We believe that Education’s response times should be improved, but we could not determine the extent to which changes already made might impact timeliness. Therefore, we modified our recommendation to suggest that Education track timeframes associated with various steps in the new monitoring process to substantiate possible improvements. In response to our recommendation that Education make greater use of its authority to initiate compliance agreement proceedings when appropriate, the department said that it cannot independently initiate these proceedings because the compliance agreement process is voluntary on the part of the states. We do not agree with this position. The relevant statute specifically authorizes the department to hold a hearing and directs it to invite certain parties, including the state. While the department cannot compel a state to enter into a compliance agreement, we think initiating proceedings to consider the merits of entering into such an agreement could likely result in beneficial corrective action discussions between the department and the state. 
It could also result in greater reliance on the 3-year compliance agreement or at least improve corrective action planning by the state. While Education may impose other remedies such as partial or full withholding of funding, issuing a cease and desist order, or referring a state’s noncompliance to the Department of Justice, we believe that in many instances of noncompliance, the 3-year compliance agreement could be the least onerous, and perhaps most helpful, tool to improve state compliance with IDEA. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies of this report to the Secretary of Education and the House and Senate Committees with oversight responsibility for the department. We will also make copies available to other parties upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Major contributors to this report are listed in appendix IV. As requested, our review focused on the Department of Education’s monitoring of the Individuals with Disabilities Education Act (IDEA), Part B, those aspects of the law that regulate the provision of services to disabled school-aged and preschool children. In conducting our review, we examined Education’s monitoring procedures and guidance since Congress last amended IDEA legislation in 1997. Additionally, we examined reports submitted to Education to document compliance, including self-assessments, Section 618 data reports, and improvement plans. We also reviewed compliance and enforcement documentation and Education monitoring reports for the 31 states visited for Part B monitoring since 1997 (see below for more information). 
Because Education infrequently used sanctions, we examined the previous 10-year period to capture a more comprehensive picture of enforcement actions. Additionally, we conducted site visits to five states, where we interviewed state officials and special education experts. We also interviewed Education officials; representatives from the National Council for Disability, an independent federal agency that makes recommendations to the President and Congress on disability-related issues; and representatives from national education organizations. We conducted our work between September 2003 and August 2004 in accordance with generally accepted government auditing standards. We conducted site visits to five states: California, Georgia, Kansas, New Jersey, and Texas. States were selected for variation in the number of special education students served, geographic location, the date of their last monitoring visit, whether they had received a data verification visit, and whether they had been placed under sanctions by Education for IDEA violations since 1993. Additionally, in selecting states, we considered how states ranked on various Education risk factors, including student placement rates, graduation rates, drop-out rates, and level of state complaints. We conducted our site visits between December 2003 and March 2004. While in each state, we analyzed state monitoring documents and met with officials at states’ Departments of Education, including the State Directors of Special Education and members of their staff responsible for monitoring efforts. We interviewed these officials about their experiences with Education’s monitoring processes and gathered information about the systems used by their states to monitor local compliance with IDEA. Additionally, in each state we spoke with members of the state stakeholder committees, which help state officials conduct their self-assessments and create improvement plans. 
Stakeholders we spoke with included parents of special education students, special education and school administrators, and special education advocates. To determine the nature of noncompliance in those states selected by Education for review, we analyzed the reports issued by Education for the 31 Part B monitoring visits Education made between 1997 and 2002. A 2002 cut-off date was selected because at the time of our analysis, Education had not yet issued a monitoring report for the one state it visited in 2003. To analyze these reports, we reviewed the noncompliance findings cited in these reports and divided the findings into two categories: those relating to infractions that were service-related and those relating to infractions that were procedural in nature. For our analysis, we defined a service compliance issue as an activity that directly provides the student with a basic service required by IDEA or is an activity that will immediately facilitate the provision of a basic service required by IDEA. A procedural compliance issue was defined as an activity that meets a process-oriented requirement of IDEA. While the implementation of these process-oriented requirements might improve the special education program immediately or over time, the activity or process does not directly provide or immediately facilitate a basic service to a student. To determine the results of Education’s efforts to remedy noncompliance, we reviewed Education documents and data pertaining to the 30 states visited between 1997 and 2002 that were cited for noncompliance in Education monitoring reports. Specifically, we analyzed the 30 monitoring reports; and available Education documents such as corrective action plans submitted by states in response to report findings; notification documents from Education approving state plans; state-submitted evidence of change in noncompliance; and, when applicable, notifications to states when noncompliance had been sufficiently addressed. 
To determine the length of time it took to resolve cases of noncompliance through monitoring visits and technical assistance, we analyzed these documents for dates and deadlines. We computed the length of time for resolution from the date of the monitoring visit until the date Education documented resolution of the problem. To obtain information about Education’s enforcement efforts, we reviewed all cases of enforcement action taken against states by Education from 1994 to 2003. For our review, we viewed enforcement actions as beginning at the time a sanction was first imposed, regardless of how many subsequent times a sanction was used to ultimately bring about compliance. That is, if a state received multiple sanctions for the same infraction, such as several special conditions letters in consecutive years, we viewed all of these individual enforcement actions as one action. Likewise, if a state received one 3-year compliance agreement, while another state received three consecutive special conditions letters for the same infractions, we treated both instances as one enforcement case. For all enforcement cases, we analyzed available Education documents, such as notifications of sanctions, including state grant award letters subject to special conditions and compliance agreements; state-submitted evidence of change to demonstrate compliance; and Education’s correspondence to states notifying them when noncompliance had been sufficiently addressed, thus closing the enforcement cases. Additionally, we examined past monitoring reports to determine when Education first identified noncompliance that ultimately resulted in an enforcement action. In those instances when noncompliance was not identified through a monitoring visit, we used the date of the enforcement action as the date that the noncompliance was first identified for the purposes of our analysis. 
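The case-consolidation rule above (repeated sanctions for the same infraction count as a single enforcement case, timed from first identification to documented resolution) can be sketched in code. The records, field names, and dates below are hypothetical illustrations, not actual case data:

```python
from datetime import date

# Hypothetical sanction records: each time Education imposed a sanction,
# keyed by state and infraction. Multiple sanctions for the same
# infraction are treated as ONE enforcement case.
sanctions = [
    {"state": "A", "infraction": "fiscal management", "imposed": date(1998, 10, 1)},
    {"state": "A", "infraction": "fiscal management", "imposed": date(1999, 10, 1)},
    {"state": "A", "infraction": "fiscal management", "imposed": date(2000, 10, 1)},
    {"state": "B", "infraction": "timely evaluations", "imposed": date(1997, 10, 1)},
]
# Documented resolutions; state B's case is still open in this example.
resolutions = {("A", "fiscal management"): date(2001, 6, 15)}

# Collapse sanctions into cases: one case per (state, infraction),
# dated from the earliest sanction imposed.
cases = {}
for s in sanctions:
    key = (s["state"], s["infraction"])
    cases[key] = min(cases.get(key, s["imposed"]), s["imposed"])

for key, first_imposed in sorted(cases.items()):
    resolved = resolutions.get(key)
    if resolved:
        years = (resolved - first_imposed).days / 365.25
        print(f"{key}: resolved after {years:.1f} years")
    else:
        print(f"{key}: not yet resolved")
```

With these illustrative dates, state A's three consecutive sanctions collapse into one case resolved after roughly 2.7 years, while state B's case remains open.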
In all cases, we analyzed documentation for dates and deadlines to determine the length of time it took to resolve cases of noncompliance through sanctions. The issues of noncompliance and their resolution status were as follows:
- Not yet resolved; year has not yet expired.
- Not yet resolved.
- Timely resolution of complaints: Resolved in 2000.
- General supervision; identification and correction of deficiencies, IEP violations, provision of related services and least restrictive environment: Resolved in 2002.
- Ensuring ability to request due process hearings: Resolved in 2003.
- Eleven areas of noncompliance, including general supervision; due process hearings; timeliness of evaluations and placements; and provision of free appropriate public education in the least restrictive environment: Not yet resolved. Three findings remain open in July 2003: timely implementation of hearing decisions, placement in the least restrictive environment, and timely evaluations.
- Fiscal management: Not yet resolved; year has not yet expired.
- Ensuring IEP components are determined at an IEP meeting with all required participants; placement of students in least restrictive environment: Resolved in 2001.
- General supervision; identification and correction of deficiencies: Resolved in 2001.
- Fiscal management: Not yet resolved; year has not yet expired.
- General supervision; identification of deficiencies; placement of students in least restrictive environment, provision of extended school year and speech services: Resolved in 2001.
- Outstanding issues from 1993 compliance agreement regarding timely evaluations and provision of services: Resolved in 1998.
- Fiscal management: Not yet resolved.
- Provision of services to students with disabilities who were expelled or suspended long-term: 1994, attempt to withhold funds, action contingent on outcome of court case; 1995, attempt to withhold funds, action contingent on outcome of ongoing court case; 1996, attempt to withhold funds, action contingent on outcome of ongoing court case. 
Education was ultimately unsuccessful in court and funds were not withheld. However, subsequent changes in IDEA rendered the disputed issue moot, as all states were required by statute to provide services to disciplined students.
- Fiscal and program management; general supervision; qualified personnel; placement of students in least restrictive environment; provision of transportation services: Not yet resolved.
- Participation and reporting on alternate assessments: 2002, special conditions imposed against 27 states; 2003, special conditions imposed against 11 unresolved states, plus 1 additional state, Ky. Sixteen of 27 states resolved in 2003; the 11 remaining states plus Ky. not yet resolved. The states were Alaska, Bureau of Indian Affairs, Colo., Commonwealth of the Northern Mariana Islands, Del., D.C., Guam, Ky., Maine, Mich., P.R., and Utah.
The following people also made important contributions to this report: Ellen Soltow, Summer Pachman, Behn Kelly, Susan Bernstein, and Walter Vance.
Related GAO products:
- Special Education: Additional Assistance and Better Coordination Needed among Education Offices to Help States Meet the NCLBA Teacher Requirements. GAO-04-659. Washington, D.C.: July 15, 2004.
- Special Education: Clearer Guidance Would Enhance Implementation of Federal Disciplinary Provisions. GAO-03-550. Washington, D.C.: May 20, 2003.
- Special Education: Numbers of Formal Disputes Are Generally Low and States Using Mediation and Other Strategies to Resolve Conflicts. GAO-03-897. Washington, D.C.: September 9, 2003.
- School Dropouts: Education Could Play a Stronger Role in Identifying and Disseminating Promising Prevention Strategies. GAO-02-240. Washington, D.C.: February 1, 2002.
- Student Discipline: Individuals With Disabilities Education Act. GAO-01-210. Washington, D.C.: January 25, 2001.
The Individuals with Disabilities Education Act (IDEA) ensures the education of the nation's disabled children. As a condition of receiving IDEA funds, states must provide educational and related services that facilitate learning to students with disabilities based on their individual needs. The Department of Education (Education) is responsible for ensuring state compliance with the law. In recent years, questions have been raised about Education's oversight of IDEA. GAO agreed to determine how Education monitors state compliance with IDEA for children aged 3-21, the extent and nature of noncompliance found, and how Education has ensured that noncompliance is resolved once identified. GAO analyzed Education monitoring documents, interviewed state and federal officials, and visited 5 state special education offices. To monitor compliance with IDEA provisions that affect children aged 3-21, Education annually reviews special education data submitted by all states and uses a risk-based approach to identify those states in need of further inspection. This monitoring system relies upon collaboration with states, as each state is responsible for assessing and reporting its performance on the provision of special education services. However, some of the data used by Education, such as information about how parents are included in their children's education and students' experiences after they leave school, are weak in that they are not uniformly measured or are difficult for states to collect. In states Education visited for further inspection from 1997-2002, the department identified roughly equal amounts of noncompliance for failing to adequately provide services to students as noncompliance for not adhering to IDEA's procedural regulations, according to GAO analysis. Education found a total of 253 compliance failures in 30 of the 31 states visited during this period, with an average of approximately 8 across the 30 states. 
GAO found 52 percent of compliance failures to be directly related to providing student services, for instance counseling and speech therapy. The remaining 48 percent involved a failure to meet certain IDEA procedural requirements. Once deficiencies were identified, Education has sought resolution by providing states with technical assistance and requiring them to develop corrective action plans that would ensure compliance within 1 year. However, GAO found that most cases of noncompliance had remained open for 2 to 7 years before closure, and some cases still remain open. GAO's examination of Education documents showed that a considerable amount of time elapsed in each phase of the correction process, including Education's issuance of noncompliance findings and approval of correction plans. On occasion, Education has also made use of sanctions to address longstanding issues with noncompliance, but in these cases, too, resolution has been protracted. States expressed concerns about the standard 1-year timeframe Education imposes for correction, and Education officials acknowledged that it is sometimes not feasible for states to remedy noncompliance and demonstrate effectiveness in that length of time.
The FEHBP is the largest employer-sponsored health insurance program in the country. In 2012, it provided $42.6 billion in health care benefits to about 8 million individuals. OPM contracts with carriers to provide this coverage. Carriers offer plans in which eligible individuals may enroll to receive health care coverage. For the 2012 plan year, FEHBP options included 10 fee-for-service plans that were available nationwide, 4 plans available only to employees of certain federal agencies (e.g., the Foreign Service), 164 plans offered by health maintenance organizations that were available in certain regions (but not the entire country), 15 high-deductible plans, and 13 consumer-driven plans. Most enrollees could choose from about 6 to 15 plans. The majority of FEHBP policyholders— more than 60 percent—were in plans offered by the Blue Cross and Blue Shield Association. The next largest carriers or groups of carriers in terms of FEHBP enrollment were GEHA and Kaiser Permanente, each with between 5 percent and 10 percent of FEHBP policyholders. Through their contributions toward premiums, the federal government and enrollees bear a portion of the cost of FEHBP fraud and abuse programs. Generally, as set by statute, the government pays 72 percent of the average premium across all FEHBP plans, but no more than 75 percent of any particular plan’s premium. Enrollees pay the balance. Premiums are intended to cover enrollees’ health care costs, plans’ administrative expenses (including expenses associated with fraud and abuse programs), reserve accounts specified by law, plan profits, and OPM’s administrative costs. OPM negotiates plan premiums with carriers and establishes premiums in one of two ways: Experience-rated carriers set their premiums based on their experience, that is, their actual costs of providing health care services and the costs of administrative services. 
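The statutory cost-sharing rule described above (the government pays 72 percent of the program-wide average premium, capped at 75 percent of any particular plan's premium, with the enrollee paying the balance) can be expressed as a small calculation. The premium figures below are hypothetical, chosen only to show when each limit binds:

```python
def government_share(plan_premium: float, avg_premium: float) -> float:
    """Government contribution toward one plan's premium:
    72% of the average premium across all plans, but never more
    than 75% of this particular plan's premium."""
    return min(0.72 * avg_premium, 0.75 * plan_premium)

avg = 6000.0  # hypothetical average annual premium across all FEHBP plans

# For an expensive plan, the 72%-of-average rule binds:
expensive = government_share(8000.0, avg)  # min(4320, 6000) = 4320
# For a cheap plan, the 75%-of-plan cap binds:
cheap = government_share(5000.0, avg)      # min(4320, 3750) = 3750

print(f"expensive plan: government {expensive}, enrollee {8000.0 - expensive}")
print(f"cheap plan:     government {cheap}, enrollee {5000.0 - cheap}")
```

Either way, the enrollee pays whatever remains of the plan's premium after the government contribution.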
Experience-rated carriers may offer fee-for-service plans or they may be local health maintenance organizations. Of the $42.6 billion in expenses incurred by the FEHBP in 2012, the majority—84 percent—was for the benefits and administrative expenses of experience-rated carriers. Community-rated carriers are generally health maintenance organizations that set their FEHBP premiums based on a documented methodology that is applied to other groups of insured individuals in the same geographic community. These carriers receive fixed payments—the premiums—for each enrollee, rather than receiving payments for services rendered. Administrative costs are included in the payment rate. In fiscal year 2012, the FEHBP paid $6.7 billion to community-rated carriers. Carriers’ costs for fraud and abuse programs are included with other administrative costs (for experience-rated carriers) or within the fixed payments (for community-rated carriers). As a result, the amounts that carriers spend to prevent, detect, or correct fraud and abuse are not clearly identifiable. OPM is required to administer contingency reserve funds for FEHBP carriers, which can help avoid major fluctuations in the FEHBP premiums from year to year. OPM administers a contingency reserve fund in the U.S. Treasury for each FEHBP plan, and unexpended contingency reserves are carried forward. Experience-rated carriers may draw upon their individual contingency reserve funds if claims are larger than anticipated, or, if the balance is large enough, to avoid or reduce a premium increase for the following year. For community-rated carriers, OPM may negotiate an adjustment to the plan’s rates under certain circumstances and use the contingency funds to pay for the adjustment. For example, if the community rate changes between the time the carrier estimated its rates (generally in spring) and the time that coverage through the plan became effective (the following January), OPM negotiates an adjustment. 
OPM can adjust carriers’ profits based on performance, including the performance of fraud and abuse programs, using mechanisms that differ by the type of plan. There is no minimum profit. For experience-rated carriers, OPM negotiates the plan’s profit rate by determining a service charge using a process outlined in regulation that takes the plan’s performance into consideration. The service charge (or profit) for experience-rated carriers may not exceed 1.1 percent of the plan’s projected claims and administrative fees. For community-rated carriers, profits reflect the difference between the premiums and the actual costs. Because they are not capped at a percentage of projected costs, profits for community-rated carriers may be greater than profits for experience-rated carriers. Regulations specify a process for OPM to consider a community-rated plan’s performance and assess a penalty of up to 1 percent of the total premium payment. These penalties, which reduce the carrier’s profits, may be assessed for noncompliance with contract requirements, including fraud and abuse program requirements. (Service charges apply only to experience-rated plans, while penalties apply only to community-rated ones.) Fraud and abuse in FEHBP plans affect the government, enrollees, and carriers because fraud and abuse can add to premium costs, reduce program reserves, or both. In general, however, carriers have an incentive to minimize fraud and abuse because raising premiums may make their plans less appealing to potential enrollees than plans offered by FEHBP carriers that have less fraud and abuse. Community-rated carriers, which receive fixed payments for each enrollee rather than payments for services rendered, have an additional financial incentive to establish effective fraud and abuse programs because they can keep any savings above and beyond the cost of establishing and maintaining the programs. 
All carriers are susceptible to fraud and abuse, although the specific vulnerabilities vary by the type of carrier. Experience-rated carriers that offer fee-for-service plans are at particular risk for forms of fraud or abuse associated with excess payments for care, for example, through billing for services that are not medically necessary. Community-rated carriers, which receive fixed payments for each enrollee rather than payments for services rendered, are at particular risk for forms of fraud or abuse that reduce the costs of providing care, for example, through inappropriate dilution of medications. If a carrier’s reserves are insufficient to cover its costs, including costs associated with fraud or abuse, the carrier must fund its losses. According to OPM officials, losses due to fraud and abuse would have to be substantial relative to legitimate costs for a carrier’s reserves to prove insufficient. Two offices within OPM have key responsibilities involving fraud and abuse within the FEHBP. The Healthcare & Insurance—Federal Employee Insurance Operations office, the contracting office, is responsible for administering the FEHBP, contracting with carriers, and overseeing carriers’ compliance. Oversight of carriers’ compliance with requirements and guidance related to fraud and abuse is part of the broader responsibility to ensure compliance. According to OPM officials, 7 contract officers, 16 contract specialists, and 7 audit resolution staff within the contracting office have general and discrete oversight responsibilities, in addition to staff in the office’s supporting branches (such as those providing program analyses and systems support). The OIG has responsibilities that involve two aspects of FEHBP fraud and abuse efforts. First, as a law enforcement agency, the OIG’s Office of Investigations may investigate potential fraud and abuse within the FEHBP. (App. 
I provides information on carrier and OIG fraud and abuse reporting requirements.) Second, as an oversight entity, the OIG’s Office of Audits conducts audits of FEHBP carriers. OIG officials told us that until recently the OIG’s audits of FEHBP carriers generally focused on audits of carriers’ claims and payments and not on the extent to which the plans comply with fraud and abuse program requirements. The OIG has begun including an in-depth examination of a carrier’s fraud and abuse program in some of its audits. These audits have included reviews of policies and procedures and reviews of files to determine whether potential fraud and abuse cases were reported as required. After conducting in-depth audits of the fraud and abuse programs of three of the larger FEHBP experience-rated carriers, the OIG questioned their effectiveness, in part because the programs’ outcomes, in terms of the prosecution of fraud cases and recovery of defrauded funds, were minimal. The OIG's findings from these audits included instances of failure to provide required notice of potential fraud or abuse, failure to report the amount of all recoveries of defrauded funds, and failure to include all relevant expenses when reporting the cost of anti-fraud activities. OPM requires FEHBP carriers to establish programs to prevent, detect, and eliminate fraud and abuse. FEHBP contracts contain minimum requirements for fraud and abuse programs; according to officials from OPM’s contracting office, these requirements accommodate differences in carrier characteristics and so allow flexibility in fraud and abuse program implementation. 
Each carrier is required by contract to:
- conduct a program to assess vulnerability to fraud and abuse;
- operate a system designed to detect, eliminate, and follow up on fraud and abuse;
- submit a report on fraud and abuse by March 31 of each year;
- demonstrate that a statistically valid sampling technique is used routinely to compare FEHBP claims against the carrier’s quality assurance standards and its fraud and abuse prevention standards;
- maintain records of fraud prevention activities;
- implement any corrective actions ordered by an OPM contracting officer to correct a deficiency in its fraud prevention program;
- provide timely notification to the OIG of credible evidence of a violation of federal criminal law involving fraud found in Title 18 of the U.S. Code by a principal, employee, agent, or subcontractor; and
- provide timely notification to the contracting officer of any significant event, including fraud, that might reasonably be expected to have a material effect on the carrier’s ability to meet its obligations.
OPM also uses letters to carriers to issue requirements and guidance. For example, one carrier letter imposes requirements for reporting potential fraud and abuse when a carrier has a reasonable suspicion that fraud against the FEHBP has occurred or is occurring. (As indicated above, app. I provides more information on carrier and OIG fraud and abuse reporting requirements.) Another carrier letter presented guidance on certain nonrequired standards for fraud and abuse programs. Specifically, OPM identified a set of eight industry standards for fraud and abuse programs (see text box), and in 2003, it issued a letter to carriers indicating that it would like carriers to implement these standards. Federal Acquisition Regulations require that each FEHB carrier perform the contract in accordance with prudent business practices, which include timely compliance with OPM instructions and directives. 48 C.F.R. § 1609.7001(b)(1), (c)(4). 
Therefore, carriers must comply with requirements contained in carrier letters. OPM’s contracting office staff conduct several activities to monitor carriers’ compliance with fraud and abuse program requirements and agency guidance, including reviewing carriers’ annual reports to OPM’s contracting office, conducting site visits, reviewing and resolving OIG audit findings, and reviewing disputed claims and enrollee complaints. Contract officers use the information from these efforts to oversee carriers and to determine carriers’ service charges and penalties. Officials from OPM’s contracting office told us that contracting office staff assess carriers’ compliance with fraud and abuse program requirements and guidance, and monitor carriers’ performance by reviewing annual reports from carriers. In one routinely submitted report, the carrier describes its fraud and abuse program, including operational information; organizational structure; certain budget and cost allocation information; and performance indicators, such as how the carrier measures the performance of its antifraud efforts. Officials from OPM’s contracting office told us that they review this report and assess the information contained in the report against fraud and abuse program requirements to determine the extent to which carriers met, and were thus in compliance with, requirements. For example, OPM contracting office staff assess the reported information, such as the criteria the carrier uses for notifying the OIG of a potential fraud case, against OPM’s requirements for reporting cases of potential fraud to the OIG. A second report, specifically required by contract, provides additional information about the carrier’s fraud and abuse program and the carrier’s fraud and abuse activities and outcomes involving the FEHBP during the year. The report contains a checklist showing which of the nonrequired fraud and abuse industry standards the carrier and any subcontractors implemented. 
Officials from OPM’s contracting office told us that they assess this information against the fraud and abuse program guidance to determine the extent to which carriers implemented the recommended standards and to follow up with those that have not implemented them. For example, according to officials from OPM’s contracting office, a contracting officer contacts a carrier whose report indicates that it did not implement one of the components of a fraud and abuse program, such as having an antifraud policy statement. In following up with the carrier, the contracting officer indicates that OPM expects the carrier to implement the component and may conduct a site visit, request an OIG audit, or meet with the carrier to review evidence to confirm that the carrier has come into compliance with OPM’s expectation. Our review of a summary of carriers’ reports containing their responses to the industry standards checklist for 2012 indicated that most carriers submitted the report as required and their responses indicated compliance with fraud and abuse program guidance. Officials from OPM’s contracting office told us that, as part of their oversight, contracting officers follow up with carriers whose reports suggest possible noncompliance for the purpose of bringing them into compliance. However, OPM contracting office staff did not follow up with carriers that had not submitted their reports or whose reports indicated program deficiencies until July 2013, after we inquired about these carriers’ reports. Specifically, although 5 carriers did not submit reports by March 31, as required by contract, OPM contracting office staff did not begin following up with 4 of these carriers until July 2013, in response to our inquiry about OPM’s follow-up actions to obtain these reports. Based on 2011 enrollment data, we estimate that these carriers together accounted for about 0.1 percent of FEHBP enrollees. 
Most carriers submitted timely reports indicating that either they or a subcontractor had implemented the recommended, nonrequired industry standards for fraud and abuse programs. However, 7 carriers submitted reports indicating that neither they nor a subcontractor had implemented one or more of those standards. For example, 2 of the 7 carriers indicated that neither they nor a subcontractor had implemented a strategy for educating enrollees about fraudulent and abusive practices and 1 of these carriers had not published an antifraud policy statement or conducted formal fraud awareness training with all its employees. Based on 2011 enrollment data, we estimate that these 7 carriers together accounted for 0.8 percent of FEHBP enrollees. OPM contracting staff did not begin following up with these carriers until July 2013, after our inquiry, and as of August 2013 were still following up with 1 of these carriers. OPM contracting office staff also assess carriers’ compliance with fraud and abuse program requirements and guidance during periodic site visits. In contrast to the reviews of annual reports, which are performed remotely, site visits provide an opportunity for contracting office staff to collect, inspect, and follow up on fraud and abuse program information on-site. During site visits, OPM contracting office staff review carriers’ fraud and abuse program documents and information systems, conduct face-to-face meetings with carrier staff, and evaluate the extent to which the carrier’s program meets fraud and abuse program requirements and guidance. For example, during a site visit, OPM contracting office staff may review the carrier’s program and system for fraud prevention and detection, staffing, fraud awareness training, and examples of fraud and abuse program activities. As a result of their review, contracting office staff may recommend areas for improvement or note best practices. 
OPM contracting office staff conducted site visits that covered 96 carriers’ plans from 2008 through 2012 using a risk-based site selection strategy that included carrier type, enrollment, and special circumstances. Officials from OPM’s contracting office told us that although contracting officers select experience-rated carriers for site visits every 3 to 5 years, they may also select any experience-rated carrier for a site visit if the carrier experiences consistent or urgent problems. According to officials from OPM’s contracting office, in 2012, the 20 experience-rated carriers selected for site visits accounted for 69 percent of FEHBP enrollment. In addition, the officials told us that OPM contracting office staff select community-rated carriers, which generally have smaller enrollments, for site visits at their discretion and as OPM resources allow. The 7 community-rated carriers selected for site visits accounted for 0.55 percent of FEHBP enrollment, according to OPM contracting office officials. OPM contracting office staff review and resolve OIG audit findings as part of their oversight of carriers’ fraud and abuse programs. In comparison to reviews of annual reports of carriers’ self-reported compliance by OPM contracting office staff, OIG audit findings identify areas of noncompliance through an independent, on-site evaluation. Although site visits by both OPM contracting office staff and the OIG assess carrier compliance, only OIG audits assess the extent to which requirements and guidance have been implemented as the agency intended, according to OPM contracting office and OIG officials. OIG audits may also result in recommendations to OPM contracting officers to oversee carriers’ implementation of corrective actions intended to bring carriers into compliance. 
OPM contracting office staff provide input to the OIG on planned audits as part of the OIG’s risk-based audit selection strategy and may also request that the OIG conduct a special audit on an area of concern. For example, OPM contracting staff asked the OIG to assess one carrier’s internal controls for preventing and detecting illegal practices and made a request for a special audit to ensure one carrier’s compliance with contractual requirements. In June 2013, OIG officials told us that they had conducted eight audits that included findings related specifically to a carrier’s compliance with fraud and abuse program requirements. OPM contracting office staff address audit recommendations by overseeing carriers’ implementation of corrective action(s) in response to audit findings. To do so, contracting office staff review OIG audit findings and carriers’ responses to audit findings, including corrective actions and documentation supporting their implementation. For example, in one audit we reviewed, the OIG recommended that an OPM contracting officer verify that a carrier implements current policies and procedures regarding communication of information about potential fraud and abuse and develops and implements criteria for follow-up actions on reported cases of potential fraud or abuse. Officials from OPM’s contracting office told us that the carrier provided extensive documentation of its corrective actions in response to audit findings. The officials also reported that OPM contracting office staff are working with other carriers to address findings from recent audits and close the resulting recommendations. In addition to oversight of individual carriers, officials from OPM’s contracting office told us that audit findings and recommendations help them identify fraud and abuse program areas to focus on more broadly. 
For example, OPM contracting office staff identified carriers’ sharing of information about potential fraud and abuse as an area of concern after audit findings indicated that certain carriers were not communicating information about their fraud and abuse program activities as required. As a result, officials from OPM’s contracting office told us that they are reviewing documents from the OIG related to reporting and are working to determine whether the current reporting requirements are sufficient. As of August 2013, officials did not have a timeline for completion of this activity. OPM contracting office staff review disputed claims and enrollee complaints to identify indicators of potential fraud or abuse, among other things. Officials told us that contracting officers’ reviews of disputed claims may reveal suspicious patterns of drug utilization, multiple complaints involving a single provider, or other indicators of potential fraud or abuse. In addition to reviews of enrollees’ disputed claims, OPM contracting office staff review enrollees’ complaints about other aspects of the FEHBP, which could also indicate potential fraud or abuse, and they intervene as necessary. For example, in response to an FEHBP enrollee’s complaint regarding a carrier’s request for sensitive information, OPM contracting office staff examined and confirmed the legitimacy of the request. OPM’s contracting office staff use information from their monitoring activities—including their reviews of carriers’ annual reports, site visits, reviews of OIG audit findings, and reviews of disputed claims and enrollee complaints—when they determine carriers’ service charges or penalties. Officials from OPM’s contracting office told us that service charges or penalties may be adjusted to reflect noncompliance with fraud and abuse program requirements. 
For experience-rated carriers, up to 45 percent of the service charge is based on contractor performance, which includes failure to comply with contractual requirements. For community-rated carriers, 30 percent of the penalty determination is based on compliance with contractual requirements, with about half of that based on the timeliness of report submissions, including submissions of fraud and abuse reports. Officials from OPM’s contracting office provided us with several examples of their use of information from their monitoring activities when determining service charges: OPM contracting office staff used an experience-rated carrier’s failure to meet site visit scheduling, goals, and turn-around time as a negative factor and the carrier’s reduction in the total number of disputed claims submitted to OPM as a positive factor when determining the carrier’s 2012 service charge. OPM contracting office staff used the OIG’s audit findings that a carrier was not in compliance with fraud and abuse reporting requirements as a negative factor when determining the service charge for that carrier in 2013. OPM contracting office staff review certain outcomes of carriers’ fraud and abuse programs, but program outcomes do not provide complete information about program effectiveness. Instead, the program outcomes that OPM contracting office staff review provide partial information about carriers’ fraud and abuse programs and may be useful for reporting program accomplishments. For example, in 2011, carriers reported that the outcomes of their fraud and abuse programs included 29 criminal convictions and more than $23 million in recoveries to FEHBP. However, program outcomes do not measure the success of strategies intended to prevent or minimize fraud and abuse such as systems for preauthorization or precertification. 
OPM officials reported that although they are concerned about program outcomes that reflect the past, they place emphasis on ensuring that carriers have preventive antifraud strategies, such as prepayment controls (including system edits, preauthorizations, and precertifications), in place to prevent future occurrences of fraud and abuse. OPM contracting office staff also collect information about how carriers assess their antifraud activities, in part to determine whether carriers routinely assess their own antifraud programs. OPM contracting office staff reported that they have not adopted specific measures of program effectiveness for FEHBP fraud and abuse programs because they have not identified an appropriate way to measure the effectiveness of antifraud programs. Several factors contribute to difficulties in developing a measure of effectiveness for health care antifraud programs. These factors include the following:
- A lack of information about the baseline amount of fraud and abuse in health care. We have previously reported that there is no reliable baseline estimate of the amount of health care fraud in the United States. Similarly, officials from OPM’s contracting office told us that they cannot estimate the amount of fraud within FEHBP. A baseline estimate could provide an understanding of the extent of fraud and, with additional information on fraud and abuse program activities, could help to clarify the effectiveness of antifraud program activities.
- The difficulty of establishing a causal link between antifraud activities and the amount of health care fraud or abuse. Although the efforts of federal agencies and nongovernment entities may help to reduce health care fraud, it is difficult to isolate the effect of any individual action or to clearly establish that any change in the amount of fraud was due to any specific cause. Thus, for example, a carrier’s implementation of a specific fraud detection strategy may deter certain types of fraud, but a reduction in those types of fraud could also have been due to other causes, such as changes in laws or in prosecutorial practice.
- The difficulty of measuring the effect of efforts to prevent or deter fraud and abuse. Officials from OPM’s contracting office reported that they believe that FEHBP fraud and abuse program requirements and their oversight of these programs have helped to limit the amount of fraud in FEHBP. Officials and industry experts said, however, that it is difficult to measure how much fraud or abuse is deterred or prevented. For example, FEHBP carriers may place restrictions on their provider networks to prevent individuals who are intent on fraud from enrolling as providers, but it is difficult to measure the amount of fraud or abuse that was prevented as a result of such restrictions.

Despite the challenges involved in assessing the effectiveness of antifraud activities, OPM officials acknowledged the importance of ensuring the prevention, detection, and correction of fraud and abuse within the FEHBP. Others in the health care sector also acknowledge the importance of measuring the effectiveness of antifraud activities and are working to develop appropriate measures. For example, CMS recently began an effort to estimate probable fraud involving specific home health care services in Medicare that could provide information on the extent of fraud that currently exists and, in coming years, how it has changed over time. Establishing a baseline may make it feasible to study how antifraud activities affect the level of home health care fraud. In addition, OPM officials told us that they are continuing to monitor public and private sector efforts to develop measures of the effectiveness of antifraud activities. 
If reliable and valid measures of the effectiveness of antifraud programs are identified, it will be important for OPM to determine whether those measures are appropriate for the FEHBP. OPM and the OPM OIG reviewed a draft of this report and provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Acting Director of OPM, appropriate congressional committees, and other interested parties. The report also will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. There are requirements and procedures for reporting potential fraud and abuse that apply to both the carriers that offer health care plans through the Federal Employees Health Benefits Program (FEHBP) and the Office of Personnel Management (OPM). These requirements and procedures include communication between the carriers and OPM as well as communication with law enforcement agencies and others, such as Congress. Carriers are required to report to both OPM’s Office of the Inspector General (OIG) and OPM’s Healthcare & Insurance—Federal Employee Insurance Operations office, which we refer to as the contracting office. Carriers are required to report potential fraud and abuse to the OIG. According to a letter OPM sent to carriers in 2011, carriers are required to provide written notice to the OIG within 30 working days of becoming aware of possible fraud or abuse. 
Carriers are to notify the OIG regardless of the amount of money involved and without waiting to determine whether there is sufficient evidence to substantiate the allegation. The notice is generally to include information about the identity of the suspected health care provider(s) or enrollee(s) and a brief description of the allegation, among other things. The notice may request that the OIG monitor a case being developed in preparation for a pending referral, or it may request that the OIG decline the case so that the carrier may participate in a class action lawsuit. According to OIG officials, the notice may also include a referral to the OIG (i.e., a request that the OIG evaluate the case). As shown in figure 1, the carrier’s reporting requirements after providing initial notice of potential fraud or abuse depend on the OIG’s response: If the OIG decided to monitor the case and asked the carrier to continue its investigation, then the carrier is to provide the OIG with written status updates when it has additional information to substantiate or refute the allegation. If the OIG declined the case, the carrier may proceed with its investigation without further contact with the OIG unless (1) the carrier develops significant new information and believes that the OIG should reconsider; (2) the case is accepted for investigation by one or more other federal, state, or local law enforcement agencies; or (3) the case is accepted for prosecution at the federal level, such as by a U.S. Attorney’s Office or by the Department of Justice. If any of these events occurs, the carrier is to provide the OIG with a written status update. If the OIG requested a referral, the carrier is to submit the referral within 120 days or to provide monthly status updates starting on day 121. 
Upon request, all carriers must furnish the OIG with FEHBP claims information and supporting documentation relevant to open criminal, civil, or administrative investigations and, absent extenuating circumstances, must do so within 30 calendar days. The carrier may refer the case to the OIG at any time and does not need a request from the OIG to do so. Carriers are to submit status updates, which summarize new information, to the OIG when:
- the carrier develops significant new information that the carrier believes would aid the OIG in determining whether to request a referral or decline the case;
- the carrier determines that the allegation had no merit or that no false or fraudulent activity took place as alleged;
- the carrier closes its inquiry;
- the carrier wishes to proceed with administrative debt collection, recovery, or settlement of an FEHBP overpayment; or
- the OIG requests a status update.

When the carrier refers the case to the OIG (whether the referral was solicited by the OIG or not), the referral is to be in writing and to include (among other things) information about the identity of the suspected health care provider(s) or enrollee(s); a comprehensive description of the suspected fraud or abuse; and copies of any analyses, documents, or other information the carrier has that is relevant to the allegation. Each carrier files two annual reports with the OPM contracting office; these reports provide information about the carrier’s fraud and abuse program and summarize relevant activities. In one report, the carrier is to respond to a questionnaire that OPM’s contracting office staff use to obtain information about the carrier’s procedures for reporting potential fraud, including its criteria for notifying law enforcement agencies and the OIG of potential fraud and how it manages referrals from hotlines, law enforcement agencies, and others. 
(This annual report also covers other aspects of the carrier’s fraud and abuse program—its organization, certain budget information, performance indicators, and other operational information.) A second report provides additional information about the fraud and abuse program and covers the carrier’s fraud and abuse activities and outcomes involving the FEHBP during the year—the number of cases it opened, dollars recovered, number of criminal convictions, and so forth. OPM is required to provide information to the carriers that notified it of potential fraud or abuse and, depending on the case, may be required to report information to other law enforcement entities. In addition, OPM may share information about fraud or abuse with other carriers and antifraud entities. When the OIG receives a carrier’s notification of potential fraud or abuse, or when the OIG receives a status update about an instance of potential fraud and abuse from a carrier, the OIG is to respond in writing within 30 calendar days to inform the carrier of its level of interest in the case. Specifically, the OIG may (1) monitor the case, asking the carrier to continue investigating and provide status updates as appropriate, (2) decline the case, or (3) request a referral. (The OIG does not need a referral from a carrier to pursue a case; it has the authority to pursue any allegations of fraud or abuse that involve the FEHBP.) According to OIG officials, when reaching a decision about its response to the carrier, the OIG weighs information about patient safety; the type of fraud that is potentially at issue (e.g., whether it seems to involve a pattern of potentially fraudulent activity or not and whether it seems to be local or is potentially widespread); the evidence in support of the allegation; the dollar value of the alleged fraud or abuse; the resources that would likely be required to pursue the case; and whether a prosecutor is likely to pursue the case. 
Depending on the case, there may be requirements or procedures for the OIG to report information about potential fraud or abuse to other law enforcement entities, including the Federal Bureau of Investigation (FBI), U.S. Attorneys, or state or local prosecutors. According to the OIG’s investigative manual, preliminary evaluation of information regarding potential health care fraud involves determining whether there is enough risk of patient harm or enough risk of financial exposure to continue investigating the allegation. The OIG may also initiate a preliminary inquiry proactively, for example, when its review of information about a program yields information that suggests potential fraud or abuse. If a preliminary inquiry indicates credible evidence that a criminal, civil, or administrative violation may have occurred, the OIG decides whether to initiate an investigation, refer the allegation to another law enforcement agency, or seek to conduct a joint investigation with another law enforcement agency. The OIG may refer the allegation to another law enforcement agency when (1) the subject matter is by law investigated by another agency; (2) the allegation does not involve OPM employees, contractors, programs, or property; (3) the allegation involves OPM indirectly, while having a major impact on another agency; or (4) the allegation involves a threat to the safety of a high government official. According to the OIG’s investigative manual, these referrals are to include a presentation of the complaint or allegation and any facts developed by the OIG. Reporting to the FBI. According to the OIG’s investigative manual, the OIG is to refer a case to the FBI when appropriate. In addition, if the OIG determines that there is sufficiently credible evidence to convert a preliminary inquiry into a criminal investigation, guidance from the Office of the Attorney General specifies that the OIG is to notify the FBI within 30 days, absent exigent circumstances. 
According to OIG officials, when reaching a decision about whether to convert a preliminary inquiry into a criminal investigation, the OIG weighs patient safety, the extent of likely FEHBP exposure, the extent to which the allegations appear to be supportable, and the likelihood of prosecution (either criminal or civil). Reporting to U.S. Attorneys. According to the OIG’s investigative manual, if the OIG determines that a case should be referred to a U.S. Attorney’s Office, a formal presentation normally occurs after the OIG has completed its investigation. The OIG expects its agents to establish and maintain working relationships with U.S. Attorneys’ Offices, and expects them to consult as soon as they have information that indicates that an investigation may corroborate an allegation. According to the OIG, early consultation allows the OIG to focus its investigative efforts on cases with prosecutive potential. If early consultation results in acceptance of a case for prosecution, the OIG is to provide the prosecutor with a preliminary investigation report. Reporting to state or local prosecutors. If a case is declined by U.S. Attorneys, the OIG may refer the case to state or local prosecutors. Under the Inspector General Act of 1978, as amended, the OIG is required to report its activities and accomplishments (including those related to fraud and abuse) to Congress semiannually, and the agency is required to submit a response to each semiannual report. The OIG may also prepare reports in response to requests from other federal agencies (including GAO) or from congressional committees or members. The OIG may notify multiple carriers of suspected fraud or abuse and may share information about potential fraud and abuse through participation in interagency health care fraud task forces or other fraud detection and prevention organizations. 
Consistent with the Health Insurance Portability and Accountability Act, which encourages coordination and information sharing between federal, state, and local law enforcement programs to control health care fraud and the sharing of related information with the private sector, the OIG shares information with several antifraud entities:
- The OIG participates as a law enforcement liaison to the National Health Care Anti-Fraud Association (NHCAA). According to OIG officials, the OIG works with the NHCAA and its members (which include many FEHBP carriers) on training, education, and sharing information about trends in health care fraud and participates in NHCAA-sponsored training and conferences, and the OIG liaison to the NHCAA serves on NHCAA committees and meets with the NHCAA on a regular basis.
- The OIG organizes and operates an FEHBP Carrier Task Force that includes the OIG and representatives of the largest FEHBP carriers. According to OIG officials, this task force meets on a quarterly basis to share information about cases and fraud trends and to discuss emerging fraud-related issues.

In addition to the contact named above, key contributors to this report were Kristi Peterson, Assistant Director; Kristen Joan Anderson; George Bogart; and Jennel Lockley.
The FEHBP provides health care coverage to millions of federal employees, retirees, and their dependents through health insurance carriers that contract with OPM. Carriers offer plans in which eligible individuals may enroll to receive health care benefits. OPM negotiates these contracts; requires that each carrier establish a program to prevent, detect, and eliminate fraud and abuse; and oversees carriers' fraud and abuse programs. Although the extent of fraud and abuse in the FEHBP is unknown, any fraud or abuse that does occur contributes to health care costs and may be reflected in the premiums for FEHBP enrollees. GAO was asked to review OPM's oversight of FEHBP fraud and abuse programs. This report describes (1) oversight of fraud and abuse programs by OPM's contracting office and (2) the OPM contracting office's approach to measuring the effectiveness of FEHBP carriers' fraud and abuse programs. To do so, GAO reviewed documents that specify program requirements and guidance, such as carrier contracts and letters from OPM to carriers; documents that assist oversight of fraud and abuse programs, such as annual reports that OPM requires from carriers; and documents demonstrating oversight of carriers, such as memos to carriers from OPM contracting office staff regarding carriers' compliance. GAO also reviewed published work about measuring the effectiveness of antifraud programs. GAO interviewed OPM officials and officials from entities with expertise related to antifraud programs and measurement. The Office of Personnel Management (OPM) Healthcare & Insurance--Federal Employee Insurance Operations office, which we refer to as OPM's contracting office, monitors Federal Employees Health Benefits Program (FEHBP) carriers' compliance with requirements and other guidance for preventing, detecting, and eliminating fraud and abuse. 
These requirements include establishing a program to assess vulnerability to fraud and abuse, reporting annually on program outcomes, reporting potential fraud to OPM's Office of Inspector General (OIG), and implementing corrective actions to address deficiencies in fraud prevention programs. OPM's guidance encourages carriers to implement certain program standards, such as formal fraud awareness training for all employees. To monitor carriers' compliance with these requirements and other guidance, OPM's contracting office staff conduct the following activities.
- Review carriers' annual reports: Staff review information contained in annual reports from carriers that describe the carriers' fraud and abuse programs and their outcomes. Officials told us that they assess information in carriers' annual reports against program requirements and guidance and follow up with carriers whose reports suggest possible noncompliance.
- Conduct site visits: Staff also inspect and follow up on carriers' fraud and abuse programs during periodic site visits. Using a risk-based site selection strategy, OPM contracting office staff conducted site visits of 27 carriers whose plans covered about 70 percent of FEHBP enrollees in 2012.
- Review and resolve OIG audit findings: Staff review and resolve OIG audit findings that identified areas of carriers' noncompliance.
- Review disputed claims and enrollee complaints: Staff review disputed claims and enrollee complaints to identify indicators of potential fraud or abuse, such as suspicious patterns of drug utilization.

OPM contracting office staff review certain outcomes of carriers' fraud and abuse programs, but several factors contribute to the challenge of assessing program effectiveness. 
Program outcomes in 2011 included 29 criminal convictions and more than $23 million in recoveries to the FEHBP, but program outcomes do not provide complete information about program effectiveness because they do not measure the success of efforts to prevent or minimize fraud and abuse. OPM contracting office staff reported that they have not adopted specific measures of program effectiveness for FEHBP fraud and abuse programs because they have not identified an appropriate way to measure the effectiveness of antifraud programs. Several factors contribute to difficulties in assessing the effectiveness of health care antifraud programs. These factors include lack of information about the baseline amount of fraud and abuse, difficulty establishing a causal link between antifraud activities and the amount of fraud and abuse, and difficulty measuring the effect of efforts to prevent or deter fraud and abuse. OPM and the OPM OIG provided technical comments, which we incorporated as appropriate.
In 2006, the Deputy Secretary of Defense created the Task Force for Business and Stability Operations. Its initial focus was to improve DOD’s contracting processes as a means to increase the number of DOD contracts awarded to Iraqi firms and therefore help to develop businesses and create jobs. Soon thereafter, the Task Force’s scope of operations expanded to include efforts intended to restart Iraqi state-owned factories, attract foreign investment, improve private banking, and revitalize Iraq’s agriculture and energy sectors. For Iraqi state-owned factories, the Task Force procured spare parts, production equipment, and raw materials and provided training to employees. Additionally, the Task Force reported that it established temporary office space to provide accommodation for companies seeking to invest and establish a permanent presence in Iraq. To improve banking in Iraq, the Task Force reported that it helped establish capacity to transfer funds electronically. In July 2009, the Task Force began shifting its focus to Afghanistan at the request of the International Security Assistance Force, U.S. Central Command, and the U.S. Embassy in Kabul. Task Force officials and subject matter experts conducted a 3-month assessment to develop a strategy and plan for activities in Afghanistan. As a result, they identified several areas of the Afghan economy that they believed were viable for investment, such as minerals, indigenous industries, and agriculture. According to Task Force documentation, the Task Force completed a project in December 2010 with the Afghanistan government to rehabilitate an oil well to demonstrate the commercial feasibility of oil production in Afghanistan. It also has several activities ongoing in other areas, such as assisting the Afghan Ministry of Mines with collecting and collating geological data with the U.S. 
Geological Survey to complete tender packages for investment, building carpet finishing facilities to allow domestically finished carpets to be sold through an international outlet, and planning to construct agricultural colleges at Afghan universities that will serve farmers and agribusiness. In addition, the Task Force has ongoing activities in banking and finance, energy, software industry development, and information and communication technology development in Afghanistan. The Task Force uses a variety of approaches to conduct its work, including arranging visits for U.S. and non-U.S. investors to meet with business leaders and undertaking specific development projects that could involve building facilities or conducting assessments to identify potential opportunities. To implement its projects, the Task Force may use contractors to build facilities or provide assistance to host government ministries or organizations. While the Task Force undertakes some projects by itself, in other cases it works with other organizations, for example USAID, State, or other DOD organizations. In these cases, the Task Force may provide support to other agencies or complete a portion of a project. For example, the Task Force has worked with USAID on the rehabilitation and electrification of a cement plant in Parwan. As of June 2011, the Task Force consists of 51 government employees and 28 subject matter experts from private firms. Since its inception, the Task Force has received funds from a variety of sources, including the Army’s Operations and Maintenance appropriation account, the Iraq Freedom Fund, and the Office of the Secretary of Defense’s Emergency and Extraordinary Expense Fund. In January 2011, Congress passed the NDAA for Fiscal Year 2011, which authorized the Task Force to use up to $150 million of operations and maintenance funds available to the Army for overseas contingency operations for its activities in Afghanistan. 
Table 1 shows the funding available for the Task Force from fiscal year 2007 through fiscal year 2011. The NDAA for fiscal year 2011 required that State, DOD, and USAID jointly develop a plan to transition the activities of the Task Force to State, with a focus on potentially transitioning activities to USAID. The plan, which was to be submitted to Congress at the same time as the President’s fiscal year 2012 budget, was to describe (1) the Task Force’s activities in Afghanistan in fiscal year 2011; (2) the Task Force’s activities in fiscal year 2011 that USAID will continue in fiscal year 2012, including those activities that may be merged with similar USAID efforts; (3) any of the Task Force’s fiscal year 2011 activities that USAID will not continue and the reasons; and (4) those actions that may be necessary to transition Task Force activities that will be continued by USAID in fiscal year 2012. The NDAA also required the President, acting through the Secretary of Defense and the Secretary of State, to submit a report on an economic strategy for Afghanistan by July 6, 2011. Furthermore, the NDAA required the Secretary of Defense to submit a report describing the Task Force’s activities and how these activities support the long-term stabilization of Afghanistan by October 31, 2011. The fiscal year 2011 NDAA required that State, DOD, and USAID jointly develop a plan to transition the activities of the Task Force in Afghanistan to State. As of June 2011, the plan had not been submitted. Officials from DOD, State, and USAID told us that they are continuing to discuss the options for and timing of any transition and developing a response to satisfy the requirement for a plan in the fiscal year 2011 NDAA. According to USAID officials, to plan for any transition, they would need detailed information about the Task Force activities, such as project objectives, timelines, costs, contracting, and actual results. 
To identify factors to consider in planning for any transition of Task Force capabilities from DOD to USAID, we interviewed DOD, State, and USAID senior-level policy officials in Afghanistan and Washington, D.C. We obtained their views on the respective capabilities and operational approaches of the Task Force and USAID and reviewed relevant and available documentation. As a result, we identified five factors to consider in planning for any transition, which generally relate to how these agencies conduct their respective activities. Approaches to economic development. Although we identified some overlap in the roles of the Task Force and USAID, since both entities work to promote economic development in Afghanistan, they generally take different approaches to achieve their goals. In particular, USAID officials noted that in addition to other activities, USAID focuses more broadly on efforts to improve the environment for investments whereas the Task Force focuses on brokering specific investment deals. Specifically, the Task Force was designed to be a small, flat, flexible organization that generally conducts short-term initiatives in various sectors of the Afghan economy. For example, the Task Force is building a raisin processing facility in Kandahar to process raisins for export. It also facilitated meetings for Sweet Dried Fruit, the largest U.S. importer of raisins, to purchase Afghan raisins for the U.S. market. USAID is a larger development agency operating in many sectors ranging from infrastructure construction to capacity building as well as promotion of private sector development, both in the short and long term. For example, USAID worked with ministries to develop public administration and management capacity to foster government reform and establish the conditions for economic development. 
In addition, USAID generally focuses on both small and large infrastructure projects, ranging from small health clinics to agricultural colleges to roads and power plants. Furthermore, in some cases, USAID and the Task Force work in the same sectors of Afghanistan, but U.S. development officials in Afghanistan do not consider Task Force projects to be duplicative of USAID efforts. For example, USAID officials noted that USAID and the Task Force are both involved in the Afghan mining sector. USAID is focused on improving the regulatory policies to promote mining sector development and attract private sector investment through conferences, while the Task Force is focused on collecting and collating mining data with the U.S. Geological Survey, developing detailed investment proposals, and identifying and attracting investors. Freedom of movement. According to USAID, State, and DOD officials, Task Force employees have greater freedom of movement than USAID employees because the Task Force employees operate outside of Chief of Mission authority and therefore are not required to follow the security protocols of the U.S. Embassy in Kabul’s Regional Security Officer. In addition, the Task Force maintains its own security detail and is a DOD entity. As a result, Task Force employees have an increased ability to directly implement and oversee the Task Force’s projects, greater access to military assets, and flexibility to host potential investors. USAID employees operate under Chief of Mission authority and are subject to more restrictions on their movements. As GAO has previously reported, movement restrictions affect the ability of USAID employees to directly implement and oversee USAID’s projects. USAID headquarters officials noted that USAID uses implementing partners to carry out some of its projects and that they operate outside Chief of Mission authority.
Senior State headquarters and USAID and State embassy officials said that lessening restrictions on USAID movement would require an exemption from the Regional Security Officer’s policy by the Ambassador and State’s Under Secretary for Management and would be challenging in the current security environment in Afghanistan. Furthermore, given the location and security requirements related to some of the Task Force’s work, such as mining, a memorandum of understanding between USAID and DOD might be necessary to provide USAID employees greater access to military security and transportation assets if Task Force activities are transitioned to USAID. USAID funding and staffing. USAID’s fiscal year 2011 budget and fiscal year 2012 budget request did not take into account any needs to support Task Force activities. However, USAID headquarters officials noted that if a transition were to occur, they have flexibility to reprogram funds to accommodate the Task Force projects selected for transition. To continue Task Force activities, senior-level embassy and USAID officials in Afghanistan also identified potential staffing challenges. For example, the Task Force consists of individuals with private sector expertise and business contacts who have agreed to live and work under the Task Force’s current security arrangement in Afghanistan (e.g., outside Chief of Mission authority) and are comfortable with the way the Task Force operates. According to USAID officials, many of its employees also have private sector experience and business contacts, but they live and work under a different security arrangement (e.g., under Chief of Mission authority). Embassy personnel stated that because of differences in the way the two agencies approach their activities, it may prove challenging for USAID to attract employees with the same expertise in brokering investment deals as currently exists within the Task Force. Facilitating private investment in Afghanistan.
While both USAID and the Task Force facilitate private investment, the nature and focus of their interactions with investors differ. For example, the Task Force identifies and provides direct logistical and consultative support to U.S. and non-U.S. potential investors. Such support includes advising companies on investment opportunities, arranging access to Afghan business leaders and officials, and providing temporary housing, transportation, and office space while investors evaluate opportunities and set up their own operations. The Task Force has hosted major international corporations and investors in Afghanistan, including Citibank, IBM, JP Morgan, Sweet Dried Fruit, Case New Holland, and Harrods of London. With respect to facilitating private investment, USAID typically hosts conferences that are designed to attract businesses or share information. Given their differences in approach, interaction with investors, and flexibility to move around, as previously discussed, senior USAID and State officials in Afghanistan agreed that these investment activities currently conducted by the Task Force may not continue if a transition to USAID occurs. Timing of transition and linkage to U.S. objectives in Afghanistan. Task Force activities in Afghanistan are intended to support objectives associated with the revised U.S. Integrated Civilian-Military Campaign Plan for Support to Afghanistan. The plan has several objectives associated with U.S. goals and with the International Security Assistance Force’s lines of operations, including “Advancing Livelihoods and Sustainable Jobs.” Under this objective, the United States seeks to increase the productivity of small and medium-sized enterprises and promote domestic and foreign private sector investment in Afghanistan into 2012.
Because the Task Force is involved in various efforts to spur private investment, senior-level DOD, State, and USAID officials in Afghanistan have stated that a transition in the near term may negatively impact these efforts, which are deemed essential for the transition of U.S. forces out of Afghanistan. To guide Task Force activities, DOD’s senior leadership and the Task Force Director have provided high-level, general direction to Task Force activities; however, the Task Force has not developed written guidance to be used by its personnel in managing Task Force projects. In addition, while interagency information-sharing mechanisms exist in Afghanistan, the Task Force does not routinely participate in these mechanisms, nor have DOD, State, and USAID determined how to integrate the Task Force into these information-sharing efforts. DOD’s senior leadership and the Task Force Director have provided high-level, general direction for Task Force activities, such as broad goals, an operating philosophy, and management practices. However, the Task Force has not developed written guidance to be used by its personnel in managing Task Force projects. Such guidance could include elements such as project selection criteria, requirements to establish project metrics, monitoring and evaluation processes, and the type of project information that should be collected and documented. DOD and the Task Force have issued various memorandums that have broadly guided the Task Force’s activities. For example, the Task Force’s mission and goals were established through three memorandums issued by the Deputy Secretary of Defense and Secretary of Defense over the time period from 2006 to 2010. The June 2006 memorandum stated, for example, that the Task Force was to accelerate DOD’s stabilization and reconstruction operations through economic development activities in Iraq and Afghanistan.
Additionally, in December 2009, the Director of the Task Force issued a management memorandum outlining the Task Force’s operational model, which mentioned that the Task Force has been successful because it is designed to flexibly respond to the dynamic operating environments of Iraq and Afghanistan while combat operations were ongoing and emphasized the necessity of field-based project management. Task Force officials stated that they use various practices to manage activities, such as holding periodic internal management meetings to review plans and monitor project implementation. In addition, we found that Task Force officials also maintain some project information. Based on our discussion with Task Force officials and our review of Task Force documentation, we confirmed that some of the information contained in the project files included project descriptions, goals, objectives and metrics, contract information, and financial information. We found that the level of detail on the project information maintained by the Task Force varied, such as for data on cost, status, and metrics. For example, the Task Force’s project files on its factory restart efforts in Iraq included detailed information such as cost and project status, and such data were updated periodically. In contrast, the Task Force’s project documentation on its agricultural assessment activities contained related final reports, but the documentation did not contain information on cost and only one report contained schedule information. Furthermore, the Task Force’s electronic fund transfer assistance center in Iraq tracked metrics such as the number of problems reported and the causes of the problems. In contrast, Task Force project documentation on its private investment facilitation efforts in Iraq did not have clearly defined metrics.
Neither DOD memorandums nor the Director’s memorandum describing the Task Force’s operational model outline specific guidelines for project management, such as project selection criteria, requirements to establish project metrics, monitoring and evaluation processes, or how program managers should maintain project information. Standards for Internal Control in the Federal Government requires agencies to document guidance to help manage agency activities but allows agencies to tailor control activities. According to the standards, written guidance that directs project management is an integral part of an agency’s planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. We also note that two assessments of the Task Force’s activities have identified a need for project guidance. First, in 2009, the Task Force appointed an assessment team to evaluate its activities to restart state-owned factories in Iraq. This assessment team stated, among other things, that the lack of project documentation made it difficult to gain a clear understanding of the Task Force’s operating environment. In addition, the assessment team noted that the Task Force should consider developing standard processes and procedures for internal controls and a standard repository for project reporting. Second, the Task Force conducted an internal assessment and released the findings in February 2009, which noted that basic managerial structure and processes were lacking to ensure continuity of operations and that it would issue new guidelines for operational management. According to Task Force officials, the December 2009 memorandum outlining the Task Force’s operational philosophy was issued in response to this internal review. However, it did not contain specific management guidelines, and no other guidance has been issued. 
Senior Task Force officials told us that they have recognized the need to establish project management guidance; however, they stated that taking this action was not a priority because at times the future of the Task Force was uncertain. For example, from late 2008 through March 2009, it was unclear whether the Task Force would be reauthorized by the Secretary of Defense to continue activities in Iraq. As a result, a large number of Task Force staff left the organization, and when the Task Force was reauthorized in March 2009, senior Task Force officials stated that it only had three permanent staff members and had to recruit additional staff. During the time of organizational uncertainty, members of the Task Force were focused on completing projects in Iraq and were not, according to Task Force officials, focused on developing and documenting guidance and policies. DOD Instruction 3000.05 states that integrated civilian and military efforts are essential to conducting successful stability operations and requires DOD and its components to collaborate with other U.S. government agencies, among other organizations, involved with the planning, preparation, and conduct of stability operations. The 2010 DOD Task Force memorandum also requires the Task Force to coordinate with relevant U.S. government agencies for executing assignments in theater, as appropriate. In addition, according to the DOD Joint Publication on counterinsurgency operations, coordination and/or integration of military efforts with other governmental or nongovernmental efforts to achieve a whole of government approach is essential for successful counterinsurgency operations, which include stabilization efforts to foster economic stability and development. The Task Force has generally focused its information-sharing efforts on the senior U.S. official level in Afghanistan. According to Task Force officials, they regularly brief senior-level U.S.
military and civilian officials, such as the Commander of the International Security Assistance Force and U.S. Forces-Afghanistan, the Ambassador to Afghanistan, and the Special Representative for Afghanistan and Pakistan, on the activities and projects of the Task Force in Afghanistan. Senior Task Force officials stated that they have also shared information on their activities and projects with the USAID Mission Director and the Coordinating Director of Development and Economic Affairs at the U.S. Embassy in Kabul, Afghanistan. While the Task Force regularly shares information with senior leaders, its information sharing at the project management level in Afghanistan has been more ad hoc. Several civilian development officials in Afghanistan expressed concerns about the inconsistency of information sharing by the Task Force. For example, according to USAID officials in Afghanistan, information sharing between USAID and the Task Force has generally been limited and irregular. However, development officials also stated that coordination with the Task Force was generally better on joint projects, such as on a cement factory revitalization project in Parwan province. Task Force officials agreed that information sharing below the senior level is on an ad hoc basis and noted that they expected senior leaders they briefed to share information from the Task Force with appropriate staff within their own organizations. Task Force officials believe that they have interacted frequently below the senior level but acknowledged that there have been gaps in the Task Force’s information sharing and improvements could be made. While mechanisms such as interagency working groups exist in Afghanistan for agencies involved in development activities to share information, the Task Force does not routinely participate in these mechanisms, nor have DOD, State, and USAID determined how to integrate the Task Force into these information-sharing efforts.
The Task Force has been required to more formally share information on its projects and activities through other processes in the past, but these processes were either onetime requirements or are no longer applicable. For example, the NDAA for fiscal year 2011 required the Task Force to obtain the concurrence of the Secretary of State for its planned fiscal year 2011 projects in Afghanistan. State officials said that the concurrence process generally improved the visibility of Task Force activities in Afghanistan. The Task Force was also required to share information on its activities and projects in Afghanistan as part of the Commander’s Emergency Response Program (CERP). The Task Force used the program to implement some of its fiscal year 2010 projects in Afghanistan and had to meet the program’s requirements, which included a review process that involved USAID and U.S. military officials. However, pursuant to the NDAA for fiscal year 2011, the Task Force is no longer able to use CERP to implement its projects. Currently, a number of interagency working groups have been established to share information regarding various aspects of development. For example, there are interagency working groups directed by the Coordinating Director of Development and Economic Affairs at the U.S. Embassy in Kabul that are involved with development in Afghanistan. One such group, the Economic and Financial Policy Working Group, is responsible for implementing the U.S. economic growth strategy for Afghanistan. Embassy, USAID, and Task Force officials have stated that the Task Force does not regularly attend the working group’s biweekly meetings. Another mechanism mentioned in our prior work is the Combined Information Data Network Exchange used by the U.S. military to track CERP projects. This database includes information on CERP projects; an unclassified version of the database is accessible by USAID and other organizations.
However, agency officials have not agreed on the most appropriate mechanisms to use and the level of participation for the Task Force. Senior embassy officials stated that improved information sharing by the Task Force would help with unity of effort and that a mechanism to facilitate information sharing would be useful. Development officials have also noted the importance of improving information sharing by the Task Force to ensure that all U.S. government development projects in Afghanistan are coordinated to support the U.S. economic strategy. Furthermore, our prior work has highlighted the need to improve information sharing between agencies working on development in Afghanistan, particularly USAID and DOD, to improve coordination. Strengthening the Afghan economy through stabilization and development assistance efforts is critical to the counterinsurgency strategy and a key part of the U.S. Integrated Civilian-Military Campaign Plan for Support to Afghanistan. To support U.S. goals in Afghanistan, DOD’s Task Force and USAID both undertake efforts that promote economic development, including facilitating private sector investment. While the two organizations are similarly focused on stabilizing and developing Afghanistan’s economy, some differences exist in the way they carry out their projects and activities. Therefore, factors such as their respective approaches to economic development, ability to move around, and the types of activities they undertake to identify investment opportunities and interact with potential U.S. and non-U.S. investors are important considerations in planning for any transition. Written guidance is a key element that can help agencies manage their activities and establish internal controls. Without formally defined project management guidance, the Task Force does not have the framework needed to ensure a standard operating approach and consistent project management. 
In addition, the absence of such guidance makes it more difficult to ensure accountability among its employees, minimize the potential for waste and abuse, monitor and evaluate project effectiveness, and ensure a smooth transition as personnel join or leave the Task Force. Finally, whereas the Task Force, like other agencies operating in Afghanistan, has projects and activities that focus on economic development, improving efforts to share information could identify opportunities for synergy and help avoid duplication. Without an agreed-upon approach to more fully integrate the Task Force into existing information-sharing mechanisms in Afghanistan, DOD, State, USAID, and other agencies will not be in a position to fully leverage and coordinate their respective capabilities and efforts in support of achieving U.S. economic development goals. To ensure effective project management, oversight, and accountability, we recommend that the Secretary of Defense direct the Task Force to develop written guidance that documents, as appropriate, its management processes and practices, including elements such as criteria for project selection, requirements for establishing metrics and project documentation, and project monitoring and evaluation processes. To improve information sharing among the Task Force and other federal agencies involved with stabilization and economic development efforts in Afghanistan, we recommend that the Secretary of Defense in consultation with the Secretary of State and the Administrator of USAID determine the most appropriate mechanism for integrating Task Force participation. Such mechanisms could include formalizing the process previously used to obtain State concurrence on Task Force projects, participating in appropriate working groups in Afghanistan, and/or including Task Force project and activity information in existing databases. We provided a draft of this report to DOD, State, and USAID.
DOD and USAID provided written comments, which are reprinted in appendixes II and III, respectively. State provided oral comments on the draft. DOD and USAID also provided technical comments, which we incorporated where appropriate. In its comments, DOD partially concurred with our recommendation that the Secretary of Defense direct the Task Force to develop written guidance that documents, as appropriate, its management processes and practices. DOD stated that it encourages this practice and noted that the Secretary of Defense has issued the necessary directives and instructions to DOD components, including the Task Force, on the development of project management guidelines. DOD further stated that the Task Force is reviewing its program management processes and will consider how to implement our recommendation, to the extent practicable. Both DOD and State concurred with our recommendation that the Secretary of Defense in consultation with the Secretary of State and the Administrator of USAID determine the most appropriate mechanism for integrating Task Force participation in information-sharing efforts in Afghanistan. DOD stated that it has reached agreement with the senior leadership of State and USAID to enhance coordination and information sharing of Task Force activities. According to a Task Force official, the details of this agreement are being finalized and will be discussed in the forthcoming response to the fiscal year 2011 NDAA requirements. State noted that we had adequately captured the need for increased coordination, communication, and information sharing. In its comments, USAID expressed its view that overall the report contained inaccuracies and misrepresentations that need to be corrected. USAID also made several statements regarding the objectives of our report. Specifically, USAID asserted that our report addressed the issue of whether Task Force activities should continue to reside in DOD or be transferred to another agency. 
USAID further noted that the report makes no recommendation as to a transfer of activities, but believed our recommendation to strengthen internal Task Force procedures and processes seemed to acknowledge the continued existence of the Task Force, and our reluctance to recommend consolidation of Task Force activities stems from a lack of understanding of how USAID operates. It believed this lack of understanding was reflected in our discussion of the five factors to be considered in planning for any transition. Specifically, USAID cited our discussion of the differing approaches of the Task Force and USAID to economic development, stating that our report describes USAID as focusing on improving the environment for investments while the Task Force focuses on brokering specific investment deals. USAID stated that it does not focus only on improving the environment for investments, noting that it has one project with this goal and several projects that focus on other areas of investment, including brokering specific deals. In addition, USAID stated that our report notes that the Task Force has an advantage over USAID because it has greater flexibility to visit project sites and access to the military. USAID noted that both USAID and the Task Force use contractors, who have different and fewer security and movement restrictions than U.S. government employees, to implement projects. It specifically stated that USAID-employed Afghans and contractors can access all areas. We disagree that our report contains inaccuracies and misrepresentations, and believe that USAID has mischaracterized the intent of our work. Our objectives, as stated in the report, were to identify factors that should be considered in planning for any potential transition of Task Force capabilities to USAID. We did not evaluate whether such a transfer should occur, and therefore make no recommendation to that effect.
We disagree that our recommendation regarding the need for the Task Force to develop project management guidelines suggests the continued existence of the Task Force. Rather, such a framework will be necessary regardless of whether the Task Force continues to reside in DOD or transfers to another agency. We also disagree with USAID’s description of certain information in our report. Specifically, with respect to USAID’s approach to economic development, our report does not state that USAID only focuses on improving the environment for investment. Rather, we specifically discuss that USAID operates in many sectors in Afghanistan ranging from infrastructure construction to capacity building as well as promotion of private sector development, both in the short and long term. In particular, we note that USAID activities include sponsoring conferences where prospective investors have the opportunity to gather information about potential investment opportunities. Finally, we do not pass judgment on whether the Task Force has an advantage over USAID with respect to freedom of movement, but rather point out the conditions under which employees of the two agencies conduct their activities, such as whether they are subject to Chief of Mission authority. We also specifically discuss that USAID uses contractors to help implement its projects, and that these contractors have access to project sites. In light of USAID’s comments, we have clarified the report text to more clearly identify the instances in which we are referring to direct employees compared to contractors. USAID also commented on our recommendation that the Secretary of Defense in consultation with the Secretary of State and the Administrator of USAID determine the most appropriate mechanism for integrating Task Force participation in information-sharing efforts in Afghanistan. 
Specifically, it agreed with the need for more, and more effective, information sharing but believed that our recommendation fell short of addressing the need for full integration of stabilization and development activities across the federal government. USAID noted that information sharing is not enough if the U.S. government is to efficiently plan, manage, and integrate multiple development projects from different agencies in overlapping sectors or ministries. It emphasized that active senior management direction and support from the Task Force, along with State and USAID, are required for effective integration of planning and project execution, and that consolidation of Task Force and USAID activities would go even further to ensure that activities are fully integrated and that gaps or duplication do not occur. In particular, USAID proposed that we expand our recommendation on information sharing to require that the Task Force’s project portfolio management become more institutionalized and integrated into State and USAID planning and project reporting processes. We agree with USAID’s comments regarding the need for greater integration of U.S. activities, and believe that our recommendation, supported by other information contained in our report, specifically conveys this intent. In particular, our conclusions state that without an agreed-upon approach to more fully integrate the Task Force into existing information-sharing mechanisms in Afghanistan, DOD, State, USAID, and other agencies will not be in a position to fully leverage and coordinate their respective capabilities and efforts in support of achieving U.S. economic development goals. We also note that in presenting our recommendation, we identify various options for DOD, State, and USAID to consider for achieving better information sharing and integration, including formalizing the process used to obtain State concurrence on Task Force activities.
We note that this process, when used in the past, has involved both State and USAID review of Task Force activities. We are sending copies of this report to the Secretary of Defense, the Secretary of State, and the Administrator of the U.S. Agency for International Development. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9619 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. We began our review of the Department of Defense’s (DOD) Task Force for Business and Stability Operations (Task Force) under the authority of the Comptroller General of the United States to conduct work on his own initiative. The Joint Explanatory Statement accompanying the Ike Skelton National Defense Authorization Act for Fiscal Year 2011 recognized GAO’s ongoing review and directed GAO to include some additional information in its report. This report (1) identifies factors to consider in planning any transition of Task Force capabilities to the U.S. Agency for International Development (USAID) and (2) evaluates the extent to which the Task Force had established guidance to manage its activities and shared information with other U.S. civilian agencies. In our discussion of factors, we included information on the relationship between Task Force activities and the U.S. Integrated Civilian-Military Campaign Plan for Support to Afghanistan. To identify factors to consider in planning any transfer of Task Force capabilities to USAID, we interviewed cognizant DOD, Department of State (State), and USAID senior-level policy officials, including officials at the U.S. Embassy in Kabul. At the U.S. 
Embassy in Kabul, we interviewed the Coordinating Director for Development and Economic Affairs and officials in the Economic Section, including the Economic Counselor; the Interagency Agriculture Team; and the Civilian-Military Plans and Assessments Team. We also interviewed USAID officials in Afghanistan, including the Mission Director in Afghanistan and officials in the Office of Economic Growth and Governance; the Office of Infrastructure, Engineering, and Energy; and the Stabilization Unit. During our interviews, we specifically obtained these officials’ views on the respective capabilities and operational approaches of the Task Force and USAID and reviewed relevant and available documentation. To determine how Task Force activities support the U.S. Integrated Civilian-Military Campaign Plan for Support to Afghanistan, we reviewed the 2009 and 2011 versions of the plan, as appropriate, to determine what campaign objectives Task Force activities support and interviewed relevant agency officials in both Washington, D.C., and Afghanistan. To evaluate the extent to which the Task Force has established guidance to manage its activities, we reviewed documentation describing the Task Force’s operating approach, projects and activities, performance goals and measures, and budget submissions and security protocols. We compared this information to requirements for documentation contained in our internal control standards and prior work related to management and evaluation. To evaluate the extent to which the Task Force shared information on its activities with other civilian agencies involved with economic stabilization efforts in Afghanistan, we reviewed DOD guidance, such as DOD Instruction 3000.05, and National Security Presidential Directive 44, to determine coordination requirements. We also interviewed officials from DOD, State, USAID, and the U.S. 
embassies in Baghdad and Kabul to identify the types of information shared and any processes used to share information. We focused this portion of our review on the information-sharing activities and practices in Afghanistan because the Task Force ceased its operations in Iraq in January 2011. We conducted this performance audit from August 2010 through July 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Carole Coffey, Assistant Director; Johana Ayers; Carolynn Cavanaugh; Burns Chamberlain; Nicole Harms; Mae Jones; Anne McDonough-Hughes; Jamilah Moon; Marcus Oliver; Michael Pose; and Michael Rohrback made key contributions to this report.
The Department of Defense (DOD), the Department of State (State), and the U.S. Agency for International Development (USAID), among others, are involved in economic development activities in Iraq and Afghanistan. In June 2006, DOD established the Task Force for Business and Stability Operations (Task Force) to support its related efforts. The National Defense Authorization Act (NDAA) for Fiscal Year 2011 required that DOD, State, and USAID jointly develop a plan to transition Task Force activities to State, with a focus on potentially transitioning activities to USAID. Under the authority of the Comptroller General of the United States to conduct work on his own initiative and with additional congressional direction, GAO identified (1) factors to consider in planning any transition of Task Force activities and (2) the extent to which the Task Force established guidance to manage its activities and has shared information with other federal agencies. GAO analyzed documents and interviewed multiple agency officials in Washington, D.C., Iraq, and Afghanistan. As of June 2011, DOD, State, and USAID officials were discussing options for transitioning Task Force activities and preparing a response to the fiscal year 2011 NDAA requirements. Based on interviews with senior officials and a review of available data, GAO identified five factors to consider in planning for any transition of Task Force activities to USAID, which generally relate to how these agencies conduct their respective activities. First, although both the Task Force and USAID work to promote economic development, they generally take different approaches. The Task Force is a small, flat, flexible organization that generally conducts short-term initiatives, while USAID is a large agency that conducts both short- and long-term projects. USAID officials noted that, in addition to other activities, USAID focuses on efforts to improve the environment for investments, whereas the Task Force focuses on brokering specific investment deals. 
Second, as part of DOD, Task Force employees are not subject to the same movement restrictions as USAID employees and have greater flexibility to visit project sites and greater access to military assets. Third, funding and staffing plans would need to be developed. For example, USAID's fiscal year 2011 budget and fiscal year 2012 budget request did not take into account any needs to support Task Force activities. Fourth, while both agencies facilitate private sector investment, the nature and focus of their interactions with investors differ. For example, the Task Force actively identifies potential U.S. and non-U.S. investors, arranges meetings, and provides logistical support for them, whereas USAID typically sponsors conferences to provide opportunities for prospective investors to share information. Given these differences, State and USAID officials agreed that the same types of private investment activities conducted by the Task Force may not continue at USAID. Last, the timing of a transition and its impact on U.S. objectives will need to be considered. DOD, State, and USAID officials noted that because Task Force activities are important to supporting the U.S. goal of attracting investors, a transition in the near term may negatively affect these efforts. While DOD and the Task Force have provided high-level direction for Task Force activities, the Task Force has not developed written project management guidance to be used by its personnel in managing Task Force projects. Such guidance could include important elements, such as project selection criteria, requirements to establish metrics, and monitoring and evaluation processes. As a result, the Task Force does not have the framework needed to ensure a standard operating approach, accountability, and consistent project management. The Task Force has generally focused its information-sharing efforts on senior officials in Afghanistan, whereas its efforts at the project management level have been more ad hoc. 
Mechanisms such as working groups exist for agencies involved in development activities to share information. However, the Task Force does not routinely participate, and DOD, State, and USAID have not identified how best to integrate the Task Force into these mechanisms to share information on its activities. As a result, the U.S. government may not be positioned to fully leverage and coordinate the agencies' respective capabilities and efforts in support of achieving U.S. goals. GAO recommends that the Task Force develop written project management guidance and that DOD, State, and USAID develop an approach to integrate the Task Force into information-sharing mechanisms. DOD partially concurred with the first recommendation. The three agencies generally concurred with the second.
CMS administers the 1-800-MEDICARE help line to answer beneficiaries’ questions about Medicare eligibility, enrollment, and benefits. The help line currently operates 24 hours a day, 7 days a week, and has eight call center locations that are run by two contractors. As of October 2004, the primary contractor managed 1,332 of the 2,137 CSRs and operated seven of the eight 1-800-MEDICARE call centers. In addition, the primary contractor is responsible for other activities, such as distributing program material requested by callers, training all new CSRs before they handle calls on the 1-800-MEDICARE help line, and researching answers to more complex questions some callers may have. Prior to 2004, one contractor managed the 1-800-MEDICARE help line. In June 2004, in response to increasing call volume, CMS hired a second contractor, which in October 2004 managed 805 CSRs and operated one of the eight 1-800-MEDICARE call centers. A call placed to 1-800-MEDICARE is answered initially by an interactive voice response system. This is an automated system that, depending on the caller’s responses to the system’s automated prompts, routes a call to a CSR or to other information sources. These other information sources can include the other help lines maintained by Medicare’s claims administration contractors or recorded information. All CSRs must have a high school diploma or its equivalent, but they are not required to be knowledgeable about the Medicare program at the time they are hired. To help provide clear, accurate, and timely answers to callers’ questions, CMS expects the CSRs to use written scripts, which contain information about the program. CSRs listen to a caller’s question and then type in related keywords to generate a list of suggested scripts that could be used to answer the question. The CSRs select the script they consider best suited to answer the question and read either excerpts or the entire script. CSRs can also consult other information sources. 
For example, CSRs can use Web-based tools available on the Medicare Web site to help beneficiaries select a prescription drug discount plan, nursing home, or home health agency. Because the types of questions frequently posed to 1-800-MEDICARE change in response to program or other policy changes, new scripts may need to be created or existing ones updated. Either CMS or the primary contractor may decide to develop a new script or update an existing one for clarification or in response to program changes. CMS officials approve scripts that are developed by the primary contractor, and check them for accuracy and completeness. The 1-800-MEDICARE help line provided accurate answers to 61 percent of the 420 calls we made. The accuracy rate varied significantly among the six questions we posed. Overall, 29 percent of our calls were answered inaccurately. In general, CSRs erred because they did not understand enough about the Medicare program to access the script with information to answer the question or clearly explain the material in it. In addition, for 10 percent of the calls we placed, we were unable to get a response to our question at the time we contacted the 1-800-MEDICARE help line, mainly due to problems when CSRs transferred calls. In response to our calls to the 1-800-MEDICARE help line, CSRs answered our questions accurately for 256 out of 420 calls, a rate of 61 percent. The criteria we developed to determine the information that constituted an accurate answer for each question are shown in table 1. (A more detailed version of the questions and information to answer them are in apps. II through VII.) The criteria were based on answers we developed from information on the Medicare Web site and were confirmed by CMS, which provided us with the scripts that contained information to answer the questions. 
We considered all calls we placed to the 1-800-MEDICARE help line to be part of our test of its accuracy, even if the call was transferred to a claims administration contractor to provide the answer. The percentage of calls CSRs answered accurately varied by question, as shown in figure 1. For example, CSRs accurately answered 81 percent of the calls asking whether a beneficiary could receive a prescription drug discount card if he or she had a Medigap policy. The answer to the Medigap question was clearly described in a script, which allowed CSRs to respond with the highest accuracy rate of all our questions. Similarly, for question 1—choosing a prescription drug discount card—CSRs answered accurately 76 percent of the time. By July 2004, when we placed our calls, a large number of CSRs had been recently hired and trained specifically to answer this question, using a script and a Web-based tool. In contrast, for question 2—the $600 prescription drug credit—CSRs answered inaccurately 79 percent of the time. CSRs scored poorly on this question primarily because they based their answers on the beneficiary’s total income without considering that some specific types of income should not be included in the calculation of eligibility for the credit. CSRs would have had to access two scripts to correctly answer the question, because the more general script on the topic did not contain all of the needed information. Question 5, which addressed Medicare part B enrollment, also had a relatively high inaccuracy rate—41 percent. We were not able to obtain an answer to some of our questions at the time that we called, most commonly when CSRs or the interactive voice response system transferred calls concerning questions 4 and 6 to other help lines. CSRs responded inaccurately to our questions largely because they did not seem to know enough to effectively use the scripts. 
According to a CMS official and the primary contractor’s staff, CSRs are expected to use scripts to guide their discussion with callers; they are not supposed to rely solely on acquired knowledge of Medicare to answer questions. We found, however, that in responding to our questions CSRs usually had one of four problems using scripts. The CSRs (1) did not seem to access a script, even when one was available; (2) did not seem to access a script with the right information to answer the question; (3) did not obtain enough information from the script; or (4) misunderstood some of the words in the scripts. We found instances when CSRs did not seem to access scripts when responding to calls. For example, when responding to our calls concerning the prescription drug discount card question, 2 CSRs indicated that they were not able to inform the caller about which card had the lowest drug prices—even though 53 other CSRs successfully used a script and a Web-based tool to answer this question. One other CSR referred our caller to AARP for an answer, rather than respond with the appropriate script and Web-based tool. These 3 CSRs did not seem to know how to correctly answer this question, which was addressed by one of the most commonly accessed scripts for the first half of the year. During 20 of the calls to answer our question on whether a spouse should enroll in Medicare part B if he had current employment-based health insurance, CSRs told our callers that enrolling in Medicare was a personal decision and they could not answer the question, which we classified as an inaccurate answer. They did not seem to recognize that they could access a script that contained information designed to answer that question. CSRs sometimes seemed to be accessing the wrong script to answer our question. 
For example, in answering the question on whether a beneficiary could receive a prescription drug discount card if she had a Medigap policy, one CSR incorrectly stated that the caller needed to complete a survey before receiving an answer. There is a script available that provides the answer to the Medigap question, but the script does not mention a survey. This CSR seemed to be using a different script about the prescription drug discount card—one with the right information to answer our question on the best prescription drug discount card to choose. In some cases, CSRs did not obtain enough information from the scripts they were using to accurately answer the question we asked. For example, to answer our question concerning whether a beneficiary could qualify for the $600 credit toward prescription drug purchases, the CSR should consider the source, as well as the amount, of the beneficiary’s income. Some sources of income are not counted in determining a beneficiary’s eligibility for the $600 credit. According to CMS, to answer this question accurately the CSR would have to access two different scripts. The first script provides general information about the $600 credit and refers CSRs to the second script, which lists the sources of income that are not included in the eligibility calculation. However, the CSRs who answered this question incorrectly in 55 calls—or 79 percent of the time—focused on the total amount of income and did not seem to consider the sources of the income or to access and use information from the second script. In 14 of the calls—or 20 percent of the time—CSRs were able to answer this question correctly, because they did consider the sources and amounts of income that we indicated the beneficiary had. Finally, CSRs sometimes misinterpreted or did not understand the words they were reading from the scripts or other written materials. 
For example, to answer our Medigap question, a CSR incorrectly told the caller that the beneficiary would automatically receive a prescription drug discount card if enrolled in a Medigap plan. The CSR may have been confusing Medigap policies with Medicare managed care plans, because both are discussed in the script that answered this question. In another example, for our question related to power wheelchair coverage, a CSR misread the requirement that a beneficiary must have adequate trunk—or upper body—strength. The CSR mistakenly informed us that a Medicare beneficiary needs to have adequate “trunk space” in order to qualify for a power wheelchair. When we asked for clarification, the CSR stated that Medicare requires a qualifying beneficiary to have adequate trunk space in his or her car to hold a power wheelchair. Similarly, during one of our calls about eye exam and glasses payment, the CSR informed us that an eye exam would be covered and then stated, “the only part of the exam that is not covered is ‘refraction,’ but I don’t know exactly what that is.” Because the CSR did not understand that a typical eye exam would be considered a refraction, she gave the caller the incorrect impression that Medicare would pay for a routine eye exam. CMS and contractor staff acknowledged that scripts for the 1-800-MEDICARE help line are not routinely pretested to ensure that both the CSR and the caller can understand the script before it is used to answer callers’ questions. On occasion, the 1-800-MEDICARE contractor has obtained CSRs’ feedback on draft scripts before they are used on the 1-800-MEDICARE help line to ensure that scripts can be easily read and understood. But this is not done as a routine step before new and revised scripts are used in handling calls. In addition, even if the CSRs consider the script understandable, it may still be confusing to Medicare beneficiaries. 
We found that pretesting to ensure that written material is understandable to its intended audience is a standard practice used to develop effective communications materials. For example, prior to issuing the first Medicare & You handbook nationwide to beneficiaries, CMS conducted consumer testing of its publication to evaluate its effectiveness as a communication tool. CMS has revised subsequent editions of the handbook to make it easier to read and use, based on feedback from beneficiaries. Moreover, other HHS agencies, such as the Centers for Disease Control and Prevention and the Substance Abuse and Mental Health Services Administration, have developed guidance on steps for ensuring that written material is understandable for intended readers and for pretesting the materials before use. For 10 percent (42) of the 420 calls we placed, we did not receive an answer to our question at the time we originally called. Several reasons accounted for this, as table 2 shows. For half (21) of the unanswered calls, the CSRs or the interactive voice response system transferred calls placed during morning, evening, and weekend hours to claims administration contractors that were not open for business at the time of our call. Although the 1-800-MEDICARE help line is open 24 hours a day, 7 days a week, these other help lines are not. The transferred calls pertained to our questions concerning Medicare coverage of power wheelchairs and eye exams and glasses. The 1-800-MEDICARE CSRs or the interactive voice response system transfer such questions to the claims administration contractors’ help lines because these contractors generally have greater knowledge about Medicare coverage issues. Once our calls were transferred to closed help lines, we generally heard recordings that stated the contractors’ regular business hours and suggested calling back at that time. 
However, for 7 of those 21 calls, the contractors’ recorded messages did not provide a telephone number to use to call back during the stated business hours. The second most common reason we did not receive answers to our calls was that the calls were disconnected. Sixteen of the 42 unanswered calls were disconnected before we were able to obtain a response to our questions; in one instance, the call was placed on hold for 30 minutes and then disconnected. Four calls made on the same day did not receive a response because computer maintenance prevented the CSRs from accessing the Web-based tool they needed to use to answer our question about the prescription drug discount card. Finally, one other call was unanswered because it was routed to a wrong telephone number. As required by CMS, both newly hired and experienced CSRs receive training to help them answer questions posed on the 1-800-MEDICARE help line. The training for newly hired CSRs includes instruction on accessing and using scripts, customer service etiquette, and information about the Medicare program. As part of the training, CMS requires newly hired CSRs to score 90 percent or higher on a written exam before they handle calls on the help line. All CSRs also receive continuing training and take written quizzes on the new material. Although the 1-800-MEDICARE contractors met CMS’s training requirements by providing instruction and testing, the testing does not fully measure CSRs’ ability to accurately answer real questions from callers. The primary contractor develops and conducts the training new CSRs receive. Most of the training consists of 2 weeks of classroom instruction. In general, the instruction introduces CSRs to scripts and provides general information about the Medicare program. 
For example, in a training session we observed for newly hired CSRs in June 2004, the instructors helped the CSRs prepare for the types of inquiries that might be expected from callers on the 1-800-MEDICARE help line. The instructors posed different questions to the class, and each CSR attempted to identify and access a script with the right information to answer the instructor’s question. One CSR would be selected to read the script he or she chose, and participants discussed whether they thought this was the script with the best information to answer the question. After new CSRs complete their initial instruction, CMS requires them to score at least 90 percent on a written exam and successfully complete a call handling simulation exercise before they answer calls on the help line without supervision. To successfully complete the call handling simulation exercise, CSRs must accurately answer two consecutive simulated help line calls out of six possible attempts. In addition, new CSRs generally spend about 4 hours listening to calls answered by an experienced CSR. In addition to the initial training for newly hired CSRs, CMS requires all CSRs to receive continuing training. Continuing training is delivered through three methods: refresher classes, online broadcast announcements, and small group meetings. Weekly refresher classes provide a means of instructing CSRs about Medicare program changes. Following each refresher training class, CSRs complete a short quiz to show that they understand the new information. While there is no minimum score that CSRs must achieve on the short quiz, CMS staff informed us that help line supervisors review each quiz to ensure that any questions that posed problems for CSRs would be addressed with further training. To provide CSRs with information quickly, the primary contractor sends online broadcast announcements to each CSR’s computer workstation. 
These online announcements usually contain information that may affect CSRs’ responses to help line questions, such as news about a change in a specific script. Lastly, small group meetings of about 12 CSRs and their supervisor are held for 30 minutes each week so that CSRs can discuss topics that can help them improve their call handling skills. After gaining experience in answering calls, some CSRs receive 4 additional days of special training and are promoted to a senior position. These CSRs receive classroom training on using Web-based computer programs that can assist Medicare beneficiaries in selecting a managed care plan, a nursing home, or other Medicare-related services. Like other CSRs, they must score 90 percent or higher on a written exam, and successfully complete a simulated call handling exercise before they can handle calls using the Web-based computer programs. Currently, about 200 senior CSRs answer calls on the 1-800-MEDICARE help line. Although all CSRs receive training and are tested as required, the responses we received indicate that not all CSRs had the necessary knowledge and skills to answer our questions accurately. In our opinion, testing how effectively CSRs use scripts to answer frequently asked questions provides the best measure of their preparation to do so. While 24 of the exam’s 52 questions ask CSRs to identify scripts that could be used to answer specific inquiries, the remaining 28 questions target other skills. In addition to the written test, new CSRs must appropriately answer questions posed in two consecutive simulated calls before they staff the help line. This simulated call handling and some of the written exam questions are the only measures of the CSRs’ ability to accurately answer calls using scripts. 
In combination, the test and the two simulated calls do not appear to be a sufficient measure of new CSRs' ability to accurately answer the most frequently asked questions, given our findings on the accuracy of their responses. Further, while all CSRs receive continuing training, they are not required to demonstrate that they have effectively mastered the new material in handling calls. Developing a more targeted assessment of where CSRs need to augment their skills helped focus another help line's training efforts and allowed it to meet its accuracy goals. In 2001, we assessed the telephone help line maintained by the IRS to answer taxpayers' questions and found that it had not met the agency's goals for accurately answering general questions about tax law and specific questions about individuals' tax returns. In response, the IRS analyzed the specific types of inquiries within the area of tax law and individual returns that were answered inaccurately and identified the knowledge and skills its CSRs needed to answer questions more accurately. The IRS also identified the CSRs most in need of training to improve accuracy in those knowledge and skill areas and provided additional training to them before call volume increased for the 2002 tax season. By the end of the training period, these CSRs were required to be certified by their managers as capable of providing correct responses to taxpayer questions. The IRS also assigned responsibility for selected tax law topics to individual call center managers, making them accountable for ensuring that CSRs were trained and could accurately address inquiries on these topics. After these initiatives were complete, we found that the help line had improved its accuracy enough to meet its 2001 goals. In the span of 1 year, the accuracy rate on answering tax law questions increased from 79 to 85 percent and the accuracy rate for answering questions about individuals' tax returns increased from 88 to 91 percent. 
CMS monitors the 1-800-MEDICARE help line mostly by requiring its primary contractor to evaluate four individual conversations that each CSR has with callers each month. Based on these conversations, the primary contractor evaluates the performance of individual CSRs in several categories, including accuracy, and reports the overall results to CMS. CMS also occasionally directly monitors a small number of individual CSRs’ calls. However, the contractor’s and CMS’s monitoring does not systematically track the accuracy rates for commonly asked questions. As a result, the monitoring does not assess how accurately CSRs as a group answer particular questions, which could help CMS target additional training efforts. Two smaller evaluation efforts did focus on specific questions answered inaccurately, and these targeted monitoring efforts provided information that CMS used to improve CSR training and the scripts used on the help line. At the time of our review, CMS had delegated most of the responsibility for monitoring the accuracy of the 1-800-MEDICARE help line to the primary contractor, while maintaining oversight by reviewing the primary contractor’s results. To monitor 1-800-MEDICARE, the primary contractor focuses on the performance of individual CSRs, evaluating four calls per month for each person. The primary contractor evaluates either live conversations—known as blind monitoring—or recorded conversations on the help line, while viewing displays of the CSRs’ computer activity during calls. Viewing the CSRs’ computer activity allows the primary contractor’s staff to observe the scripts or other materials that CSRs access to answer callers’ questions. After monitoring a call, the primary contractor’s supervisory staff uses a checklist to evaluate the CSR’s response to the caller. Help line supervisors share the results with each CSR to help improve performance. The primary contractor provides monthly reports to CMS on the results of its monitoring. 
The primary contractor has a subcontractor, which is responsible for conducting some independent call monitoring, as well as reviewing the results of some of the primary contractor's call monitoring. In addition to the four calls per CSR per month that CMS requires the primary contractor to monitor, the subcontractor monitors up to one additional call per month per CSR. The subcontractor reports its monitoring results monthly to CMS and the primary contractor. In addition to meeting CMS requirements, the amount of call monitoring per CSR approximates industry standards. A survey of 735 North American call centers that represent help lines in various industries, including telecommunications, financial services, and health care, found a wide variance in the number of calls monitored per month. The most commonly reported monthly monitoring frequencies were 4 to 5 calls per CSR or 10 or more calls per CSR. The evaluation checklist used by CMS's contractor for monitoring calls indicates that a CSR's performance should include certain components—such as using an appropriate greeting, showing respect to the caller, actively listening to the caller, responding accurately to the question, providing a complete response, using appropriate tone and speed, offering to provide additional information if necessary, and ending the call politely. The primary contractor's staff uses the checklist to evaluate both the customer service skills and knowledge skills demonstrated during a call and to classify each call into one of four categories—"unacceptable," "needs improvement," "achieves expectations," and "exceeds expectations." CMS requires the primary contractor to reach a quality rating of "achieves expectations" or higher for at least 90 percent of the total number of CSR calls evaluated each month. 
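The monthly quality requirement just described can be expressed as a simple check. The rating categories come from the contractor's evaluation checklist; the function name and sample data below are illustrative assumptions, not part of CMS's actual tooling:

```python
# Sketch of CMS's monthly quality requirement: at least 90 percent of
# evaluated calls must be rated "achieves expectations" or higher.
# The rating scale comes from the checklist; everything else is illustrative.
PASSING = {"achieves expectations", "exceeds expectations"}

def meets_monthly_requirement(ratings, threshold=0.90):
    """Return True if the share of passing ratings meets the threshold."""
    passing = sum(1 for r in ratings if r in PASSING)
    return passing / len(ratings) >= threshold

# A hypothetical month of 100 evaluated calls: 92 pass, 8 do not.
month = (["exceeds expectations"] * 20 + ["achieves expectations"] * 72
         + ["needs improvement"] * 5 + ["unacceptable"] * 3)
print(meets_monthly_requirement(month))  # 92 of 100 -> True
```

A month in which 92 of 100 evaluated calls meet or exceed expectations satisfies the 90 percent requirement; one with 89 passing calls would not.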
The primary contractor evaluates a call as either "accurate" or "inaccurate," and because accuracy is weighted more heavily than other components, a CSR cannot provide inaccurate information on a call and still have the call scored as "achieves expectations." However, in contrast to our methodology for this report, a CSR can provide incomplete information—information that is correct but does not fully answer the question—and still have the call scored as "achieves expectations." In addition, the evaluation checklist does not indicate the specific criteria used to determine a call's accuracy. Although CMS's main role in monitoring the help line is to review the efforts of the primary contractor, the agency also conducts some monitoring of CSRs on its own. Like the primary contractor, CMS occasionally uses blind monitoring to evaluate the performance of individual CSRs—listening to real-time calls and watching the scripts and other materials the CSRs use. CMS does not routinely conduct blind monitoring or document the results, and therefore CMS staff could not provide us with any information on the extent of this monitoring. According to a CMS official, the agency conducts blind monitoring on a "limited and as-needed" basis. Although CMS's primary 1-800-MEDICARE contractor monitors the accuracy of individual CSRs, CMS does not use the regular monthly monitoring to identify trends in inaccurate responses by question. Specifically, the primary contractor does not routinely classify or categorize CSRs' answers by specific question to identify the questions that collectively were answered less accurately. While routine information about a question's accuracy rate could be used to target improvement efforts, CMS has only taken this approach twice in recent years. Both of these efforts were small compared to the primary contractor's monitoring. 
The larger effort to monitor accuracy by question lasted 29 months and involved 300 calls a month, whereas the primary contractor evaluated about 7,350 calls in July 2004. CMS contracted for a study to evaluate the 1-800-MEDICARE help line's accuracy in answering specific questions, but did not receive results quickly enough to immediately address problems. From January 2002 until May 2004, the CMS contractor hired to assess the "Medicare & You" program placed about 300 calls per month to the 1-800-MEDICARE help line. These callers used a set of hypothetical scenarios to assess how specific questions were answered. This study also established criteria specifying the information an accurate answer should provide and made a distinction between fully responsive answers—in other words, complete and accurate answers—and partially responsive answers—answers that were not complete but provided some accurate information. For the first 19 months studied, the average percentage of calls that received fully responsive answers ranged from under 40 percent to over 90 percent, depending on the question and the period of time studied. The study helped CMS identify questions that the CSRs were answering less accurately. However, CMS staff told us that the agency received the reports 4 to 5 months after monitoring occurred, which did not allow CMS to immediately address any identified problems. Nevertheless, CMS staff indicated to us that the results of these evaluations were used to identify areas where CSR refresher training was needed. Due to funding constraints, this project was terminated in May 2004. CMS told us it planned to resume a similar project in November 2004 around the time when the next cycle for the "Medicare & You" program contract begins. In another study, CMS measured 1-800-MEDICARE's accuracy by question rather than by individual CSR and found that certain questions were answered more accurately than others. 
In April 2004, a private consultant group that was under contract with CMS placed 49 calls to 1-800-MEDICARE to determine whether CSRs were relaying accurate information about Medicare's new prescription drug discount card and benefit. The study established specific criteria on the information that CSRs should include for an answer to be accurate. Evaluating accuracy by question, the study found that CSRs accurately answered between 39 percent and 69 percent of the questions asked about the new Medicare prescription drug discount card and benefit. In contrast, in that same month the subcontractor—using its evaluation checklist—determined the overall accuracy rate for the calls it measured on all topics to be about 94 percent. As a result of this study, CMS improved its scripts and related training. The private consultant's report indicated that CSRs were having difficulty distinguishing between the Medicare prescription drug coverage benefit that will be in effect in 2006 and the Medicare prescription drug discount card that is currently available. CMS responded by clarifying the scripts used to answer these questions and improving the related materials used to train CSRs. For example, CMS worked with the contractor to rename the titles of the different scripts to include the term "benefit" or "card" as a method of differentiating between them. When the subcontractor noted about 3 months later that a few CSRs were continuing to confuse the prescription drug discount card and prescription drug benefit, CMS further clarified the scripts and its primary contractor conducted additional refresher training to attempt to correct the problem. Each year, millions of Medicare beneficiaries, their family members, and other callers rely on the 1-800-MEDICARE help line for information. Providing them with accurate answers is critical to keeping them informed about Medicare's benefits. 
However, we found that 6 out of 10 calls were answered accurately, 3 out of 10 calls were answered inaccurately, and we were not able to get a response for 1 out of 10 calls. To answer inquiries accurately, CSRs have to be able to effectively access and use scripts. Given the lack of prior Medicare knowledge among CSRs, the 1-800-MEDICARE help line's script-based approach is a reasonable means to facilitate accurate and consistent responses to callers' questions. However, this approach makes CSRs—and thus the help line they support—dependent on the clarity and accuracy of the scripts available. Pretesting scripts might have identified ones that were difficult for either CSRs or potential callers to understand, but this is not routinely done. Further, the training that CSRs receive on using scripts is also essential to their ability to answer questions accurately. However, the written exam that newly hired CSRs must pass and the continuing training quizzes do not measure the ability to use information in scripts to provide accurate and complete answers on the help line. Monitoring the help line could identify areas where CSRs' knowledge and skills are lacking. Although CMS ensures that the amount of the contractor's monitoring per CSR falls within industry standards, the bulk of the monitoring methods are not designed to systematically assess how accurately CSRs as a group answer particular questions. Evaluating how accurately particular questions are addressed is an important step to improving scripts and CSR training for those topics. Finally, 1-800-MEDICARE is advertised as providing information 24 hours a day, 7 days a week, but we could not always obtain answers to our questions when we called. When we called with questions about Medicare payment for power wheelchairs and coverage of eye exams and glasses, the help line frequently transferred our calls to claims administration contractors that were closed at the time. 
For a third of these transferred calls, we were not given a call-back number. This practice of transferring calls to claims administration contractors that are closed, in effect, reduces the benefit of a 24-hour help line to a business-hour help line for many beneficiaries. In order to improve the accuracy of responses made on the 1-800-MEDICARE help line and callers' ability to have their questions answered, we recommend that the CMS Administrator take four actions: (1) assess the current scripts for the most commonly asked questions to ensure that they are understandable to CSRs and potential callers and, if not, revise them as needed, and pretest new and revised scripts to ensure that CSRs can effectively use them to accurately answer callers' questions; (2) enhance testing of CSRs' skills in accurately answering the most commonly asked questions using scripts and, if needed, provide additional training to improve the accuracy and completeness of their responses; (3) supplement current monitoring efforts to include a systematic review of the accuracy of information provided by the CSRs as a group for the most frequently asked questions and use the results to modify scripts or provide more training, as needed; and (4) revise routing procedures and technology to ensure that calls are not transferred or referred to claims administration contractors during nonbusiness hours. In its written comments on a draft of this report, which are reprinted in appendix VIII, CMS agreed with our recommendations and stated that it had begun several efforts to address them. CMS also provided more detail on the challenges it faced in administering 1-800-MEDICARE due to the massive increase in call volume that occurred after the passage of the MMA. CMS agreed with our recommendation to assess current scripts and pretest new and revised scripts to ensure that they are understandable. 
In its comments, CMS stated that the written information used to develop 1-800-MEDICARE scripts often comes from Medicare publications that have been consumer tested as part of the publication preparation process. Language that has undergone some consumer testing is often incorporated into the scripts to improve clarity. While this step may be helpful, we believe that pretesting scripts verbally is also important, as consumer testing of material intended for written publication may not be adequate to determine whether the scripts are understandable to CSRs and the public. CMS also stated that it is considering implementing an editorial board to review scripts, which we believe would be another positive step to help assure the scripts' clarity. CMS agreed with our recommendation to enhance testing of CSRs' ability to accurately answer questions and provide additional training, as needed. CMS indicated that it was reassessing its testing requirements to determine better ways to ensure that CSRs are prepared to handle calls, once they are certified. The agency stated that it planned to benchmark its efforts against industry standards to determine more effective approaches. In its comments, CMS expressed concern that we had characterized customer service skills as less meaningful than knowledge skills. While we believe both are important, in keeping with our congressional mandate, this report focused on the accuracy of information provided by 1-800-MEDICARE and did not address the quality of its CSRs' customer service skills. CMS agreed with our recommendation to supplement its current monitoring efforts by including a systematic review of the accuracy of information provided by CSRs as a group for the most frequently asked questions and using the results to modify scripts and provide more training, as needed. 
CMS indicated that it believed it had done a good job developing a quality assurance program that focuses on the most important requirements for both accuracy and customer service skills needed to answer calls from the elderly population. We agree that CMS has focused on quality assurance for 1-800-MEDICARE. Our recommendation did not deal with changing, but with enhancing, its quality assurance efforts. To address our recommendation, CMS indicated that it would implement a plan to develop trend information on the results of its quality assurance activities and would focus on improving the accuracy of responses to frequently asked questions. CMS agreed with our recommendation to revise procedures so that calls are not transferred to claims administration contractors during their nonbusiness hours. The agency indicated that although the 1-800-MEDICARE CSRs are available 24 hours a day, 7 days a week, they do not have access to claim-specific information. Therefore, the 1-800-MEDICARE CSRs would have to direct callers asking questions about specific claims to contact the claims administration contractors during their normal business hours. Our report focused on questions for which the CSRs had scripted responses and did not need to access claim-specific information. Nevertheless, for 7 of the 21 calls that were routed to claims administration contractors that were closed at the time we called, the contractors' recorded messages did not provide a telephone number to call back during stated business hours. In its comments, CMS indicated that it had implemented additional routing plans to address this problem and is expanding access to claims data that will help reduce this problem in the future. CMS also raised concerns that we did not release the detailed audit documentation on our test calls while our work was still in progress. GAO's policy does not allow us to provide audit documentation to an agency while work is ongoing. 
At the time of CMS’s request, we described our policy and offered to verbally provide more detail on telephone disconnections, but CMS did not follow up with us to obtain this information. After our report is published, we will address this request. Finally, CMS expressed concern that we did not describe the criteria we used to evaluate the accuracy of responses to our six questions and stated that incomplete answers should not be considered inaccurate responses. As noted in the draft report, table 1 lists the criteria that we established for each of our six questions. We developed these criteria so that we could objectively evaluate responses received from CSRs. For four of these questions (numbers 1, 2, 3, and 6), there was only one element in the correct response, so an incomplete response was not possible. For the remaining two questions, we considered the answer accurate if it included two elements. We believe that by not including both elements for each of these questions, callers would be left with a false impression, rather than with an accurate answer. For example, in evaluating the response to the question of whether Medicare would pay for a power wheelchair, we thought it was important for the caller to know that (1) the wheelchair needed to be prescribed by a physician and (2) the beneficiary would be responsible for a copayment. Because the copayment for a power wheelchair is at least $1,000, we believe that it would be misleading not to mention either a copayment or cost sharing when a caller asks whether Medicare pays for this item. Likewise, needing to have a physician prescribe the power wheelchair is a Medicare requirement, and we did not think a response could be accurate without mentioning it. 
For the question on Medicare part B enrollment, we thought that it was important to know that a beneficiary could wait to enroll but, once other health insurance coverage ended, had a limited time period to enroll in part B without incurring higher premiums. Without knowing both elements of this answer, beneficiaries would not have enough information to guide their decision on part B enrollment and, therefore, the answer provided would be misleading. We are sending copies of this report to the Administrator of CMS, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me on (312) 220-7600. An additional GAO contact and other staff who made contributions to this report are listed in appendix IX. To determine the accuracy of information provided, we placed a total of 420 calls to the 1-800-MEDICARE help line. We made 420 calls in order to have a sample that was large enough to determine if differences in accuracy were significant. We selected six questions about Medicare— three related to the Medicare prescription drug discount card and three related to Medicare coverage or eligibility for benefits. We asked each of the selected questions a total of 70 times. We randomly placed calls at different times of the day and different days of the week between July 8 and July 30, 2004, to match the daily and hourly pattern of calls reported by 1-800-MEDICARE in April 2004. To select the 6 questions, we initially chose 20 questions that related to the 100 topics most frequently addressed by the 1-800-MEDICARE help line’s CSRs in May 2004 and developed criteria for an accurate response from information on the Medicare Web site’s frequently asked questions section. 
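As a rough illustration of why samples of this size can distinguish differences in accuracy across questions, the normal-approximation margin of error for an observed proportion can be sketched as follows. This is an illustrative calculation, not the one used in the study; the function name and the worst-case assumption of p = 0.5 are ours:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a normal-approximation 95 percent confidence
    interval for an observed proportion p based on n calls."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) gives the widest interval.
per_question = margin_of_error(0.5, 70)   # one question, 70 calls
overall = margin_of_error(0.5, 420)       # all six questions, 420 calls
print(f"per question: +/-{per_question:.3f}")  # +/-0.117
print(f"overall: +/-{overall:.3f}")            # +/-0.048
```

With 70 calls per question, an observed accuracy rate is resolved to within roughly plus or minus 12 percentage points in the worst case, which is enough to separate, for example, a question answered accurately about 90 percent of the time from one answered accurately less than half the time.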
We then presented the 20 questions and answers to Centers for Medicare & Medicaid Services (CMS) officials, who provided us with a script number and text for each question. CMS officials did not object to using any of the 6 questions that we ultimately chose, or suggest that the answers that we had provided for these questions were incorrect. We informed both CMS and one of the 1-800-MEDICARE contractors that we would be placing these calls. However, we did not tell them which 6 of the 20 questions we selected, or the specific dates and times when we would be placing our calls. Before placing our calls, we created a scenario with fictional names and zip codes for each of the six questions to make them sound more realistic. (Appendices II to VII contain the scenarios that we used.) We made pretest calls for each question before we finalized its wording. During our actual calls, the CSRs were not aware that their responses would be included in a research study. We recorded the length of, and routing process for, each call. We evaluated the accuracy of the responses by CSRs to the 420 calls we placed by whether key information was provided. The results from our 420 calls are limited only to those calls and are not generalizable to the population of calls routinely made to call centers by beneficiaries or other callers. Although the six questions we posed were among topics most often accessed by CSRs, they do not encompass all of the questions callers might ask. In addition, we did not verify the reliability of CMS’s monitoring data. To examine the training provided to 1-800-MEDICARE CSRs, we interviewed officials representing CMS and the 1-800-MEDICARE contractor responsible for training CSRs. We reviewed the primary contractor’s training requirements and the instructional materials that are used to educate new CSRs. We also observed a training session for new CSRs at a 1-800-MEDICARE call center. 
In addition, we reviewed previous GAO reports on the operations of other help lines, including the training provided to the CSRs answering calls on the Internal Revenue Service's help line. To evaluate CMS's role in overseeing the accuracy of information provided through the 1-800-MEDICARE help line, we interviewed officials from CMS and one of the two 1-800-MEDICARE contractors about their monitoring and oversight activities. For our first objective, we focused on the accuracy of information provided by CSRs, regardless of which contractor managed their work. For our other two objectives, we relied on information provided by one of the two contractors—which we refer to as the primary contractor. We also identified CMS requirements for call center operations and reviewed contractor reports to identify the types of problems encountered through the help line. We performed our work from May 2004 through December 2004 in accordance with generally accepted government auditing standards.

For question 1, which is about choosing a prescription drug discount card, we used scenarios with different combinations of prescription drugs and one of four different locations, in order to ensure our anonymity. If a CSR named one of the two prescription drug discount cards with the lowest cost for the combination of prescription drugs in the scenario posed, we considered it to be a correct response to our question. To ensure that we obtained the correct answer for each question, we periodically checked the prescription drug prices using the prescription drug tool on the Medicare.gov site. This is the same tool CSRs used to answer our questions. The answers shown in this appendix were accurate as of July 15, 2004.

Question 1a posed to CSRs: My father-in-law lives in Wayne, Pennsylvania, and wants to continue to shop at Yorke Apothecary (located at 110 S. Wayne Ave., Wayne, Pennsylvania, 19087). 
What drug card can he get that will cover all of his drugs at Yorke Apothecary, and cost the least amount? He takes the following drugs:

Other information to provide to the CSR if asked: He is single. He lives in Wayne, Pennsylvania, in Delaware County. His zip code is 19087. He currently has fee-for-service Medicare with no other drug benefits. He does not use an American Indian Health pharmacy. He does not live in a long-term care facility. He has $20,000 in annual income and is not interested in any drug assistance programs, including the $600 credit. His sources of income are a pension and Social Security, but the amount from each is unknown. He has some bank accounts, but their value is unknown. The amount he currently pays for drugs is unknown.

Default answer for other questions: "I don't know."

Two prescription drug discount cards listed on Medicare.gov with the lowest prices for the combination of drugs in our scenario:
myPharmaCare Prescription Drug Discount Card, 1-800-601-3002. Monthly drug costs: $116.69. Annual enrollment fee: $25.00.
U Share Prescription Drug Discount Card, 1-800-707-3914. Monthly drug costs: $115.59. Annual enrollment fee: $19.95.

Question 1b posed to CSRs: My father lives in Homewood, Illinois, and wants to continue to shop at the K-Mart Pharmacy in Homewood, Illinois (located at 17550 Halsted Rd., Homewood, Illinois, 60430). What drug card can he get that will cover all of his drugs at the K-Mart Pharmacy, and cost the least amount? He takes the following drugs:

Other information to provide to the CSR if asked: He is single. He lives in Homewood, Illinois, in Cook County. His zip code is 60430. He currently has fee-for-service Medicare with no other drug benefits. He does not use an American Indian Health pharmacy. He does not live in a long-term care facility. He has $20,000 in annual income and is not interested in any drug assistance programs, including the $600 credit. 
His sources of income are a pension and Social Security, but the amount from each is unknown. He has some bank accounts, but their value is unknown. The amount he currently pays for drugs is unknown.

Default answer for other questions: "I don't know."

Two prescription drug discount cards listed on Medicare.gov with the lowest prices for the combination of drugs in our scenario:
U Share Prescription Drug Discount Card, 1-800-707-3914. Monthly drug costs: $174.91. Annual enrollment fee: $19.95.
Any of several prescription drug discount cards available with this combination of drugs priced at $182.80.

Question 1c posed to CSRs: My father lives in Cincinnati, Ohio, and wants to continue to shop at the CVS Pharmacy (located at 3195 Linwood Ave., Cincinnati, Ohio). What drug card can he get that will cover all of his drugs at the CVS Pharmacy, and cost the least amount? He takes the following drugs:

Other information to provide to the CSR if asked: He is single. He lives in Cincinnati, Ohio, in Hamilton County. His zip code is 45226. He currently has fee-for-service Medicare with no other drug benefits. He does not use an American Indian Health pharmacy. He does not live in a long-term care facility. He has $20,000 in annual income and is not interested in any drug assistance programs, including the $600 credit. His sources of income are a pension and Social Security, but the amount from each is unknown. He has some bank accounts, but their value is unknown. The amount he currently pays for drugs is unknown. 
Default answer for other questions: "I don't know."

Two prescription drug discount cards listed on Medicare.gov with the lowest prices for the combination of drugs in our scenario:
myPharmaCare Prescription Drug Discount Card, 1-800-601-3002. Monthly drug costs: $202.79. Annual enrollment fee: $25.00.
Anthem Prescription Drug Discount Card, 1-800-730-2804. Monthly drug costs: $209.87. Annual enrollment fee: $14.95.

Question 1d posed to CSRs: My father lives in Brooklyn, New York, and wants to continue to shop at the Neergaard Pharmacy (located at 454 Fifth Avenue, in Brooklyn, New York). What drug card can he get that will cover all of his drugs at the Neergaard Pharmacy, and cost the least amount? He takes the following drugs:

Other information to provide to the CSR if asked: He is single. He lives in Brooklyn, New York, in Kings County. His zip code is 11215. He currently has fee-for-service Medicare with no other drug benefits. He does not use an American Indian Health pharmacy. He does not live in a long-term care facility. He has $20,000 in annual income and is not interested in any drug assistance programs, including the $600 credit. His sources of income are a pension and Social Security, but the amount from each is unknown. He has some bank accounts, but their value is unknown. The amount he currently pays for drugs is unknown.

Default answer for other questions: "I don't know."

Two prescription drug discount cards listed on Medicare.gov with the lowest prices for the combination of drugs in our scenario:
EnvisionRx Plus Prescription Drug Discount Card, 1-866-250-2005. Monthly drug costs: $46.33. Annual enrollment fee: $30.00.
Any of several prescription drug discount cards available with this combination of drugs priced at $50.45.

Question 2 posed to CSRs: I've heard about the $600 credit that can help pay for prescriptions and wanted to know if my mother was eligible for it. Could she qualify for the credit? I know she has three sources of income. 
She has about $765 per month from Social Security. She also gets $250 each month in rental income from the apartment in the downstairs part of her house. She has a tenant that pays rent to her. She’s also getting a payout from my father’s life insurance policy of $70 each month.

Other information to provide to the CSR if asked:
She is single and lives alone.
She only has fee-for-service Medicare as health insurance.
She owns her house.
She lives in Miami, Florida, 33129.
Default answer for other questions: “I don’t know.”

Information from Medicare.gov that GAO used to develop accuracy criteria: If your annual gross income is currently no more than $12,569 ($1,048 per month) as a single person or no more than $16,862 ($1,406 per month) for a married couple, you might qualify for a $600 credit to help pay for your prescription drugs and Medicare may pay your annual enrollment fee. If you and your spouse both qualify for the credit, the credit will be put on each of your cards. TRICARE for Life provides secondary military health coverage available for Medicare-eligible uniformed services beneficiaries, their eligible family members, and survivors enrolled in Medicare part B.

The following sources of income should be included when calculating your gross income for your $600 credit enrollment form:
Employee compensation (salary, wages, tips, bonuses, awards, etc.)
Unemployment compensation
Pensions and annuities
Social Security benefits (including Social Security Equivalent portion of Railroad Retirement)
Railroad Retirement benefits
Veterans Affairs benefits
Military and government disability pensions – armed forces, Public Health Service, National Oceanic and Atmospheric Administration, Foreign Service (based on date pension began, combat-related pension, etc.)
Individual Retirement Account distributions
Interest (savings accounts, checking accounts, etc.)
Ordinary dividends (stocks, bonds, etc.)
Refunds, credits, or offsets of state and local income taxes
Alimony received
Business income
Capital gains
Farm income
Rental real estate, royalties, partnerships, trusts, etc.
Other gains (sale or exchange of business property)
Other income (lottery winnings, awards, prizes, raffles, etc.)

The following sources of income should not be included when calculating your income for the $600 credit enrollment form:
Inheritances and gifts (taxed to estate or giver if not under limits for exemption)
Interest on state and local government obligations (e.g., bonds)
Workers compensation payments
Federal Employees Compensation Act payments
Supplemental Security Income benefits
Income from national senior service corps programs
Public welfare and other public assistance benefits
Proceeds from sale of a home
Lump sum life insurance benefits paid upon death of insured
Life insurance benefits paid in installments
Accelerated life insurance death benefit payments (e.g., viatical settlements, terminal illness, chronic illness)

Question 3 posed to CSRs: I’m calling with a question about my grandmother. She is 69 and she has Medicare, and she also has a Medigap policy. Could you please tell me if she can still get a Medicare-approved drug discount card?

Other information to provide to the CSR if asked:
She is single and lives alone.
She lives in Miami, Florida. I don’t know the zip code off-hand.
She is not in a long-term care facility.
She is enrolled only in Medicare fee-for-service. She doesn’t have a Medicare managed care plan.
She is not enrolled in Medicaid.
I don’t think she’s interested in the $600 credit right now; I was just wondering if she could get the prescription drug discount card.
Default answer for other questions: “I don’t know.”

Information from Medicare.gov that GAO used to develop accuracy criteria: Having a Medigap policy does not preclude a Medicare beneficiary from being eligible for a Medicare prescription drug discount card.
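The gross-income test for the $600 credit (question 2) reduces to a rule-based calculation: sum only the countable income sources and compare the total with the applicable monthly limit. The sketch below is illustrative only; the function names, the source labels, and the handful of income categories encoded are our own shorthand, not part of Medicare's materials, and only the dollar figures from the question 2 scenario and the quoted Medicare.gov limits are taken from this appendix.

```python
# Illustrative sketch of the $600 credit gross-income test described above.
# The limits come from the Medicare.gov criteria quoted in this appendix;
# the source labels and function names are hypothetical shorthand.

SINGLE_MONTHLY_LIMIT = 1048.00   # $12,569 per year for a single person
MARRIED_MONTHLY_LIMIT = 1406.00  # $16,862 per year for a married couple

# A few of the income categories the criteria say to include.
# Life insurance benefits paid in installments are on the excluded list,
# so that label simply never appears here.
COUNTED = {"social_security", "rental_income", "pension", "wages"}

def countable_income(sources: dict) -> float:
    """Sum only the income sources that count toward gross income."""
    return sum(amount for source, amount in sources.items() if source in COUNTED)

def qualifies_for_credit(sources: dict, married: bool = False) -> bool:
    """True if countable monthly income is within the applicable limit."""
    limit = MARRIED_MONTHLY_LIMIT if married else SINGLE_MONTHLY_LIMIT
    return countable_income(sources) <= limit

# The question 2 scenario: Social Security and rental income count;
# the $70 monthly life insurance installment does not.
mother = {
    "social_security": 765.00,
    "rental_income": 250.00,
    "life_insurance_installments": 70.00,
}
print(countable_income(mother))      # 1015.0
print(qualifies_for_credit(mother))  # True
```

Under this reading of the criteria, the caller's mother would qualify: her countable income of $1,015 per month falls under the $1,048 single-person limit, even though her total monthly receipts of $1,085 would not.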
For question 4 about power wheelchairs, we provided the CSRs with one of four different city and state combinations, as shown below. The four city and state combinations were randomly assigned to different power wheelchair calls. We did this to ensure that if our call was transferred to one of the four claims administrator contractors that administer Medicare’s durable medical equipment claims—including power wheelchair claims—we were not biasing our results toward any particular claims administrator.

Question 4 posed to CSRs: My father is having trouble getting around. He has a hard time walking and doesn’t have much upper body strength. Could you please tell me if Medicare will pay for a power wheelchair for him?

Other information to provide to the CSR if asked:
He is enrolled in Medicare, both parts A and B.
He lives in one of the following:
Philadelphia, Pennsylvania. His zip code is 19105.
Detroit, Michigan. His zip code is 48209.
Pensacola, Florida. His zip code is 32516.
Scottsdale, Arizona. His zip code is 85262.
His doctor has suggested he get a power wheelchair to improve his mobility.
He doesn’t have enough strength to use a manual wheelchair.
He lives alone and is not married.
Default answer for other questions: “I don’t know.”

Information from Medicare.gov that GAO used to develop accuracy criteria: Power wheelchairs and/or scooters are covered if they are medically necessary based on Medicare’s criteria for coverage. In order for Medicare to cover a power wheelchair/scooter, the beneficiary’s doctor must provide a prescription or certificate of medical necessity that states that he needs it because of his medical condition. If your father qualifies for coverage, Medicare will pay 80 percent of the Medicare-allowed amount.

Question 5 posed to CSRs: Should my husband sign up for part B if I am still working and we have health insurance coverage from my employer?

Other information to provide to the CSR if asked: My husband is about to turn 65 next January.
If asked whether working for a large or small employer: I work for the federal government.
I have full medical coverage, including dental and vision. My husband is fully covered under my insurance plan.
Neither of us is disabled.
The city/zip code information that corresponds with the location of the caller.
Default answer for other questions: “I don’t know.”

Information from Medicare.gov that GAO used to develop accuracy criteria: Your husband might want to wait to sign up for part B, because he would have to pay the monthly part B premium and the benefits may be of limited value as long as the group health plan is the primary payer. You could save on monthly premiums by waiting to sign up. If your husband doesn’t sign up for part B when first eligible because he has group health coverage through an employer, he can sign up for Medicare part B during a special enrollment period. This can be anytime he is still covered by the employer’s group health plan or during the 8 months following the month when either the coverage or the employment ends—whichever is first. Most people who sign up for Medicare part B benefits during a special enrollment period do not pay higher premiums.

Question 6 posed to CSRs: My mother is 66 and is enrolled in Medicare. She has been complaining lately that she is having trouble reading the paper and thinks she may need new eyeglasses. Will Medicare pay for an eye exam and a new pair of eyeglasses if her prescription has changed?

Other information to provide to the CSR if asked:
The city/zip code information that corresponds with the location of the caller.
She is not married.
She is enrolled in Medicare fee-for-service only.
I do not know the name of the county she lives in.
Default answer for other questions: “I don’t know.”

Information from Medicare.gov that GAO used to develop accuracy criteria: Medicare does not pay for routine eye exams, eyeglasses, or contact lenses. The beneficiary must pay 100 percent of these services.
Shaunessye D. Curry, Joy L. Kraybill, Krister P. Friday, Sari B. Shuman, Mary W. Reich, Ramsey L. Asaly, Alexis Chaudron, Perry G. Parsons, and Leslie Spangler made key contributions to this report.
In March 1999, the Centers for Medicare & Medicaid Services (CMS) implemented a telephone help line--1-800-MEDICARE--to provide information about program eligibility, enrollment, and benefits. The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) directed GAO to examine several issues related to this 24-hour help line and the customer service representatives (CSRs) who staff it. In this report, GAO evaluated (1) the accuracy of the information the help line provides, (2) the training given to CSRs, and (3) CMS's efforts to monitor the accuracy of information provided through the help line. The 1-800-MEDICARE help line provided accurate answers to 61 percent of the 420 calls we made and inaccurate answers to 29 percent. We were not able to obtain any answers for the remaining 10 percent of our calls at the time we placed them. Most of these calls went unanswered because they were transferred to other contractors (those responsible for processing Medicare claims) that were not open for business at the time we called, or because the calls were inadvertently disconnected. To facilitate accurate responses, the 1-800-MEDICARE help line provides CSRs with written answers--called "scripts"--that CSRs use during a call. When CSRs provided inaccurate information, it was largely because they did not seem to access and effectively use a script that answered our questions. CMS and its contractor do not routinely pretest the scripts to ensure that they are understandable to CSRs or potential callers. The training for CSRs meets CMS's requirements, but it is not sufficient to ensure that CSRs are able to answer questions accurately on the help line. Before handling calls, CSRs must complete about 2 weeks of classroom training; correctly answer two consecutive simulated calls out of six; and score at least 90 percent on a written exam. In addition, all CSRs receive ongoing training.
However, the results from our calls indicate that the testing and simulated call answering did not sufficiently measure whether CSRs were prepared to answer questions accurately. CMS delegates most accuracy monitoring to one of its contractors and reviews the results. The bulk of the monitoring focuses on how accurately individual CSRs answer questions. However, this monitoring does not systematically track questions answered inaccurately by CSRs as a group, which could help target training and script improvement. Through two smaller studies that measured how accurately specific questions were answered, CMS was able to identify areas to improve scripts and training.
Projects funded under the military construction appropriation generally cost over $300,000 and produce complete and usable new facilities or improvements to existing facilities. The Army Corps of Engineers and the Naval Facilities Engineering Command manage the design of all service construction projects; each service verifies that the project designs are at least 35 percent complete when submitted to Congress for funding. Congress appropriates 5-year funds for construction projects. The Office of the Secretary of Defense issues planning guidance to identify, prioritize, and fund construction projects. The military services and the Defense Logistics Agency (DLA) justify selected construction projects based on the need to comply with environmental laws and regulations. Although environmental military construction projects compete with other military construction projects for funding, DOD gives additional priority to those environmental projects that are to correct problems that do or will soon result in noncompliance with the requirements. Between fiscal years 1994 and 1996, DOD will have funded $689 million in environmental compliance construction projects. Figure 1 shows the funding and the types of construction projects executed during that time, and appendix I provides details on projects and their costs for the services, including the Air National Guard and the Air Force Reserve. In November 1993, we reported that the services’ processes for identifying, classifying, and funding environmental compliance projects varied. We stated that more consistent processes would help ensure that needs and costs were identified and ranked so that DOD and Congress could oversee trade-offs in funding and minimize inequities among the services’ projects. We recommended that DOD guidance specify how the services should report costs related to environmental compliance construction and determine which appropriation would provide funds. 
The services have taken initiatives to improve their programming and oversight of environmental construction projects. The Army is moving toward more centralization in the management of its military construction priorities to promote oversight of construction-related environmental issues on an Army-wide basis. The Air Force now requires its commands to prioritize and consolidate environmental compliance construction projects with other military construction projects, and has instituted an integrated process team at the Air Staff level to review military construction requirements during the budgeting and programming process. The Marine Corps is updating its environmental compliance tracking system to more easily identify environmental compliance and other environmental projects, and the Navy created a single-source headquarters sponsor for construction projects. In addition, the Naval Audit Service annually reviews the Navy’s and the Marine Corps’ proposed military construction projects. At the installation level, each of the services has formed working groups and committees to work with Environmental Protection Agency (EPA) and state and local representatives to better identify project requirements. Despite these actions, DOD still has not issued specific guidance on how the services should program and report costs related to environmental compliance construction projects and how they should determine which appropriations should be used to fund the projects. Consequently, the services continue to inconsistently program and report environmental compliance construction projects. One inconsistency is the manner in which the services justify projects that are to be funded within the military construction appropriation. In fiscal years 1994 and 1995, the Air Force funded about $10 million for hydrant fuel systems improvements with environmental compliance as justification for priority. 
Hydrant fuel systems consist of pressurized underground piping used to fuel various-sized aircraft. A 1995 Kelly Air Force Base, Texas, project was funded to comply with a state enforcement order to install leak detection and prevention equipment. On the other hand, DLA justifies its hydrant fueling systems based on mission-related requirements, but notes that the systems have environmental compliance aspects. DLA plans to spend $48 million in fiscal year 1996 military construction funds for these systems and $75 million in fiscal year 1997 funds for similar projects. This inconsistency may be minimized in the future because DLA's Defense Fuel Supply Center is now responsible for sponsoring all fuel-related military construction. The Navy classified the construction of a Patuxent River, Maryland, hazardous material storehouse as an environmental compliance project and spent $3 million in fiscal year 1994 for the facility. Such storehouses are generally required for the safe storage and efficient processing of hazardous materials used by base and tenant activities. Under Air Force policy, similar projects should not be funded as environmental compliance projects. The Army's hazardous material storage projects, as we discussed in our 1993 report, are managed by its logistics experts rather than by the environmental engineers who manage most environmental functions, and the projects are not justified or prioritized as compliance projects. The services justify as mission-related other projects that must comply with regulatory requirements. For example, the Marine Corps is requesting $13 million in fiscal year 1997 military construction funds for the construction of a mission-related corrosion control facility at New River, North Carolina. Such facilities are constructed to allow functional and environmentally safe paint stripping and application to control corrosion on various aircraft.
The Marine Corps is constructing the facility to reduce air pollution and provide work areas that comply with requirements of the Clean Air Act and Occupational Safety and Health regulations. A Marine Corps official told us the project could be justified as either mission-related or environmental compliance. Another official told us that safety is the driving factor. Supporting documentation for the project shows both safety and environmental compliance requirements. The Air Force is funding similar projects as either environmental or mission-related. The Air Force was appropriated military construction funds for a fiscal year 1996 corrosion control facility at Davis-Monthan Air Force Base, Arizona, which it justified as environmental compliance. At Tinker Air Force Base, Oklahoma, a similar project is being requested as mission-related, although supporting documentation indicates the project is also required to comply with regulatory requirements. Tinker officials had proposed the project to be justified as environmental compliance to meet Clean Air Act requirements, but Air Force Materiel Command officials believed the existing facility could be modified to meet emissions requirements, and that the project was justified based on Tinker’s large paint workload. In discussing this issue, Air Force officials emphasized that while the project had environmental compliance aspects, the increased stripping and painting requirements drove the need to classify the project as mission-related. Another inconsistency among the services involves how the projects are designed, which in turn affects whether projects are funded with military construction funds or from the operations and maintenance appropriation. 
In this regard, while large projects are funded from the military construction appropriation, smaller-scope minor construction projects (less than $300,000) can be funded with operation and maintenance funds or with minor construction funds that are managed by the installation. We found that the services sometimes design seemingly similar projects differently, resulting in different prioritization and funding of the projects. The Air Force obligated over $47 million in fiscal years 1994 and 1995 military construction funds for 34 underground fuel storage tank projects. Environmentally safe storage tanks are required to ensure continued operating storage of petroleum products and other environmentally controlled substances used to support the operation of such things as depot and base shops, electric generators, and gas stations. Air Force installations bundled together a number of individual tank projects to create single projects that would meet the $300,000 minimum for construction funding. For example, Tinker Air Force Base alone bundled together 78 individual tank upgrades to create a single construction project. During fiscal years 1994 and 1995, the Army obligated $80 million in operation and maintenance funds to upgrade and construct underground storage tanks similar to those of the Air Force to comply with environmental laws and regulations. For example, Fort Bliss obligated $1.4 million in fiscal year 1995 operation and maintenance funds to replace a number of underground storage tanks; it plans to spend $1.2 million in fiscal year 1996 operation and maintenance funds to replace and upgrade additional tanks. The Army plans to spend an additional $61 million in fiscal year 1996 operation and maintenance funds and $47 million in fiscal year 1997 operation and maintenance funds for the construction of tanks. We also found another example of project design and funding flexibility at Tinker Air Force Base.
The Air Force eliminated a fiscal year 1996 storm drainage project at Tinker from its environmental compliance construction estimate. Officials determined the project would not receive a high enough priority if funded with military construction funds. Instead, Tinker officials told us they plan to divide the project into smaller units and fund them from the operation and maintenance appropriation. Services also fund projects in phases using the same appropriation. Officials believe this funding method helps ensure the funding of costlier projects. Such funding methods can minimize the apparent total cost of the project when supporting documentation for each phase does not identify the total project cost. The Marine Corps is funding a $77-million military construction wastewater treatment plant upgrade at Camp LeJeune, North Carolina, in three distinct phases in fiscal years 1994, 1996, and 1997. Officials stated they selected this funding method because they believed the project would more likely receive funding if it was submitted in complete and usable increments, rather than as a total package. The Marine Corps could not afford to fund such a large project in a single year because of fiscal constraints. Supporting budget documentation submitted to Congress identified each phase of the wastewater project but did not include the total cost of all project phases. The Navy is funding a $24-million military construction oily waste collection system at the Norfolk Naval Station, Virginia, in two distinct phases beginning with fiscal year 1996. The project is being constructed under a consent agreement with the local community. The Navy requires $12.2 million in fiscal year 1996 funds and is planning to request an additional $11.5 million in future year funds. Officials at the Naval Facilities Engineering Command, Atlantic Division, told us phase II of the fiscal year 1997 project has been delayed, and is currently being considered for fiscal year 1998. 
Officials are considering the impact of other related projects, such as the installation of oil/water separators on aircraft carriers. Supporting budget documentation submitted to Congress identified phases but not total project costs for all phases. These inconsistencies and funding practices have continued to occur because DOD has not clarified its guidance to provide better definitions for the classification and prioritization of compliance projects. Stating the need for more consistency, DOD officials, as part of a 1995 environmental quality initiative, have issued fiscal years 1998-2003 annual programming guidance that is designed to better identify compliance costs. Officials believe the guidance will capture recurring costs associated with managing environmental programs such as manpower, training, and maintenance of environmental equipment. However, the guidance does not specify how the services will program and report compliance costs. Also, the guidance merges into one category projects that address existing noncompliance with projects that address future noncompliance. Such merging of previously distinct compliance categories would result in inconsistency with EPA definitions for compliance projects and would limit DOD’s ability to rank projects. Our 1993 report stemmed in part from congressional concern that the Air Force’s fiscal year 1993 budget request for environmental military construction was about twice as large as the other services’ requests combined. However, we found, during that review, that the Air Force funds most of its environmental compliance construction projects using military construction appropriations. The Army funds most of its environmental compliance construction projects with operation and maintenance appropriations. The Navy funds these projects using defense business operating funds and the Navy could not identify the source of appropriated funding used to reimburse the fund. 
Because of the variances in project definitions and funding sources, neither we nor DOD could compare the individual service programs. DOD’s data shows that the Air Force’s total environmental compliance cost was actually less than either the Army’s or the Navy’s. Figure 2 shows a decrease from 1993 to 1997 in DOD’s military construction funding to comply with environmental construction requirements. However, as we found in 1993, the costs are not representative of all environmental construction, since similar construction projects are also funded from other valid appropriations such as operation and maintenance and minor construction. DOD-wide estimates of fiscal year 1997 environmental compliance requirements to be funded under the military construction appropriation fell from $257 million in February 1995—when they were submitted to Congress as part of the fiscal year 1996/1997 biennial budget estimates—to $84 million in April 1996. However, neither we nor DOD could determine the extent of the reduction in the program from prior years because of continued inconsistencies in project definition (environmental or mission-related) and design (see pp. 4-8). Some reductions resulted from a lack of support for projects proposed in 1995 or decisions to fund at a later time. For example, the Air Force eliminated over $14 million of industrial wastewater pretreatment facilities at various installations because subsequent review at the major command level determined that support for the projects was inadequate. Officials at Langley Air Force Base, Virginia, also told us that they decided to reduce the generation of hazardous waste at the source. Air Force officials deferred two other military construction projects at Beale Air Force Base, California, and Dyess Air Force Base, Texas, to the future fiscal years’ environmental compliance program. 
Air Force data shows that the Air National Guard has removed a fiscal year 1997 underground storage replacement project from its military construction budget estimates, and the project may be funded with operation and maintenance funds. Other reductions can be attributed to reduced project scope resulting in lower estimates for individual projects. For example, the Navy reduced its $25.4 million estimate for an oily waste collection facility in San Diego, California, to $7.2 million based on a November 1994 Naval Audit Service report recommendation. Navy officials told us they are using a more effective, less costly method to treat the oily waste. In January 1996, the Naval Audit Service reported that the revised $7.2 million estimate was appropriate. However, in reviewing cost data provided by the Navy, we noted that the Navy’s current estimate for the project is still $24 million. Figure 3 shows a breakout of the $84 million estimate by service as of April 1996. DOD cannot adequately determine its environmental compliance construction needs and project priorities. The continuing lack of guidance and inconsistencies in the way DOD programs and funds projects inhibit DOD’s and Congress’ ability to provide overall management and effective program oversight. Given DOD’s response to our 1993 report that it believed more consistent guidance is unnecessary, the Subcommittee may wish to direct DOD to act now to ensure that projects are consistently funded and reported for the fiscal year 1998 budget submission to Congress or to no longer use environmental compliance to justify higher priority for military construction funding. In oral comments on a draft of this report, DOD officials generally agreed with our description of project funding and reporting. 
However, they did not agree with our findings and conclusion that more consistent guidance is needed to ensure that projects are consistently funded and reported, or with our related matter for congressional consideration. DOD officials stated that the environmental program, like other DOD programs, is integrated into the appropriations process in accordance with applicable law and guidance, and that commanders need the flexibility that the current congressional and DOD guidance provide in determining when it is appropriate to use operation and maintenance funds versus military construction funds for smaller projects. Officials suggested that the location and type of facilities frequently impact how the DOD components fund projects. For example, underground storage tanks collocated in a fuel farm or around an airfield may be more appropriately addressed as an entire area at one time, whereas tanks at a number of different sites could logically and legally be done with smaller projects, under either the military construction or operation and maintenance appropriation. Officials stated that while inappropriate classification of environmental projects is possible, it has not been a problem. We recognize the flexibility inherent in existing guidance concerning project design and funding. As stated in our 1993 report, however, our position is that DOD's guidance is not comprehensive and does not ensure consistency in implementation. These inconsistencies, which are demonstrated in the examples cited throughout our report, inhibit analyzing DOD-wide data and estimating future requirements. Also, officials stated that the slight change in EPA category definitions (discussed on pp. 7 and 8) more clearly demonstrates the funding priorities than treating all future requirements in a single category regardless of their immediacy. Officials stated that EPA staff have accepted DOD's changes.
With regard to compliance category definitions, we believe the changes are substantive and not slight as characterized by DOD. EPA’s category definitions distinguished among projects to address situations (1) already out of compliance, (2) to be out of compliance by the end of the current year, and (3) to be out of compliance in future years’ budgets. We agree that EPA has accepted DOD’s definition to include all three in one category for the purposes of DOD’s report to Congress. However, it obtained DOD agreement to provide additional supporting information on individual projects. That information would allow EPA to categorize DOD’s projects under EPA definitions. We are monitoring DOD’s implementation of its revised definitions for the requester of this report and other requesters. Technical corrections have been incorporated where appropriate. To obtain information on DOD’s and the military services’ programming processes, we held discussions and obtained information from officials in EPA and in headquarters and field offices of DOD, the Army, Navy, Air Force, Marine Corps, and DLA. We also reviewed pertinent documents, laws, and regulations. To obtain information on DOD’s and the military services’ environmental requirements and costs, we reviewed budget reports and submissions for fiscal years 1994 through 1997 and service cost data. We compared the fiscal year 1997 biennial estimates with DOD’s estimates as of February 1996, and updated the 1997 estimates as of April 1996. We relied on the accuracy of DOD’s data in conducting our analysis and selectively verified data for certain projects. 
We visited and obtained information at the following military installations and major commands: Fort Sill, Oklahoma; Training and Doctrine Command, Virginia; Naval Facilities Engineering Command, Atlantic Division, Virginia; Norfolk Naval Base, Virginia; Commander in Chief, Atlantic Fleet, Virginia; Commander in Chief, Pacific Fleet, Hawaii; San Diego Naval Station, California; Edwards Air Force Base, California; Air Combat Command and Langley Air Force Base, Virginia; Tinker Air Force Base, Oklahoma; and Marine Corps bases at Camp LeJeune, North Carolina; Quantico, Virginia; and Camp Pendleton, California. We obtained additional information from the Air Force Materiel Command at Wright-Patterson Air Force Base, Dayton, Ohio; Kelly Air Force Base, Texas; and headquarters offices of the Air Force Reserve and the Air National Guard. We conducted our review between October 1995 and February 1996 in accordance with generally accepted government auditing standards. We are sending copies of this report to appropriate House and Senate committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Director, Defense Logistics Agency. Please contact me on (202) 512-8412 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix II.

Table I.1 summarizes the services' estimated funding by project type during fiscal years 1994-96. Notes to table I.1:
Includes upgrades to and construction of wastewater and industrial wastewater facilities and sanitary and storm sewer systems.
Includes de-icing facilities and upgrades to aircraft fuel and vehicle maintenance facilities.
Excludes $3.5 million funded through the Defense Business Operating Fund.
Includes upgrades to heating plants and corrosion control and blast/paint facilities.
Includes the construction or upgrade of such projects as engine test facilities, above-ground fuel storage tanks, tank trail erosion, fuel containment dikes, consolidated fuel facilities, potable water facilities and pipelines, and other projects under $2 million each. Table I.2 summarizes projects for fiscal year 1997. Wastewater collection and treatment is estimated to be the most costly effort during this period. Includes upgrades to and construction of wastewater and industrial wastewater facilities and sanitary and storm sewer systems. Includes upgrades to heating plants and corrosion control and blast/paint facilities. Includes de-icing facilities and upgrades to aircraft fuel and vehicle maintenance facilities. Edwin J. Soniat, Raul Cajulis
Pursuant to a legislative requirement, GAO reviewed the Department of Defense's (DOD) prioritization of environmental compliance construction projects, focusing on: (1) the DOD construction program process; and (2) DOD cost estimates for future projects. GAO found that: (1) since GAO's November 1993 report, the services have initiated actions intended to improve their processes for programming and prioritizing environmental compliance construction projects; (2) however, neither the current nor proposed DOD policy specifies how the services should report costs related to environmental compliance construction projects and how they should determine which appropriation account should provide the funds; (3) consequently, the services and DLA continue to vary the manner in which they classify and prioritize the projects and determine the source of funds for them; (4) the continuing lack of such guidance and the inconsistencies inhibit congressional oversight and DOD's program management; (5) DOD-wide estimates for fiscal year 1997 environmental compliance construction requirements fell from $257 million in February 1995 to $84 million in April 1996; (6) due to the lack of a uniform approach to categorizing such projects, GAO cannot determine whether this drop in funding is a result of a reduction in the need for such projects or simply a reflection of differing procedures for categorization; (7) the reasons for reductions fell into several different categories, for example, lack of documentation, decisions to fund in later years, or decreased project costs.
With a cost of about $12.3 billion, the 2010 Census was the most expensive population count in U.S. history, costing about 31 percent more than the $9.4 billion 2000 Census (in constant 2020 dollars). Some cost growth is to be expected because the population is growing and becoming more complex and difficult to count, which increases the Bureau’s workload. However, the cost of counting each housing unit has escalated from about $16 in 1970 to $92 in 2010 (in constant 2020 dollars), according to the Bureau. For the 2020 Census, the Bureau intends to limit its per-household cost to no more than that of the 2010 Census, adjusted for inflation. To achieve this goal, the Bureau is significantly changing how it conducts the census, in part by re-engineering key census-taking methods and infrastructure. The Bureau’s innovations include (1) using the Internet as a self-response option; (2) verifying most addresses using “in-office” procedures rather than costly field canvassing; (3) re-engineering data collection methods; and (4) in certain instances, replacing enumerator-collected data with administrative records (information already provided to federal and state governments as they administer other programs). The Bureau’s various initiatives have the potential to reduce costs. In October 2015, the Bureau estimated that with its new approach it can conduct the 2020 Census for a life-cycle cost of $12.5 billion, $5.2 billion less than if it were to repeat the design and methods of the 2010 Census (both in constant 2020 dollars). However, in June 2016, we reported that this $12.5 billion cost estimate was not reliable and did not adequately account for risk. Table 1 below shows the cost savings the Bureau hopes to achieve in the following four innovation areas. The 2016 test was the latest major test of nonresponse follow-up (NRFU) in the Bureau’s testing program. In 2014, the Bureau tested new methods for conducting NRFU in the Maryland and Washington, D.C., area.
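A quick calculation, using only the constant-2020-dollar figures cited above, confirms the percentages the statement reports:

```python
# Sanity check of the cost figures cited above (all in constant 2020 dollars).

cost_2000 = 9.4    # 2000 Census life-cycle cost, $ billions
cost_2010 = 12.3   # 2010 Census life-cycle cost, $ billions

# (12.3 - 9.4) / 9.4 ~= 0.31, i.e., about 31 percent growth.
growth = (cost_2010 - cost_2000) / cost_2000
print(f"2000-to-2010 cost growth: {growth:.0%}")  # 31%

# Per-housing-unit cost: about $16 in 1970 vs. $92 in 2010.
print(f"Per-unit cost multiple: {92 / 16:.2f}x")  # 5.75x

# The re-engineered design: $12.5 billion, which is $5.2 billion less
# than repeating the 2010 design, implying the repeat would cost $17.7 billion.
print(f"Repeat-2010 design: ${12.5 + 5.2:.1f} billion")
```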
In 2015, the Bureau assessed NRFU operations in Maricopa County, Arizona. In 2018, the Bureau plans to conduct a final “End-to-End” test, which is essentially a dress rehearsal for the actual decennial census. The Bureau needs to finalize the census design by the end of fiscal year 2017 so that key activities can be included in the End-to-End Test. The Bureau plans to conduct additional research through 2018 in order to further refine the design of the 2020 Census, but recently had to alter its approach. On October 18, 2016, the Bureau decided to stop two field test operations planned for fiscal year 2017 in order to mitigate risks from funding uncertainty. Specifically, the Bureau said it would stop all planned field activity, including local outreach and hiring, at its test sites in Puerto Rico, North and South Dakota, and Washington State. The Bureau will not carry out planned field tests of its mail-out strategy and nonresponse follow-up in Puerto Rico, or its planned door-to-door enumeration. The Bureau also cancelled plans to update its address list in the Indian lands and surrounding areas in the three states. However, the Bureau will continue with other planned testing in fiscal year 2017, such as tests focusing on systems readiness and Internet response. Further, the Bureau said it would consider incorporating elements of the cancelled field activities within the 2018 End-to-End Test. The Bureau maintains that stopping the 2017 Field Test will help prioritize readiness for the 2018 End-to-End Test and mitigate risk. Nevertheless, it also represents a lost opportunity to test, refine, and integrate operations and systems, and it puts more pressure on the 2018 test to demonstrate that enumeration activities will function as needed for 2020. NRFU generally proceeded according to the Bureau’s operational plans.
However, our observations and the Bureau’s preliminary data at both test sites found that (1) there were a large number of non-interviews, and (2) enumerators had difficulty implementing new census-taking procedures. The Bureau’s 2016 Census Test included a new field management structure that, among other things, included an enhanced operations control system supporting daily assignments of cases. A cornerstone of the Bureau’s efforts to reduce the cost of NRFU is the automation of decision-making on how to manage the follow-up caseload. Unlike in previous censuses and one prior test, enumerators in the 2016 Census Test did not have an assigned set of cases that they alone would work until completion. Instead, the Bureau relied on an enhanced operational control system that was designed to provide daily assignments and street routing of non-response follow-up cases to enumerators in the most efficient way. The Bureau first tested this system in the 2015 Census Test. The test also included streamlined procedures for making contact at large apartment buildings, intended to reduce repeated attempts to contact property managers. A key objective of the 2016 Census Test was to refine procedures for collecting NRFU data from households using mobile devices leased from a contractor. In prior decennials, enumerators collected NRFU information using paper and pencil. The Bureau believes that replacing paper-based operations with automated case management and mobile devices for collecting interview data will provide a faster, more accurate, and more secure means of data collection in the 2020 Census (see figure 1). Test activities that we observed at both test sites included streamlined multi-unit contact procedures and interviews with proxy respondents. A proxy is someone who is a non-household member, at least 15 years old, and knowledgeable about the NRFU address.
At multi-unit structures such as apartment buildings, the enumerator is trained to first interview the property manager to find out which units were occupied and which were vacant on Census Day. Such interviews help to streamline NRFU by removing vacant units from an enumerator’s workload. They also help build a rapport with property managers by ensuring they know when enumerators are working in their building, and can help enumerators gain access to locked buildings. Preliminary data at both test sites indicate that the Bureau experienced a large number of non-interviews. According to the Bureau, non-interviews are cases where either no data or insufficient data were collected, in part because the cases reached the maximum number of six attempted visits without success or were not completed due to, for example, language barriers or dangerous situations. While the test’s non-interview rate is not necessarily a precursor to the 2020 rate, because of its relationship to the cost and quality of the count it will be important for the Bureau to better understand the factors contributing to it. According to preliminary 2016 Census Test data, there were 19,721 NRFU cases coded as non-interviews in Harris County, Texas, and 14,026 in Los Angeles County, California, or about 30 and 20 percent of the test workload, respectively. In such cases, the Bureau may have to impute attributes of the household based on the demographic characteristics of surrounding housing units as well as administrative records. Bureau officials expect higher numbers of non-interviews during tests in part because, compared to the actual enumeration, the Bureau conducts less outreach and promotion. Bureau officials hypothesized that another contributing factor could be the NRFU methods used in the 2016 test compared with earlier decennials. For the 2010 and earlier decennials, enumerators collected information during NRFU using pencil and paper.
Enumerators may have visited a housing unit more than the maximum allowable six visits to obtain an interview but did not record all of their attempts, thus enabling them to achieve a higher completion rate. For the 2020 Census, and as tested in 2016, the Bureau plans to collect data using mobile devices leased from a contractor and an automated case management system to manage each household visit. The Bureau believes that this approach will provide a faster, more accurate, and more secure means of data collection. At the same time, the mobile device and automated case management system did not allow an enumerator to attempt to visit a housing unit more than once per day, reopen a closed case, or exceed the maximum allowable six attempts. One factor we observed that may have contributed to the non-interview rate was that enumerators did not seem to uniformly understand or follow procedures for completing interviews with proxy respondents. According to the 2016 Census Test enumerator training manual, when an eligible respondent at the address cannot be located, the automated case management system on the mobile device will prompt the enumerator when to find a proxy to interview, such as when no one is home or the housing unit appears vacant. In such circumstances, enumerators are to find a neighbor or landlord to interview. However, in the course of our site visits, we observed that enumerators did not always follow these procedures. For example, one enumerator, when prompted to find a proxy, looked to the left and then right and, finding no one, closed the case. Similarly, another enumerator ignored the prompt to find a proxy, explaining that neighbors are usually not responsive or willing to provide information about their neighbors, and did not seek out a proxy.
Enumerators we interviewed did not seem to understand the importance of obtaining a successful proxy interview, and many appeared to have received little encouragement during training to put in the effort to find a proxy. Proxy data for occupied households are important to the success of the census, as the alternative is a non-interview. In 2010, about one-fourth of the NRFU interviews for occupied housing units were conducted using proxy data. We shared our observations with Bureau officials, who told us that they are aware that enumerator training for proxies needs to be revised to convey the importance of collecting proxy data when necessary. Converting non-interviews by collecting respondent or proxy data can improve interview completion rates and, ultimately, the quality of census data. The Bureau told us it will continue to refine procedures for 2020. According to the Bureau, its plans to automate the assignment of NRFU cases have the potential to deliver significant efficiency gains. At the same time, refinements to certain enumeration procedures and better communication could produce additional efficiencies by enabling the Bureau to be more responsive to situations enumerators encounter in the course of their follow-up work. Enumerators were unable to access recently closed incomplete cases. Under current procedures, if an enumerator is unable to make contact with a household member, the case management system closes that case, and it is to be reattempted at a later date, perhaps by a different enumerator, assuming the maximum of six attempts has not been exceeded. Decisions on when reattempts will be made—and by whom—are automated and not designed to be responsive to the immediate circumstances on the ground. This is in contrast to earlier decennials, when enumerators, using paper-based data collection procedures, had discretion and control over when to re-attempt cases in the area where they were working.
According to the Bureau, leaving cases open for re-attempts can undermine the efficiency gains of automation when enumerators depart significantly from their optimized route, circling back needlessly to previously attempted cases rather than progressing through their scheduled workload. During our test site observations, however, we saw preliminary indications that this approach could lead to inefficiencies in certain circumstances. For example, we observed enumerators start their NRFU visits in the early afternoon as scheduled, when many people are out working or are otherwise away. If no one answered the door, those cases were closed for the day and reassigned later. However, if a household member returned while the enumerator was still around, the enumerator could not reopen the case and attempt an interview. We saw this at both test site locations, typically in apartment buildings or at apartment-style gated communities, where enumerators had clear visibility of a large number of housing units and could easily see people arriving home. Bureau officials acknowledged that closing cases in this fashion represented a missed opportunity and plan to test greater flexibility as part of the 2018 End-to-End Test. Programming some flexibility into the mobile device—if accompanied with adequate training on how and when to use it—should permit completion of some interviews without having to deploy staff to the same case on subsequent days. This in turn could reduce the cost of follow-up attempts and improve interview completion rates. Enumerators did not understand procedures for visits to property managers. Property managers are a key source of information on non-respondents when enumerators cannot find people at home. They can also facilitate access to locked buildings.
Further, developing a rapport with property managers has helped the NRFU process, such as when repeated access to a secured building or residential complex is needed on subsequent days by different enumerators. In response to problems observed during the Bureau’s 2014 and 2015 Census tests and complaints from property managers about multiple uncoordinated visits by enumerators, the Bureau’s 2016 Census Test introduced specific procedures for conducting initial visits to property managers in large multi-unit apartment buildings. The procedures sought to identify up front which, if any, units needing follow-up at the location were vacant, eliminating the need for enumerators to collect this information from property managers with subsequent visits on a case-by-case basis. According to Bureau officials, the automated case management system was designed to allow an enumerator to make up to three visits to property managers to remove vacant units. According to the Bureau, the 2016 Census Test demonstrated that vacant units could quickly be removed from the NRFU workload using these procedures in cases where a property manager was readily available; in other cases, however, the procedures caused confusion. For example, whenever an initial visit was unsuccessful, all of the cases at that location—up until then collated into only one summary row of the enumerator’s on-screen case list—would suddenly expand and appear as individual cases to be worked, sometimes adding several screens and dozens of cases to the length of the list, which enumerators we spoke with found confusing. Furthermore, without knowing which units were vacant, enumerators may have made unnecessary visits to those units, increasing the cost and the time required to complete NRFU. During debriefing sessions the Bureau held, Bureau enumerators and their supervisors identified training in these procedures as an area they felt needed greater attention in the future.
Indeed, while training classes included a case study exercise on interviewing a property manager, the exercise in the enumerator training manual did not alert enumerators to these procedures or refer to them. Bureau officials said that they are pleased with the progress the test demonstrates they have made in automating case management at multi-unit locations. They added that they recognize the need to better integrate the procedures into training moving forward. Timing of return visits did not leverage information on respondent availability. During our field visits, we encountered several instances where enumerators had been told by a respondent or otherwise learned that returning at a specific time on a later date would improve their chance of obtaining an interview from either a household respondent or a property manager. But the Bureau’s 2016 Census Test and automated case management did not have an efficient way to leverage that information. Attempting contact at non-responding households at times respondents are expected to be available can increase the completion rate and reduce the need to return at a later date or rely on proxy interviews as a source of information. The Bureau’s automated case management system assigned cases to 6-hour time windows after estimating hour-by-hour probabilities of when best to contact people. The estimation relied on various administrative records, information from other Bureau surveys that had successful contacts in the past, as well as area characteristics. The 2016 Census Test did not have a way to change or update these estimates when cases were subsequently reassigned. The assigned time windows were intended to result in more productive visits and reduced costs. When enumerators identified potentially better times to attempt a contact, they were instructed to key this information into their mobile devices.
For example, one enumerator keyed in a mother’s request to come back on Thursday afternoon when her kids were in camp, while others keyed in information such as property managers’ office hours and telephone contact numbers obtained from signs they had seen on the property. However, according to the Bureau, this updated information went unused, and we met enumerators who had been assigned to enumerate addresses at the same unproductive time after they had written notes documenting other, better times to visit. Another enumerator reported visiting a property manager who complained that the enumerator was not honoring the manager’s earlier request, made during a prior enumeration attempt, that an enumerator return during a specified time window. Such repeat visits can waste enumerator time (and miles driven) and contribute to respondent burden or reduced data quality when respondents become annoyed and less cooperative. We discussed our preliminary observation with managers at the test sites, who expressed frustration that the automated case management system did not allow them to record the locally obtained contact-time information they found in enumerator notes in a way that would affect future case assignments. Headquarters staff told us that while they have not fully evaluated this yet, they are concerned that providing local managers with too much flexibility to override the results of optimized case and time assignments would undermine the efficiency gains achievable through automation. They also explained that enumerators were to have been provided the capability to record the best day or time of day for follow-up. This information could have been used by the automated case management system to better target the timing of future assignments. However, they acknowledged that this procedure may not have been fully implemented or explained during enumerator training. Bureau officials have said that this is another area they are planning to address.
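As a rough illustration of the time-window assignment just described, the sketch below picks the 6-hour window with the highest estimated contact probability and lets an enumerator-recorded preference override it. The data, window boundaries, and function names are hypothetical, not the Bureau's actual system:

```python
# Hypothetical sketch of 6-hour time-window assignment for a NRFU case.
# The probabilities and the preference-override rule are invented for
# illustration; the Bureau's estimation is far more involved.

WINDOWS = {
    "00:00-06:00": range(0, 6),
    "06:00-12:00": range(6, 12),
    "12:00-18:00": range(12, 18),
    "18:00-24:00": range(18, 24),
}

def assign_window(hourly_prob, preferred_hour=None):
    """Pick the window with the highest summed contact probability,
    unless an enumerator recorded a respondent-preferred hour."""
    if preferred_hour is not None:
        for name, hours in WINDOWS.items():
            if preferred_hour in hours:
                return name
    return max(WINDOWS, key=lambda w: sum(hourly_prob[h] for h in WINDOWS[w]))

# Invented example: a household most reachable in the evening...
probs = [0.01] * 24
for h in range(18, 22):
    probs[h] = 0.30
print(assign_window(probs))                     # 18:00-24:00
# ...until a note says the respondent asked for an afternoon visit.
print(assign_window(probs, preferred_hour=14))  # 12:00-18:00
```

The second call models the gap GAO observed: without the override path, the note about a better visiting time has no effect on future assignments.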
The Bureau has re-engineered its approach to building its master address list for 2020. Specifically, by relying on multiple sources of imagery and administrative data, the Bureau anticipates constructing its address list with far less door-to-door field canvassing compared to previous censuses. One major change the Bureau is making consists of using in-office address canvassing, a two-phase process that systematically reviews small geographic areas nationwide, known as census blocks, to identify those that will not need to be canvassed in the field, as shown in figure 2. The Bureau estimates that the two phases of in-office canvassing will result in roughly 25 percent of housing units requiring in-field canvassing, instead of canvassing nearly all housing units in the field as done previously. With in-office address canvassing, clerks compare current aerial imagery for a given block with imagery for that block dating to the time of the last decennial census in 2010. During this first phase, called Interactive Review, specially trained clerks identify whether a block appears to have experienced change in the number of housing units, flagging each block either as stable—free of population growth, decline, or uncertainty in what is happening in the imagery over time—or “active,” in which case it moves to the next phase. Addresses in stable blocks are not marked for in-field canvassing. For blocks where change is detected or suspected, the Bureau uses a second phase of in-office canvassing, known as Active Block Resolution, to attempt to resolve the status of each address and housing unit in question within that block. During this phase, clerks use aerial imagery, street imagery, and data from the U.S. Postal Service, as well as from state, local, and tribal partners, when reviewing blocks. If a block can be fully resolved during this phase of in-office canvassing, the changes are recorded in the Bureau’s master address file.
If a block cannot be fully resolved during the second phase of in-office canvassing, then the entire block, or some portion of the block, is flagged for inclusion in the in-field canvassing operation. In-office address canvassing began in September 2015, with plans for a first pass of the entire country to be completed by the end of fiscal year 2018. In-field canvassing for the 2020 Census is scheduled to begin in August 2019. Another major change the Bureau is making for its re-engineered address canvassing is significantly expanding the role that state, local, and tribal partners can play throughout the decade in contributing to an accurate, more up-to-date address list. Through the Geographic Support System Initiative, begun in fiscal year 2011, partner jurisdictions have been providing address and spatial data to the Bureau to help validate and supplement the Bureau’s address list. As of October 2016, the Bureau reported that it had received partner data covering 73 percent of all known housing units nationwide. It added that the vast majority of the addresses in the files that the Bureau had processed as of July 2015 had either been matched with existing addresses in its database or added to the address list. As directed by Congress, and as with previous decennial censuses, the Bureau will also engage with state, local, and tribal partners through its Local Update of Census Addresses program in fiscal years 2018 and 2019 to ensure that jurisdictions have the ability to comment on the address list prior to enumeration. The Bureau plans to rely on the in-office part of address canvassing to validate, where available data permit, a large share of the addresses added to the list through that program. The Bureau is testing its re-engineered address canvassing operation at two sites through December 2016—in Buncombe County, North Carolina, and St. Louis, Missouri.
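The two-phase triage described above reduces to a simple decision rule. This sketch is illustrative only; the inputs and function name are ours, and the Bureau's actual criteria are far more detailed than two boolean flags:

```python
# Illustrative reduction of the two-phase in-office canvassing triage.
# Inputs are hypothetical simplifications of the phases described in the text.

def triage_block(changed_in_imagery: bool, resolved_in_office: bool) -> str:
    """Route a census block based on the two in-office phases.

    Phase 1, Interactive Review: blocks with no apparent housing-unit
    change are 'stable' and skip in-field canvassing.
    Phase 2, Active Block Resolution: 'active' blocks are worked with
    imagery and partner data; anything unresolved goes to the field.
    """
    if not changed_in_imagery:
        return "stable: no in-field canvassing"
    if resolved_in_office:
        return "resolved: record changes in master address file"
    return "unresolved: flag block for in-field canvassing"

print(triage_block(False, False))  # stable block
print(triage_block(True, True))    # changed, but resolved in office
print(triage_block(True, False))   # changed and unresolved
```

Only the third branch generates field work, which is how the Bureau expects to hold in-field canvassing to roughly 25 percent of housing units.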
In-office canvassing for the test sites began at the Bureau’s National Processing Center in Jeffersonville, Indiana, in August 2016. The exercise will test the Bureau’s assumptions about the cost and effectiveness of the re-engineered approach, as well as the quality of in-office canvassing, field staff training, and the use of new collection geography in the field. In addition to the 100 percent in-office canvassing the Bureau plans for 2020, the Bureau will also canvass 100 percent of the test areas in the field so that it can compare results it obtains for blocks where it would not otherwise have gone door to door. The Bureau hired 262 in-field listers across both sites to conduct the door-to-door canvassing, which began in October, with a relisting operation commencing in November. Although the innovations the Bureau is planning with its re-engineered address canvassing have the potential to reduce costs, they entail some risks that could affect the cost or quality of the address canvassing operation. According to the Bureau, these risks include: Locating Hidden Housing Units. The Bureau recognizes that certain kinds of dwellings are hard to identify and may not have been marked as housing units at the time of address list development or included in any databases. This could lead to their being missed and their occupants not being counted in the census. These units are referred to as hidden housing units and include such living arrangements as attics, basements, or garages converted into housing units. According to the Bureau, while in-field canvassing carries similar risks of missing these types of housing units, relying solely on imagery to identify them could lead to an incomplete address list. Monitoring Change in the Housing Stock.
When the Bureau determines during the first phase of in-office canvassing that a block has not experienced population change, the Bureau plans to subject the block to later monitoring so that, if change is later detected, the block can be reassigned for further review. The Bureau has developed the conditions, or “triggers,” for subjecting blocks to later monitoring but has not yet determined how it will operationalize them. According to the Bureau, if the triggers it is developing for this process do not adequately detect recent change, then housing unit growth may be missed, and the resulting address list may not be up to date. Obtaining Quality Data. For the Bureau to adequately review enough blocks in office, and therefore reduce the field costs of door-to-door canvassing, the Bureau needs data of sufficient quality to make reliable determinations about changes in housing units within those blocks. According to the Bureau, if it does not obtain sufficient satellite imagery (covering areas with both current and prior census imagery) or address and spatial data from state, local, and tribal partners, then it may be forced to send more blocks than planned to in-field canvassing. We have ongoing audit work examining the Bureau’s re-engineered address canvassing approach. The justification of key cost and data quality assumptions, the approaches to mitigating key risks, and the Bureau’s adherence to timelines and canvassing schedules are all subjects of our ongoing work, which we plan to report on early next year. The Bureau goes to great lengths each decade to improve specific census-taking activities. But these incremental modifications have not kept pace with societal changes that make the population increasingly difficult to locate and cost-effectively count. This increasing difficulty and these escalating costs led the Bureau to re-engineer its approach for the 2020 Census.
While preparations for 2020 are still underway, and with testing still occurring, the Bureau’s experience in planning for 2010 can enhance its readiness for 2020. For example, as the Bureau continues its planning efforts for 2020, our prior work indicates that it will be essential for it to address the following three lessons learned: (1) ensure key census-taking activities are fully tested; (2) develop and manage on the basis of reliable cost estimates; and (3) sustain workforce planning. Ensure key census-taking activities are fully tested. The census is a large, complex operation composed of thousands of moving parts, all of which must function in concert with one another to secure a cost-effective count. While the census is under way, the tolerance for any breakdowns is quite small. Given this difficult operating environment, rigorous testing is a critical risk mitigation strategy because it provides information on the feasibility and performance of individual census-taking activities, their potential for achieving desired results, and the extent to which they are able to function together under full operational conditions. Given the four new innovation areas for the 2020 Census, it will be imperative that the Bureau have systems and operations in place for the 2018 End-to-End Test, which will take place in three locations covering more than 700,000 housing units in total. The 2018 test locations are Pierce County, Washington; Providence County, Rhode Island; and the Bluefield-Beckley-Oak Hill area of West Virginia.
In our prior work on testing done for the 2010 Census, we noted that a sound study design should include such components as clearly stated objectives with accompanying performance measures; research questions linked to test objectives and, as appropriate, a clear rationale for why sites were selected for field tests; a thoroughly documented data collection strategy; input from stakeholders and lessons learned considered in developing test objectives; and a data analysis plan including, as appropriate, methods for determining the extent to which specific activities contribute to controlling costs and enhancing quality. Develop and manage on the basis of reliable cost estimates. Reliable cost estimates that appropriately account for risks facing an agency can help an agency manage large, complex activities like the 2020 Census, as well as help Congress make funding decisions and provide oversight. Cost estimates are also necessary to inform decisions to fund one program over another, to develop annual budget requests, to determine what resources are needed, and to develop baselines for measuring performance. The Bureau has a history of unreliable cost estimation and resultant overruns. For example, we placed the Decennial Census on our High-Risk List in 2008 in part due to weaknesses in the Bureau’s estimation of its 2010 Census life-cycle cost. Recently, we reported in our review of the Bureau’s October 2015 life-cycle cost estimate that in order for the Bureau to improve its ability to control the cost of the 2020 Census, it will be critical for it to have better control over its cost estimation process. While we found that the Bureau has taken significant steps toward improving its capacity to produce reliable cost estimates, those efforts had not yet resulted in a reliable decennial cost estimate.
Among the four broad characteristics of a reliable cost estimate—none of which the Bureau fully met—the Bureau reported it was focusing its attention on improving the documentation of the cost estimate, in order to help improve other characteristics as well. While poor documentation affected our ability to assess the reliability of the other characteristics of the Bureau’s cost estimate, we believe the problems we observed stemmed from an absence of internal control procedures over the cost estimation process, which resulted in poor documentation. Furthermore, we found the Bureau lacked guidance to control the cost estimation process. Investment in planning documents that help control and support cost estimation early in the estimation cycle, such as an operational plan, guidance on key steps and process flows, assignment of responsibilities, and job aids for staff, can help institutionalize practices and ensure that otherwise disparate parties in the process operate consistently. As we reported, taking steps to ensure its cost estimate is reliable would help improve decision-making, budget formulation, progress measurement, course correction when warranted, and accountability for results. We made three recommendations, including that the Bureau take specific steps to ensure its cost estimate meets the characteristics of a high-quality estimate and improve control over how risk and uncertainty are accounted for in cost estimation, with which the Department of Commerce agreed. Bureau officials have stated that they plan to address the recommendations with their update of the 2020 Census Lifecycle Cost Estimate in December 2016. We plan to assess this cost estimate as soon as it is available. Sustain attention to workforce planning. Strategic workforce planning encourages agency managers and stakeholders to systematically consider what is to be done, when and how it will be done, what skills will be needed, and how to gauge progress and results. 
Sustained workforce planning can help the Bureau stay on track for the 2020 Census and help avoid past staffing problems. For example, a Bureau assessment of its experience with the 2010 Census observed that areas such as the management of large programs and projects, cost estimation, and information technology (IT) lacked staff with core skills and experience. Moreover, the Bureau’s experience with the 2010 Census and prior enumerations has shown that not following leading practices in workforce planning can increase the risks of subsequent downstream operations, such as cost estimation. In 2012 we reported that while the Bureau’s workforce planning efforts were generally consistent with such key leading practices as identifying current and future critical occupations, the Bureau had not coordinated workforce planning efforts across its directorates for key occupations. Without a Bureau-wide competency assessment, for instance, the Bureau risked not having the necessary workforce in place to manage the multimillion-dollar IT investments for its 2020 operations. We also found the Bureau needed to address inadequate training of its cost estimating staff so that it could produce credible, comprehensive, and accurate cost estimates. Moreover, the Bureau needed to devote greater attention to setting goals and monitoring progress in closing skills gaps—as well as engaging stakeholders in developing, communicating, and implementing its workforce plan—so that the Bureau could identify and avoid possible barriers to implementing its workforce plan. Since that time, the Bureau has taken actions in response to our recommendations to coordinate and set goals for its workforce planning. For example, in September 2014, the Bureau drafted action plans to address the skills gaps that had been identified as part of a Bureau-wide competency assessment. 
The Bureau has indicated that a 2020 directorate-wide workforce assessment report is in its final review stages and will include a comprehensive succession planning strategy. These actions to incorporate key leading workforce planning practices will help the Bureau meet its objective of having a workforce matched with the demands of the 2020 Census. Going forward, a sustained focus on workforce planning will be necessary to ensure the Bureau will be in a position to hire the optimal mix of managers and technical experts to carry out a cost-effective census. In summary, the key innovations the Bureau plans for 2020 show promise for controlling costs and maintaining accuracy, although there are significant risks involved. The Bureau is aware of these risks, and robust testing can help manage them by assessing the feasibility of key activities, their capacity to deliver desired outcomes, and their ability to work in concert with one another under operational conditions. While the Bureau decided to stop key field testing planned for fiscal year 2017 in order to mitigate a funding risk, this decision may have consequences for the elements of field operations that will not get tested as a result and, ultimately, for the 2020 Census. Going forward, past experience has shown the importance of refining operations as needed based on test results, incorporating lessons learned from 2010 as appropriate, and making needed changes to the design in time to be included in the Bureau’s End-to-End Test scheduled for 2018. Chairman Meadows, Ranking Member Connolly, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have. If you have any questions on matters discussed in this statement, please contact Robert Goldenkoff at (202) 512-2757 or by e-mail at [email protected]. 
Other key contributors to this testimony include Lisa Pearson, Assistant Director; Mark Abraham; Peter Beck; Devin Braun; Jeff DeMarco; Robert Gebhart; Emily Hutz; Richard Hung; Donna Miller; Ty Mitchell; Kayla Robinson; Kathleen Padulchick; Robert Robinson; and Timothy Wexler. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
With a life-cycle cost of about $12.3 billion, the 2010 Census was the most expensive U.S. census in history. To help control costs and maintain accuracy, the 2020 Census design includes new procedures and technology that have not been used extensively in earlier decennials, if at all. While these innovations show promise for a more cost-effective head count, they also introduce risks. As a result, it is important to thoroughly test the operations planned for 2020. This testimony focuses on (1) the preliminary results to date of the Bureau's 2016 Census Test in Los Angeles County, California, and Harris County, Texas; (2) the Bureau's plans for the upcoming test of address canvassing procedures in Buncombe County, North Carolina, and St. Louis, Missouri; and (3) the lessons learned from the 2010 Census that can be applied to preparations for 2020. This testimony is based on GAO's ongoing reviews of the 2016 Census Test and Address Canvassing Test. For these studies, GAO reviewed Bureau documents and preliminary data, interviewed Bureau officials, and made site visits to observe census operations. This testimony is also based on prior GAO work on lessons learned from the 2010 Census. GAO preliminarily found that during the 2016 Census Test, nonresponse follow-up (NRFU), in which enumerators visit households that did not respond to the census, generally proceeded according to the Census Bureau's (Bureau) operational plans. However, data at both test sites indicate that the Bureau experienced a large number of non-interviews. The Bureau considers non-interviews to be cases where either no data or insufficient data are collected. Bureau officials are not certain why there were so many non-interviews for the test and are researching potential causes. 
In addition, the Bureau's plan to automate the assignment of NRFU cases to enumerators has the potential to deliver significant efficiency gains as compared to paper-based operations conducted in previous decennial censuses, according to the Bureau. GAO preliminarily found that improvements to certain enumeration procedures and better training could produce additional efficiencies by enabling the Bureau to be more responsive to situations enumerators encounter on the ground. These improvements include providing more flexible access to recently closed, incomplete cases; enumerator interview training with multi-unit property managers; and operational procedures to make use of local data on the best time to attempt interviews. The Bureau has reengineered its approach to building its master address list for 2020 in part by introducing a two-phase "in-office" process that systematically reviews small geographic areas nationwide. The goal is to limit the more expensive and traditional door-to-door canvassing to those areas most in need of updating, such as areas with recent housing growth. The in-office phases rely on aerial imagery, street imagery, geographic information systems, and address file data from state, local, and tribal partners. The Bureau estimates that the new process will result in about 25 percent of housing units requiring field canvassing, compared to the traditional process in which all housing units were canvassed. The Bureau has identified a series of risks that could affect the cost or quality of the address canvassing operation, including locating hidden housing units such as converted garages, monitoring change in housing stock, and obtaining quality data. The Bureau is testing its reengineered address canvassing operation in two sites through December 2016: Buncombe County, North Carolina, and St. Louis, Missouri. The Bureau's experience in planning for the 2010 decennial can enhance its readiness for 2020. 
Going forward, GAO's prior work indicates it will be important for the Bureau to address several key lessons learned, including: (1) ensuring key census-taking activities are fully tested, (2) developing and managing on the basis of reliable cost estimates, and (3) sustaining workforce planning efforts to ensure it has the optimal mix of skills to cost-effectively conduct the enumeration. GAO has made several recommendations to the Census Bureau in prior reports on cost estimation and workforce planning. The Bureau has implemented the workforce planning recommendations, and agreed with and plans to implement the cost estimation recommendations.
Traditional insurance companies sell insurance to the public and are subject to the licensing requirements and oversight of each state in which they operate. The licensing process allows states to determine if an insurer domiciled in another state but operating in their state meets the nondomiciliary state’s regulatory requirements before granting the insurer permission to operate in their state. According to NAIC’s uniform application process, which has been adopted by all states, an insurance company must show that it meets the nondomiciliary state’s minimum statutory capital and surplus requirements, identify whether it is affiliated with other companies (that is, part of a holding company system), and submit biographical affidavits for all its officers, directors, and key managerial personnel. After licensing an insurer, regulators in nondomiciliary states can conduct financial examinations, issue an administrative cease-and-desist order to stop an insurance company from operating in their state, and withdraw the company’s license to sell insurance in the state. In addition, most nondomiciliary states have “seasoning requirements” that call for an insurance company to successfully have operated in its state of domicile for anywhere from 1 to 5 years before it can qualify for a license. Although RRGs have some regulatory relief due to the lead state regulatory framework established under LRRA, they still are expected to comply with certain other laws administered by nondomiciliary states. For example, RRGs must pay applicable taxes on premiums and other taxes imposed by nondomiciliary (as well as domiciliary) states. 
LRRA also imposes other measures that offer protections or safeguards to RRG members including the requirement that each RRG must submit to the domiciliary state insurance regulator a plan of operation or feasibility study that includes the coverages, deductibles, coverage limits, rates, and rating classification system for each line of insurance the RRG intends to offer. The RRG must (1) provide a copy of the plan or study to the insurance regulator in the nondomiciliary states in which the RRG intends to conduct business before it can write any insurance coverage in that state; (2) provide a copy of the group’s annual financial statement (certified by an independent public accountant) to the insurance commissioner of each state in which it is doing business (the financial statement should include a statement of opinion on loss and loss adjustment expense reserves by a qualified loss reserve specialist or actuary); and (3) submit to an examination by a nondomiciliary state regulator to determine the RRG’s financial condition, if the domiciliary state regulator has not begun or refuses to begin an examination. Nondomiciliary, as well as domiciliary, states also may seek an injunction in a “court of competent jurisdiction” against RRGs that they believe are in hazardous financial condition. RRGs are not the only form of self-insurers. “Captive insurance companies” (captives), also chartered and regulated by states, are established by single companies or groups of companies to self-insure their own risks. States chartering captives offer some regulatory relief to these companies based on the presumption that owners of captive companies have sophisticated knowledge about managing their risks and would protect their own interests. States can charter RRGs under regulations intended for traditional insurers or for captives. 
Non-RRG captives exist largely to cover the risks of their parent, which can be one large company (pure captive) or a group of companies (group captives). Group captives share certain similarities with RRGs because they also comprise several companies, but group captives, unlike RRGs, do not have to insure similar risks. Further, captives may provide property coverage, while RRGs currently may not. Regulatory requirements for captives generally are less restrictive than those for traditional insurers. However, non-RRG captives, like traditional insurance companies, generally cannot conduct insurance transactions in any state except their domiciliary state, unless they become licensed in that other state. State insurance regulators that oversee both traditional insurers and RRGs participate in NAIC’s voluntary accreditation program for the regulation of insurers’ financial solvency. NAIC accreditation is a certification given to a state insurance department once it has demonstrated it has met and continues to meet an assortment of legal, financial, and organizational standards. According to NAIC officials, all 50 state insurance departments and the District of Columbia were accredited as of March 2011. NAIC developed its Financial Regulation Standards and Accreditation Program in 1989 and adopted its formal accreditation program in June 1990. The mission of the program is to establish and maintain standards to promote sound insurance company financial solvency regulation. To execute this mission, NAIC assesses how each state insurance department reviews and monitors the solvency regulation of multistate insurance companies and RRGs to ensure states have (1) adequate solvency laws and regulations to protect consumers, (2) effective financial analysis and examination processes, and (3) appropriate organizational and personnel practices. 
Based on data reported by RRGs to NAIC since 2004, RRGs in aggregate have shown an increase in premiums written and in their share of the broader commercial liability market. In 2005, we reported that RRGs wrote about $1.8 billion of commercial liability coverage, which constituted about 1.17 percent of the overall market in 2003. According to NAIC data, in 2010 RRGs wrote about $2.5 billion in premiums, which was about 3 percent of the total $92 billion of commercial liability insurance coverage written industrywide. An analysis of direct written premiums by dollar amount indicates that between 2004 and 2010, the largest percentage of RRGs (31 to 37 percent) wrote premiums between $1 million and $5 million (see fig. 1). Of the almost $92 billion of commercial liability insurance written industrywide in 2010, about $10.6 billion was written in the medical professional liability line—also known as medical malpractice. In an analysis of the premiums written for the medical professional liability line, RRGs had a higher share of this specific market compared with their share of the overall commercial liability market. RRGs wrote about 13 percent ($1.4 billion of the total $10.6 billion) of medical professional liability insurance in 2010 (see fig. 2). We further discuss growth in the number of RRGs offering health care-related insurance later in this section. Based on several measures of financial strength or profitability, the RRG industry as a whole generally reported year-to-year gains from 2004 to 2010 (see fig. 3). A key factor in determining an insurer’s overall financial strength is capital and surplus—also known as policyholder surplus— which reflects the amount by which an insurer’s assets exceed its liabilities. Regulators require insurers to maintain adequate surplus so that an insurer can remain solvent even in the face of greater losses than predicted or lower earnings than projected. 
One of the indicators used to measure the adequacy of policyholder surplus is the ratio of an insurer’s premiums written to its policyholder surplus, which measures an insurer’s ability to pay claims given the volume of premiums written. A lower ratio of premiums written to surplus means an insurer has more net assets available relative to the amount of premiums written. According to the NAIC’s Financial Analysis Handbook–Property/Casualty Edition and other general benchmarking guidelines from NAIC officials, the net written premium-to-surplus ratios of property/casualty insurers in general would receive regulatory scrutiny for excessive leverage when they exceed 250 to 300 percent, depending on the particular line of insurance. If an insurer’s ratio exceeds this range, a state regulator may conduct additional analyses of the insurer’s financial solvency. According to NAIC officials, there is not an established benchmark for an acceptable premium-to-surplus ratio for the RRG industry. An analysis of NAIC data shows that, on average, the industry’s ratio of net written premium to policyholder surplus declined from 2004 to 2010, indicating that the financial strength of the industry during this time period likely either improved or remained stable (see fig. 3). Another indicator of financial strength is return on policyholder surplus, or return on equity (ROE). ROE is generally calculated as the ratio of net income to equity, or in the case of insurers, policyholder surplus. From 2004 to 2010, the average ROE in the RRG industry fluctuated, with a high of 13.4 percent in 2008 and a low of 5.1 percent in 2010 (see fig. 4). While no clear trend was visible over the 7-year period we analyzed, the average ROE for each year generally indicated profitability for the RRG industry. The combined ratio is another measure of an insurer’s financial strength and profitability. 
This ratio shows the claims and related expenses incurred by an insurer as a percentage of the premiums earned. According to NAIC officials, a combined ratio of less than 100 indicates an underwriting profit (gain)—that is, premiums collected were higher than the claims paid and related expenses—while a combined ratio above 100 can be an indicator of an unprofitable insurer that could be in a hazardous financial condition. An analysis of NAIC data shows that the average combined ratio for RRGs that filed financial statements ranged from a high of 92.6 percent in 2005 to a low of 88 percent in 2008 (see fig. 5). The average combined ratio in 2010 was 90.2 percent. Also based on NAIC data, the percentage of RRGs with a combined ratio above 100 fluctuated from 2006 to 2010 (see fig. 6). For example, 36 percent of the RRGs writing premiums in 2006 had a combined ratio above 100. These percentages increased from 2007 to 2009, with a high of 43 percent in 2009, and decreased to about 37 percent in 2010. Together, these data indicate that while most RRGs appear to have been profitable in any one year, a sizeable but relatively stable percentage in each year could have experienced some financial challenges. Although the reported financial condition of RRGs appeared favorable in most years since 2004, according to NAIC officials, the recent financial crisis also affected the RRG industry. Capital sources for RRGs became more constrained as banks became more stressed and tightened their lending practices, prompting concern by state regulators about the financial condition of some RRGs. Industry participants with whom we spoke said that some RRGs may have found the experience especially challenging, particularly in instances in which the RRGs were in part capitalized by letters of credit from financial institutions adversely affected by the recent financial crisis. 
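The three indicators discussed above (premium-to-surplus ratio, return on equity, and combined ratio) reduce to simple arithmetic on an insurer's financial statement items. As an illustrative sketch only, using the simplified definitions given in this report and hypothetical figures rather than actual NAIC filing data:

```python
# Illustrative calculations of the three solvency/profitability indicators
# discussed above. All input figures below are hypothetical, and the
# formulas follow the simplified definitions used in this report.

def premium_to_surplus(net_written_premium, policyholder_surplus):
    """Net written premium as a percentage of policyholder surplus.
    Per the NAIC guidance cited above, ratios above roughly 250 to 300
    percent may draw regulatory scrutiny for excessive leverage."""
    return 100 * net_written_premium / policyholder_surplus

def return_on_equity(net_income, policyholder_surplus):
    """ROE: net income as a percentage of policyholder surplus."""
    return 100 * net_income / policyholder_surplus

def combined_ratio(losses_and_expenses, earned_premium):
    """Claims and related expenses as a percentage of premiums earned.
    A value below 100 indicates an underwriting profit."""
    return 100 * losses_and_expenses / earned_premium

# Hypothetical RRG with $4 million in policyholder surplus:
surplus = 4_000_000
print(premium_to_surplus(6_000_000, surplus))  # 150.0, well under the scrutiny range
print(return_on_equity(400_000, surplus))      # 10.0 percent ROE
print(combined_ratio(5_400_000, 6_000_000))    # 90.0, an underwriting profit
```

Note that this hypothetical insurer would look healthy on all three measures; in practice regulators weigh these ratios together with line-of-business mix and other financial analyses rather than in isolation.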
An NAIC official said that similar to the rest of the insurance industry, RRGs have earned less income on their investments. In addition, one insurance regulator said that some RRGs had invested in the real estate market, and the resulting devaluation of these assets affected their balance sheets, particularly those of smaller RRGs. In 2004 and 2010, most RRGs were concentrated on health care-related lines of business. According to data from the Risk Retention Reporter, in both years the top four business lines for RRGs in terms of gross premiums were (1) health care; (2) professional services; (3) government and institutions; and (4) property development (see fig. 7). The majority of RRGs licensed in 2004 and 2010 offered health care-related insurance (see fig. 8). According to our analysis of data from the Risk Retention Reporter, 148 of the 153 health care-related RRGs (97 percent) wrote medical malpractice coverage in 2010. The medical malpractice industry generally has been characterized as volatile because of the risks associated with providing this line of insurance. Health care providers sought alternative sources of insurance after some of the largest medical malpractice insurance providers exited the market because of declining profits, partly caused by market instability and high and unpredictable losses—factors that contribute to the high risk of providing medical malpractice insurance. According to an RRG industry representative, although the overall liability insurance market currently is soft—which may be described as a period during which premiums are low, capital and competition are high, and demand for RRGs is lower—the RRG industry has continued to grow, especially in the area of medical malpractice coverage. 
Nine of the 13 state insurance regulators we interviewed affirmed that the majority of RRGs domiciled or operating in their states provide insurance for various health care-related lines, such as medical malpractice and liability insurance for nursing homes. Although RRGs conducted business nationwide, more than 80 percent of those active in 2010 were domiciled in five states and the District of Columbia, similar to what we reported in 2005. Based on an analysis of data from NAIC, the states with the most domiciled RRGs as of 2010 were Vermont, South Carolina, the District of Columbia, Nevada, Arizona, and Hawaii (see fig. 9). Montana, which was not one of the leading domiciliary states when we reported in 2005, accounted for about 16 percent of the increase in domiciled RRGs in 2010. As of 2010, 24 states had domiciled RRGs. RRGs may decide to domicile in a particular state for one or more reasons. First, RRGs are more likely to domicile in a state that permits their formation as a captive, which may not be one of the states in which the RRGs write the majority of their business. Some states allow RRGs to be chartered as captives because they only provide coverage to their owners and do not sell insurance to the public. Further, regulatory requirements for captive insurers generally are less restrictive than those for traditional insurers. According to the Risk Retention Reporter, about 20 states charter and regulate RRGs under captive legislation. Second, according to NAIC officials with whom we spoke, states that allow RRGs to operate under captive laws often have less stringent financial requirements. NAIC officials also said that RRGs tend to gravitate to states that have lower capitalization requirements and in which the regulators are looking to promote the RRG industry as a source of revenue for the state. 
Finally, according to 9 of 13 state insurance regulators we interviewed, in addition to lower minimum capital and surplus requirements, RRGs may choose to domicile in certain states because of the state’s expertise with regulating RRGs and knowledge of the industry. Evidence from our interviews and survey of state insurance regulators also suggests that lower capitalization requirements were a factor in RRGs choosing to domicile in those states. For example, in our interviews with insurance regulators representing 8 of the top 10 domiciliary states, 4 regulators reported that the minimum amount of capital required to domicile in their state was $500,000, 3 regulators reported a minimum requirement of $1 million, and 1 regulator reported $400,000. However, six of the regulators also reported that additional capital could be required. Our interviews and state regulator survey also indicated that two domiciliary states reduced their minimum capital and surplus requirement since our 2005 report. For example, one domiciliary state’s minimum capital requirement decreased from $500,000 to $400,000, while another state’s decreased from $700,000 to $500,000. While RRGs tend to domicile in a few states, they operate and write business in all 50 states and the District of Columbia (see fig. 10). Collectively, between 2004 and 2010, the number of operating RRGs increased by about 50 percent. NAIC data also show that more than half of the RRGs in both 2004 and 2010 wrote premiums in two or fewer states, and two-thirds of the RRGs wrote premiums in fewer than 10 states in both years. Of all the direct premiums written by RRGs, about 97 percent and 95 percent were written outside the state of domicile in 2004 and 2010, respectively (see fig. 11). The nondomiciliary states in which RRGs wrote most of their business in 2004 were Pennsylvania ($308 million), New York ($226 million), California ($210 million) and Massachusetts ($114 million). 
In 2010, RRGs again wrote the majority of their business in these states: $369 million in Pennsylvania, $366 million in New York, $230 million in California, and $172 million in Massachusetts. In 2005, we noted that, according to NAIC, 73 of 115 RRGs active in 2003 (63 percent) did not write any business in their state of domicile. According to data from NAIC, 168 of the 249 RRGs active in 2010 (67 percent) did not write any business in their state of domicile. Nondomiciliary state insurance regulators we interviewed expressed concerns about the amount of RRG business in their states and their limited authority to regulate RRGs providing coverage to their state’s insureds. In our 2005 report, some nondomiciliary regulators expressed concerns that domiciliary states were lowering their regulatory standards to attract RRGs for economic development purposes. Similarly, NAIC officials we interviewed said that when RRGs write the majority of their business outside their state of domicile, the domiciliary state regulator does not have “skin in the game” and cannot protect insureds who might be affected if an RRG became insolvent. According to an NAIC official, these states may allow actions that RRGs find favorable, but that are not in the best interest of the insureds. Based on our interviews and survey of state insurance regulators, RRG industry participants had different views about the effects RRGs have had on the availability and, to a lesser extent, the affordability of commercial liability insurance. RRG representatives with whom we spoke generally believed that RRGs have increased the availability of such insurance. According to industry participants, RRGs have been providing coverage in niche markets in which consumers otherwise might not be able to obtain insurance (that is, from traditional insurers). 
However, one insurance regulator with whom we spoke said that commercial liability insurance has been readily available through traditional insurers, and therefore questioned the need for mechanisms such as RRGs to obtain this type of insurance. Our survey of state insurance regulators further suggests that regulators generally had different views than RRG representatives about the impact of RRGs on availability. In our survey, 17 out of the 49 state insurance regulators who responded (35 percent) said that RRGs have expanded the availability of commercial liability insurance for groups that would otherwise have difficulty obtaining coverage. Conversely, 8 of the regulators (16 percent) responded that RRGs have not expanded availability, while 24 regulators (49 percent) did not have an opinion. Industry participants were unsure of the impact of RRGs on the affordability of commercial liability insurance. Some industry participants with whom we spoke said that RRGs would not continue to exist if their rates were not affordable. Other industry participants said that it was difficult for them to assess the impact of RRGs on affordability, but acknowledged that RRGs played a role in the insurance market. NAIC officials with whom we spoke said that the affordability of rates offered by RRGs has not been determined, as RRGs are not required to file their premium rates with nondomiciliary state regulators. Therefore, an analysis has not been conducted to compare RRG rates to those of traditional insurers. In addition, an actuarial expert with whom we spoke said that the rates and language included in each policy written by traditional commercial insurers and by RRGs would need to be obtained to make a true comparison, because this information differs among insurers and among RRGs. In our survey, 13 of 48 respondents (27 percent) said that RRGs have improved affordability of commercial liability insurance for groups that would otherwise have difficulty obtaining coverage. 
Nine regulators (19 percent) responded that RRGs have not improved affordability while 27 regulators (54 percent) did not have an opinion. Apart from the submission of required documentation, LRRA does not provide for a specific process for RRGs to register to conduct business in nondomiciliary states. States and RRGs have disagreed on issues relating to registration such as the level of documentation required and review and approval processes. Interpretations about what documentation can be required vary by state. Based on our analysis of interview and survey responses, some RRG industry representatives and state insurance regulators interpreted LRRA’s failure to mention registration as an indication that submission of the specified documents in LRRA is all that can be required by a nondomiciliary state before allowing an RRG to operate in that state. Others interpreted LRRA’s silence on registration in nondomiciliary states to mean that states can impose their own requirements. Responses to our survey of state insurance regulators indicate that states have varying registration requirements and practices, but respondents generally reported that RRGs must submit required documentation as outlined in LRRA. However, regulators also provided information on additional information and documentation their states required to fulfill individual state registration processes. For example, a few states will accept NAIC’s uniform application form, while another state requires a state-specific registration form. An RRG representative with whom we spoke said that using NAIC’s uniform application form instead of state-specific forms would simplify the registration process and make it more beneficial to RRGs. 
An RRG representative said that one state requires a listing of all other states in which the RRG is registering and the status of the registration in each state; copies of any conditions or contingencies placed on the RRG by its domiciliary state; copies of requirements or restrictions placed on RRG members; copies of soliciting and marketing materials, including membership and subscription agreements; and projected premiums for the next 3 years for the state in which the RRG is applying as well as nationwide, among other requirements. According to another RRG representative, one nondomiciliary state requires specific forms for biographical affidavits of officers and directors, including Social Security numbers. In documentation from state insurance regulators that we received from an RRG industry association, as part of the registration process one state required the name, physical address, and mailing address of all agents or brokers for the RRG, and a copy of each examination of the RRG, among other requirements.

Representatives from the RRG industry maintain that state regulatory practices such as registration requirements beyond what is specified in LRRA "encroached" on LRRA's partial preemption of state insurance laws. RRG representatives said that there is a fear among RRGs that repeated objections to states' requests for information will lead to RRGs being targeted by state insurance offices. They also feared that providing information would lead to more onerous requests. However, one state insurance regulator with whom we spoke said that the additional document requests were intended to provide regulators with necessary information to understand the operations of the RRGs providing coverage in their states. Further, the regulator stated that the information requested is often the same information provided to the domiciliary state regulator and that domiciliary regulators may be slow to send the information or sometimes may not provide it.
Two state insurance regulators said that sometimes the information requested is subject to a confidentiality agreement between the state and the RRG, which makes it challenging for regulators to share information. To alleviate this issue, one state insurance regulator suggested developing a mechanism that would allow for a central repository of RRG financial data for information-sharing purposes.

States and RRGs also have disagreed about registration and approval processes. While some states require certain information in order to approve RRGs' registrations, RRG representatives with whom we spoke said that LRRA does not require RRGs to go through a regulatory review and approval process by state regulators to conduct business in nondomiciliary states. In 2009, the Risk Retention Reporter surveyed captive managers representing 260 RRGs to determine whether nondomiciliary states were "encroaching" on LRRA preemptions. In the 2009 survey, 44 percent of RRGs responded that states made operation contingent upon regulatory review and approval, while 56 percent found that states did not. Also in the 2009 survey, 47 percent of respondents said they were subject to "impermissible" requests for information, while 53 percent said that they were not subject to such requests.

RRG representatives with whom we spoke said that even after completing the registration process for some nondomiciliary states, the RRG still may not be recognized as registered, or such recognition may take several years. For example, according to an RRG representative with whom we spoke, an RRG sent a letter to a nondomiciliary state in May 2006 with notification of its intent to do business. The RRG did not receive a letter approving its registration until April 2008. Another RRG representative said that an RRG filed the documents required by LRRA to register in about 40 states. About one-third of the states responded affirmatively to the submissions for this RRG without any further questions.
Another one-third of states responded to the RRG with additional questions before allowing the RRG to conduct business in those states. The remaining states did not respond to the RRG's registration filings.

Some states have mandatory waiting periods before a traditional insurer, domiciled RRG, or nondomiciled RRG can begin writing business in their state. In our survey of state insurance regulators, 3 of 49 states reported having such a waiting period. However, the waiting period can be longer for traditional insurers and domiciled RRGs than for nondomiciled RRGs. For example, one state reported that its mandatory waiting period for traditional insurers and domiciled RRGs was 90 to 120 days, and 15 to 30 days for nondomiciled RRGs. Another state did not have a minimum or maximum waiting period, but traditional insurers and domiciled RRGs could not write business until their state issued a license, and the waiting period for nondomiciled RRGs to begin writing business in the state was 60 days. A third state reported no waiting period for traditional insurers and domiciled RRGs and a waiting period of 30 to 60 days for nondomiciled RRGs.

NAIC has not taken a position on the legality or utility of different state approaches to the interpretation of LRRA or state regulation of RRG activities. NAIC published its Risk Retention and Purchasing Group Handbook in 1999 to provide guidance to domiciliary states that have adopted NAIC's Model Risk Retention Act. The purpose of the handbook is to present advisory information on issues that have arisen or can be expected to arise when regulating RRGs under LRRA. For example, while the handbook provides information on the notice and registration process for nondomiciliary states, it does not take a position on different state approaches.

As a result of state regulators' varying interpretations of LRRA, registration requirements may differ across states.
As previously noted, some RRGs believe that some states have registration requirements that go beyond what is allowed under LRRA, and in some cases, these requirements have caused delays in an RRG's ability to begin operating in those states. Conversely, some state regulators believe such requirements are necessary as well as allowable under LRRA. These differing interpretations have resulted in an environment of uncertainty for both RRGs and regulators and, according to RRGs, are a potential regulatory burden not intended by LRRA.

LRRA allows nondomiciliary states to require RRGs to pay premium and other taxes but does not explicitly state whether nondomiciliary insurance regulators can or cannot charge fees. The silence of LRRA on fees has prompted state insurance regulators and RRG representatives to interpret the law differently. Both domiciliary and nondomiciliary state insurance regulators routinely charge RRGs one-time registration fees, annual renewal fees, and filing fees.

Based on our survey of state insurance regulators, the amount of fees charged varies across states and may differ based on whether the RRG is domiciled in the state. Among the respondents, most reported that they charged RRGs (domiciled and nondomiciled) initial and annual fees to operate in their state. Specifically, among the 37 states identifying specific fees charged to insurers, most reported that they charged RRGs some of the same types of fees applicable to traditional property/casualty insurers. In addition, the responses indicated that premium taxes—which LRRA specifically authorizes—vary across states and in some cases have a complex structure. For example, premium tax rates may be different for domiciled or nondomiciled RRGs or for traditional property/casualty insurers. In addition, a few states reported incremental tax rates based on the volume of premiums written by the RRG.
Further, some states implement a "retaliatory" premium tax rate—meaning a state taxes out-of-state insurance companies operating in its jurisdiction in the same way that the state's own insurance companies are taxed by other states.

A majority of RRG representatives with whom we spoke said that the varying fees other than premium taxes that nondomiciliary states charged RRGs were expensive and a financial burden, and were also inconsistent with LRRA. For example, one RRG representative said that the insurer, which operates in 50 states and the District of Columbia with total national premiums of $124 million, paid in excess of $500,000 in combined state fees to conduct business outside its domiciliary state. A smaller RRG that wrote premiums of about $1 million said it paid $6,000 to $7,000 in additional fees. Three RRG representatives said that their RRGs often "pay fees under protest," while other RRG representatives said that they often paid the fees because paying was less expensive than litigation against the states.

RRGs have challenged requirements established by nondomiciliary states that RRGs believe are preempted, and therefore not permitted, by LRRA. For example, in National Risk Retention Association v. Brown, a U.S. district court found that LRRA does not authorize a nondomiciliary state to require RRGs domiciled in another state to pay annual, application, or policy form review fees as part of registration or examination requirements before being allowed to do business in that state. However, the court did not hold that all fees nondomiciliary states charged were necessarily prohibited, but that the types of fees charged in that case were broader than those allowed by the registration and examination requirements enumerated in LRRA. In Attorneys' Liability Assurance Society, Inc. v. Fitzgerald, the court also addressed the issue of fees.
In that case, a state statute required nondomiciled RRGs to pay a fee of a certain percentage of their business written in that state. The court held that such a fee was not permitted, as LRRA permits only taxes by nondomiciliary states, and such a fee was not considered a tax. The fee in this case was to be used for regulatory purposes only, and therefore was considered an impermissible attempt to regulate an RRG by a nondomiciliary state.

As a result of differing interpretations of LRRA, fee structures vary across states. While some RRGs believe some of these fees go beyond what is allowed by LRRA, state regulators believe these fees are permissible. While the impact on RRGs of fees charged in some states is not clear, several RRG industry participants with whom we spoke said that fees may be more challenging for smaller RRGs and RRGs operating in multiple states. In addition, this variation of fees across states also contributes to the uncertainty under which RRGs and state regulators operate.

LRRA allows RRGs to provide commercial liability insurance and provides a general definition of liability. However, beyond its general definition, LRRA is silent on the specific types of liability insurance that RRGs can provide, which has resulted in differences of interpretation by RRGs and state insurance regulators about the types of liability coverage permitted under LRRA. In our survey of state insurance regulators, 6 of 49 regulators responded that they had between one and five differences of interpretation with other state insurance regulators about the definition of commercial liability insurance in the last 24 months. One regulator reported more than 10 differences. In our interviews, five regulators said they believed that insurance lines such as contractual liability (for example, vehicle service or builder warranties) and stop-loss coverage were not permitted under LRRA.
In some cases, nondomiciliary state insurance regulators have not allowed RRGs to provide insurance in their state that they believe does not fit the definition of liability under LRRA. For example, one nondomiciliary state regulator said it denied the registration of an RRG that planned to offer contractual liability insurance. In addition, an RRG representative reported its registration application was denied in five states because regulators did not believe contractual liability coverage fell within the definition of liability in LRRA. Further, one domiciliary state insurance regulator we interviewed said the state believed contractual liability coverage was permitted under LRRA; however, the state generally did not allow this coverage to be offered in the state to "avoid the politics of the issue." This regulator, along with three other regulators with whom we spoke, said that the RRG industry needed a clearer definition of contractual liability or the types of coverage permissible under LRRA.

Differences in interpretation of the types of coverage permitted under LRRA have led to litigation between states and RRGs. States and federal courts also have differed in their interpretations. For example, in Auto Dealers RRG v. Steve Poizner, an RRG provided stop-loss insurance that covered liability by its members, employees of California automobile dealers that maintained self-funded employee benefit plans. The California insurance office issued a cease-and-desist order because it believed that the RRG was providing health insurance, not liability insurance as defined by LRRA. The RRG challenged the California insurance office's cease-and-desist order in federal court, and the court issued a preliminary injunction blocking the cease-and-desist order.
However, the court never decided the case on its merits—that is, the court never decided whether the RRG was issuing valid liability insurance policies—because the RRG decided to stop pursuing the case and instead stopped issuing policies in California. In Attorneys' Liability Assurance Society, Inc. v. Fitzgerald (discussed previously), the court held that LRRA permitted an RRG to cover liability of its members for wrongful employment practices. The court held that while RRGs specifically were not to cover workers' compensation, the types of coverage provided by the RRG at issue in the case were permissible under the broad scope of LRRA.

Federal courts have rendered varying decisions on what constitutes prohibited discrimination against RRGs under LRRA's provisions on state financial responsibility requirements. These financial responsibility requirements consist of state or local provisions that establish conditions for obtaining a license or undertaking certain activities. For example, many states require that anyone registering a motor vehicle demonstrate proof of financial responsibility (show that the owner of the vehicle has financial means sufficient to compensate any injured persons). State laws may provide that financial responsibility can be shown by coverage in a liability insurance policy issued by an insurer that is regulated by the state and protected by the state's guaranty fund. LRRA does not preempt state authority to apply financial responsibility standards as long as those standards do not discriminate against RRGs within the meaning of LRRA. For example, in National Warranty Insurance Company RRG v. Greenfield, the U.S. Court of Appeals for the Ninth Circuit held that LRRA preempted provisions of the Oregon Service Contract Act that required automobile dealers to obtain liability insurance from an insurer that was a member of the Oregon Insurance Guaranty Association.
Because RRGs do not participate in state guaranty associations, the Oregon law effectively excluded RRGs from providing liability insurance to automobile dealers. Thus, the court held that Oregon could not exclude coverage from all RRGs because that would discriminate against RRGs. However, Oregon could exclude coverage from a particular RRG if it could show that the RRG was financially unsound or otherwise dangerous to those who relied on insurance purchased pursuant to the Oregon Service Contract Act.

In another case, Charter Risk Retention Group Insurance Company v. Rolka, a U.S. district court noted similarly that discrimination against RRGs as a whole is prohibited under LRRA. However, state laws relating specifically to financial responsibility requirements could be valid if they caused a particular RRG to be excluded because it lacked acceptable evidence of financial responsibility for a state license or permit, as long as they did not discriminate against RRGs as a whole.

Other courts have interpreted the provisions of LRRA prohibiting discrimination against RRGs differently. In Ophthalmic Mutual Insurance Company v. Musser, the U.S. Court of Appeals for the Seventh Circuit affirmed a district court decision that LRRA does not preempt a Wisconsin requirement that health providers offer proof of financial responsibility to do business in the state by obtaining professional liability insurance coverage from insurers authorized to do business in Wisconsin, although the requirement effectively excludes nondomiciliary RRGs from operating in that state. The court found that the challenged statute neither impermissibly regulated RRGs nor was intended to discriminate against them, and therefore was not preempted by LRRA. The court concluded that the Wisconsin requirement fit within the saving clause of LRRA providing that states are not bound by LRRA when crafting statutes concerning financial responsibility, as long as the statutes are not intended to discriminate against RRGs.
Similarly, in Mears Transport Group v. State, the U.S. Court of Appeals for the Eleventh Circuit held that LRRA did not preempt a Florida law requiring owners or operators of for-hire passenger transportation vehicles to provide evidence of financial responsibility by having a motor vehicle liability policy issued by an insurer that is a member of the Florida Insurance Guaranty Association. Although RRGs effectively are disallowed from doing business in Florida under this law, as they are not permitted to be members of guaranty associations under LRRA, the court held that the Florida law does not single out RRGs for exclusion, as RRGs are one of many types of insurance carriers ineligible for membership in the guaranty association. Therefore, the court held that the Florida law was not intended to be discriminatory. Since the Florida law is "precisely the type of state financial responsibility law that Congress expressly exempted from the preemption provisions of LRRA," according to the court, it is allowed and not preempted by LRRA.

Different interpretations of the types of coverage permitted under LRRA have resulted in the inability of some RRGs to provide coverage in certain states. And, in cases in which RRGs choose to pursue legal action when states deny their ability to provide that coverage, the RRGs may incur substantial legal fees. As previously noted, different interpretations by federal courts on issues such as permissible coverage types and what constitutes discrimination under LRRA can further contribute to an uncertain regulatory environment for RRGs and state insurance regulators.

Because LRRA does not comprehensively address the capitalization or solvency requirements of RRGs, states can develop their own statutory minimum capital and surplus requirements for RRGs domiciled in their state.
According to some state insurance regulators with whom we spoke, these requirements are based on the type of insurance coverage offered, the volume of business the RRG intends to write, and other factors. Two nondomiciliary state insurance regulators with whom we spoke indicated concerns about the capitalization and solvency of RRGs operating in their states, and two regulators supported increasing the minimum capital requirement. In addition, some states allow RRGs, unlike traditional insurers, to meet and maintain their minimum capital and surplus requirements in the form of an irrevocable letter of credit rather than cash. Data from NAIC show that as of June 2010, 62 RRGs were capitalized with letters of credit.

Although RRGs write most of their business outside their state of domicile, nondomiciliary state insurance regulators must rely on domiciliary regulators to establish minimum capitalization and solvency requirements for their domiciled RRGs—and to ensure that the requirements are commensurate with the type of coverage provided and the volume of premiums written. Some RRG representatives with whom we spoke believed that there is a lack of confidence in the RRG regulatory environment or that some states prefer their own authority to regulate RRGs writing business in their state. Two state insurance regulators and four RRG representatives said they believed that some of these issues will be resolved through NAIC's efforts to develop uniform, baseline standards for the regulation of RRGs.

Our 2005 report found that the wide variance in solvency regulation among domiciliary states, along with the growth of the RRG industry, increased the potential for future solvency risks. In response to recommendations from our 2005 report to provide a more uniform regulatory environment for RRGs, NAIC revised its accreditation standards to include standards for the way in which states regulate RRG solvency. These new standards went into effect on January 1, 2011.
NAIC also began to address our recommendations to develop corporate governance standards concerning ownership and operational issues within RRGs. Initial discussions in 2005 led to the development of draft corporate governance standards by 2007, and later in 2010 NAIC working groups initiated steps toward integrating these standards into the RRG oversight process. In addition, NAIC has started the process to integrate corporate governance standards into its accreditation standards, so that states would be required to review RRGs' corporate governance standards to be accredited. The groups' discussions were open to interested parties, including RRG representatives. For instance, the National Risk Retention Association (NRRA) told us it actively participated in NAIC working groups.

The revisions to the financial accreditation standards for state insurance departments' oversight of domiciled RRGs more closely align the standards applied to the oversight of RRGs with those applied to traditional insurers. The revisions affect key areas of RRGs' financial solvency oversight, including revising accounting requirements for annual financial reporting and making financial examinations risk-focused.

Among the recent revisions to NAIC's accreditation standards is a new requirement, effective January 1, 2011, that applies to RRGs that do not file their annual financial statements using SAP: these statements must contain a reconciliation to SAP. According to NAIC, in 2010, 72 RRGs reported filing their financial statements using SAP and 177 reported using another accounting principle such as GAAP. The reconciliation is designed to show regulators how the accounting principles used in the financial statements result in figures different from those that SAP would have produced. RRGs can include this reconciliation in the footnotes to the financial statement.
This new standard aims to address some of the challenges identified in our 2005 report that arose from the use of different accounting principles, such as the difficulties in assessing the financial condition of RRGs reported by some nondomiciliary state insurance regulators more accustomed to SAP. The new standards also move financial reporting requirements for RRGs closer to those of traditional insurers. Our survey responses from state insurance regulators showed that 32 state regulators reported requiring SAP for financial reporting, 14 reported requiring GAAP or a modified version of GAAP, and 3 reported allowing a choice of accounting principles.

Financial reporting practices for RRGs still vary, and the choice of accounting method can produce different conclusions about a company's financial strength. NAIC analysts continue to report that allowing financial statements using different accounting principles, even when reconciled to SAP in the footnotes, diminishes the usefulness of their underlying data and analysis tools, because the tools were designed around data extracted from financial statements based on SAP. Statements filed using other accounting principles can produce distorted results when run through traditional computerized analysis tools. As a result, NAIC must then revise the analyses to produce information useful to state regulators, which requires more staff resources.

In prior reports we have noted that NAIC's solvency analysis is an important supplement to the overall solvency monitoring performed by states and can help states focus their examination resources on potentially troubled companies, including flagging financial ratios that are outside the usual range for additional regulatory attention. Further, the choice of accounting method can have important repercussions for certain RRGs.
For example, representatives of two RRGs with whom we spoke reported letters of credit to be critical for some RRGs to meet minimum capitalization requirements; as a result, these RRGs often preferred to file their financial statements using GAAP, under which letters of credit can, in some states, improve the RRG's appearance of financial solvency.

The revised accreditation standards also require all RRGs to have risk-focused examinations in an effort to implement more uniform baseline standards for RRG regulation, applicable to all financial examinations of RRGs commencing on or after January 1, 2011. Risk-focused examinations emphasize reviews of higher-risk areas and tend to be more specialized and tailored to individual companies. Risk-focused examinations are already a regulatory requirement for traditional insurers. Nondomiciliary states have the right to review the results of these examinations for RRGs.

Three representatives of RRGs with whom we spoke supported the move to risk-focused examinations because they believed more uniform regulatory activities among domiciliary states would result in more trust among state regulators and ultimately would benefit the RRG industry. However, six representatives also acknowledged some potential challenges in implementing risk-focused examinations for some RRGs, particularly the smaller ones. For example, they said it could increase financial costs and regulatory burden for these RRGs because state regulators might need to hire more specialized auditors for more detailed reviews and pass on the associated costs to the RRGs in the form of examination fees. NRRA also expressed concern about the efficiency and effectiveness of risk-focused examinations for small liability insurance companies, which compose the majority of RRGs.
NRRA characterized the impact on small RRGs as excessively expensive without yielding commensurate benefit, and held that implementing risk-focused examinations for small RRGs would run counter to the intent of LRRA. In its letter to NAIC, NRRA questioned the cost-effectiveness of the more rigorous examinations for certain RRGs based on characteristics such as the RRG's size, its impact in nondomiciliary states, and the structure of its membership. Four state insurance regulators with whom we spoke also said that requiring risk-focused examinations might not be an efficient use of resources, particularly for small RRGs, which represent the majority of the RRG population.

Three state insurance departments we interviewed reported having already implemented risk-focused examinations for their domiciled RRGs. Based on its experience conducting risk-focused examinations, one domiciliary state regulator recommended that criteria be used to determine the efficiency and effectiveness of applying a risk-focused examination to an RRG. For example, the regulator recommended that risk-focused examinations should be required for RRGs with more than $10 million in direct written premiums, owned and operated by a group of shareholders with unrestricted membership, and registered to operate in at least 15 states. Alternatively, the regulator suggested leaving it to the discretion of the domiciliary state regulator to decide whether the risk-focused approach would be the most efficient approach to oversee a particular RRG.

According to NAIC officials, the possibility of exempting certain types of RRGs from the risk-focused examination requirement was considered in working groups. However, the officials also expressed concern about whether alternative examinations would qualify as full-scope examinations in accordance with NAIC's guidance on examinations as outlined in the Financial Condition Examiners Handbook.
The guidance requires RRGs to undergo full-scope examinations at least once every 5 years, or more frequently if the respective state law so requires. NAIC decided that the risk-focused examination process was flexible enough to allow examiners to tailor examinations to fit the unique characteristics of RRGs.

NAIC's risk-based capital (RBC) system was created to provide a capital adequacy standard for traditional insurers that creates a financial safety net, is uniform among the states, and provides regulatory authority for timely action. The RBC formulas can be technical and involve a number of components. Each of the primary insurance types—such as property/casualty, life, or health—has a separate RBC formula that emphasizes the material risks common for that particular insurance type. Regulatory actions may be triggered by the RBC calculation for an insurer, and actions may include requiring the insurance company to issue comprehensive financial plans, issuing corrective orders, or authorizing the takeover of the insurer.

NAIC officials said that they are pursuing the use of RBC calculations in the oversight of RRGs as part of the accreditation process. While regulators may voluntarily include RBC calculations in the financial examinations of RRGs, these calculations are not specifically required. According to NAIC officials, it is expected that RBC will be incorporated into the accreditation standards. If incorporated into the accreditation standards, regulators would be expected to incorporate RBC calculations into their broader financial analyses to determine whether any actions would be necessary, although, unlike for traditional insurers, the RBC calculations for RRGs would not automatically trigger regulatory actions.
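The tiered trigger structure described above for traditional insurers can be sketched as a simple classification of the ratio of an insurer's total adjusted capital to its authorized control level (ACL) RBC. The threshold percentages below are the commonly cited NAIC action levels for traditional insurers; the function name and the dollar figures are illustrative, not part of any NAIC tool, and actual regulatory treatment depends on the applicable RBC formula and state law.

```python
def rbc_action_level(total_adjusted_capital: float, acl_rbc: float) -> str:
    """Classify an insurer against the commonly cited NAIC RBC action levels.

    Thresholds are expressed as the ratio of total adjusted capital to
    Authorized Control Level (ACL) RBC. Inputs here are hypothetical.
    """
    ratio = total_adjusted_capital / acl_rbc
    if ratio >= 2.0:
        return "No action"                 # capital at least 200% of ACL RBC
    if ratio >= 1.5:
        return "Company Action Level"      # insurer files a comprehensive financial plan
    if ratio >= 1.0:
        return "Regulatory Action Level"   # regulator may issue a corrective order
    if ratio >= 0.7:
        return "Authorized Control Level"  # regulator authorized to take control
    return "Mandatory Control Level"       # regulator required to take control

# A hypothetical insurer with $12 million of total adjusted capital and
# $5 million of ACL RBC has a ratio of 2.4, placing it in the "No action" band.
print(rbc_action_level(12_000_000, 5_000_000))
```

As the section notes, even if RBC calculations are incorporated into the accreditation standards for RRGs, such a result would feed into a regulator's broader financial analysis rather than automatically triggering these actions.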
While five RRG representatives we interviewed generally supported NAIC's revisions to the accreditation standards for the RRG industry, three representatives expressed some concern about how meaningful an RBC analysis would be as a requirement in RRG oversight. For example, they said that the use of RBC for small RRGs, which tend to use GAAP accounting and rely more heavily on letters of credit to meet their capitalization requirements, might not be useful. Five state insurance regulators we interviewed were also unsure of the usefulness of incorporating RBC calculations into the accreditation standards, particularly in situations in which otherwise healthy RRGs could fare poorly when RBC calculations were applied. An actuarial expert we interviewed also expressed concern that an RBC requirement could lead regulators to overemphasize the RBC figures and place undue pressure on otherwise sound RRGs to increase capital.

According to an NAIC working group analysis, an RBC formula using figures based on GAAP could produce different numbers than RBC calculations using figures based on SAP, potentially changing the picture of an RRG's financial condition. However, the working group also said that using figures determined under GAAP might not unreasonably alter the RBC conclusions for most RRGs and still could be meaningful.

In response to our 2005 recommendations to establish minimum corporate governance standards for the RRG industry, NAIC developed such standards for RRGs but has not yet implemented them through a model act or through its accreditation standards. NAIC officials reported that they expect these corporate governance standards to be incorporated into the Model Risk Retention Act by the end of 2011. Further, the officials said that in 2012 they will consider adopting corporate governance standards as part of the accreditation standards.
An RRG is often operated by a management company or another service provider that supplies key services. However, the potential for abuse arises if the interests of a management company are not aligned with the interests of the RRG insureds in achieving long-term solvency and obtaining self-insurance at an affordable price. In our 2005 report, we found behavior suggesting that management companies and affiliated service providers promoted their own interests at the expense of the RRG insureds in 10 of the 16 cases of RRG failures we examined. LRRA includes no provisions for governance controls that could help mitigate the risk to RRG insureds from potential abuses by other parties, such as management companies, should those parties choose not to operate in the insureds’ best interest. In response to GAO’s recommendations, NAIC’s working groups have included corporate governance standards as part of their efforts to develop uniform baseline standards for RRGs. NAIC first adopted corporate governance standards for RRGs in June 2007 as separate, stand-alone guidance that was not incorporated into the accreditation standards. As of October 2011, the revisions to the Model Risk Retention Act that include corporate governance standards had been reviewed, but not yet approved, by the NAIC member states; in November 2011, the member states approved the revisions and NAIC adopted the corporate governance standards into the Model Risk Retention Act. However, corporate governance standards are not yet part of the accreditation standards, and NAIC officials said that they will not begin discussions on adopting these standards into the accreditation requirements until 2012. Three state insurance regulators with whom we spoke expressed support for corporate governance standards as a requirement for RRGs because they felt the standards would improve the transparency of RRG management. 
While five regulators generally did not think implementing corporate governance standards would be burdensome for RRGs, one regulator did expect that some RRGs, depending on their size, could find the implementation of some standards, such as the requirement for an audit committee, to be a challenge. Representatives of two large RRGs with whom we spoke supported corporate governance standards as good business practice. However, four representatives of RRGs also expressed concern about the cost of implementing these standards for smaller RRGs, particularly those without their own internal counsel. Recent federal legislative proposals to amend LRRA, if passed, would offer new options to RRGs. One proposed change would expand the type of insurance RRGs may provide to include commercial property coverage. RRG representatives with whom we spoke generally favored amending LRRA to allow RRGs to provide commercial property insurance coverage. For example, one representative said the difference in risk profiles between commercial property coverage and commercial liability coverage presents a potential opportunity for RRGs to manage their risks more strategically. In addition, six RRG representatives we interviewed felt that allowing commercial property coverage would remove restrictions on providing insurance products that could be a natural extension of their core line of business. For example, an RRG that offers professional liability coverage to dentists currently cannot underwrite coverage for dental equipment. Similarly, one representative of an RRG offering commercial liability insurance products to the construction business said that the RRG could not offer property insurance on the same homes whose construction it already covered. 
Another RRG representative said their clients would like the option to bundle their property coverage with the wide range of specialized insurance products they purchase from the RRG, for both convenience and cost-effectiveness. Eight RRG representatives we interviewed were concerned that some RRGs entering the commercial property market might not have adequate capital to cover the potentially severe losses that are a part of that line of coverage. Four RRG representatives also said that they would expect the domiciliary state regulator to review any changes to an RRG’s business plan to ensure that it had an appropriate capital base for its underwriting coverage and risk profile. Ten regulators with whom we spoke expressed concerns about RRGs entering the commercial property insurance market because of the potential risks to owner/insureds and consumers. For example, six regulators expressed concern that if an RRG was unable to pay the potentially severe losses associated with some lines of property insurance, the RRG members could be at financial risk. RRGs cannot participate in state guaranty funds that otherwise could help pay losses in such cases. In our survey of state insurance regulators, we asked whether they thought LRRA should be amended to enable RRGs to provide commercial property insurance. Among the responses, 32 regulators did not think LRRA should be so amended, while 5 thought LRRA should be amended to allow RRGs to provide property insurance. Three of the five regulators that favored amending LRRA in this way were from the 10 states with the highest RRG gross premiums in 2010. The proposed legislation also would grant authority to a federal entity, such as the recently created Federal Insurance Office in the Department of the Treasury, to oversee state compliance with the regulatory preemptions in LRRA. For example, the office would resolve disagreements about whether LRRA preempts any regulatory actions by a state. 
Among the state insurance regulators we surveyed, 29 said that the federal government should not have a primary role in arbitrating disputes between state regulators and RRGs, while 6 said that the federal government should have a primary role. We also asked regulators which department or agency they thought should have this authority if the federal government were to arbitrate disputes between states and RRGs. Twenty-nine regulators responded with no opinion, while 13 regulators indicated a preference for the Federal Insurance Office and 6 regulators indicated other agencies, including the Department of Commerce. Another proposed change would have the Federal Insurance Office issue corporate governance standards for RRGs that would preempt any corporate governance standards under state laws. Five state regulators with whom we spoke favored developing an arbitration mechanism, and five regulators did not think corporate governance standards would be burdensome for RRGs to implement. While seven RRG representatives we interviewed generally supported establishing a federal arbitration mechanism as a more efficient and cost-effective way of resolving disputes, four representatives also expressed concern about potential encroachment into state regulatory activities by a federal entity. In enacting the Liability Risk Retention Act, Congress allowed RRGs to provide commercial liability insurance to RRG members and established a lead-state regulatory framework. While RRGs constitute a small portion of the total liability insurance market, the amount of premiums they wrote increased from 2004 to 2010, and the financial condition of the RRG industry generally remained profitable during this period. Based on our analysis, RRGs appear to have maintained a relatively consistent presence in the market, primarily providing coverage in niche markets such as medical professional liability insurance and other health care-related insurance lines. 
RRGs have continued to domicile in one of a few states but write most of their business in other states, highlighting the importance of LRRA’s provisions governing the rights and actions available to regulators in nondomiciliary states as well as the types of coverage allowed under LRRA. However, states have interpreted these provisions differently, due in part to LRRA’s silence on certain issues such as registration requirements, fees, and the types of insurance coverage RRGs can write, sometimes resulting in litigation between state insurance regulators and RRGs. In addition, some federal courts to which these disputes have been brought also have interpreted LRRA differently. As a result, RRGs and state insurance regulators have continued to operate in an environment with some uncertainty, potentially affecting RRGs’ operations as well as the ability of state regulators to take actions deemed necessary to protect insureds in their states. To establish a more consistent regulatory environment for the members of RRGs and their claimants, our previous report recommended the development of broad-based, uniform, baseline standards for the regulation of RRGs. NAIC has made progress addressing these concerns, including requiring accredited states to implement risk-focused examinations and risk-based capital analyses, as well as developing corporate governance standards for the RRG industry. Further, NAIC has made efforts to more closely align the accreditation standards for RRGs with those of traditional insurance companies. Because some of these standards only recently were implemented or have not yet been implemented, it is too early to evaluate their effect on the RRG industry and its regulation. 
To reduce the varying interpretations of LRRA, which have led to uncertainty and disagreements among RRGs and state insurance regulators, and at the same time continue to facilitate the formation and efficient operation of RRGs, Congress should consider clarifying certain LRRA provisions. For example, Congress could clarify whether (1) RRG registration requirements beyond those currently specified in LRRA are permitted in nondomiciliary states and (2) nondomiciliary states in which RRGs operate may charge them fees in addition to premium and other taxes. Congress also should consider providing a more specific definition of the types of insurance coverage permitted under LRRA. We requested comments on a draft of this report from the National Association of Insurance Commissioners. NAIC provided written comments, which are reproduced in full in appendix II. NAIC also provided technical comments, which we incorporated as appropriate. NAIC agreed that Congress should consider the merits of clarifying certain aspects of LRRA, in particular by providing more specific definitions of the types of insurance coverage permitted under LRRA. NAIC further recommended that the definition of “commercial liability insurance” be included for consideration, since disagreements concerning the scope of this definition have led to disputes between the states and RRGs that, without further clarification, may continue. NAIC also provided several additional comments.
- NAIC provided clarification regarding the status of its risk-based capital (RBC) models and corporate governance standards as they relate to NAIC’s accreditation standards for RRGs, which we incorporated into the draft.
- NAIC expressed concern with the methodology we used to calculate the annual average ratios in figures 3 and 4, and suggested we either use an alternate methodology or more clearly describe the one we used. We added a more detailed description of our methodology to each of the figures. 
- NAIC clarified that when analyzing the ratio of premiums to policyholder surplus, whether or not a state allows a letter of credit as an admitted asset can change the results of such an analysis. We agree and added an explanatory footnote.

As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its date of issue. At that time, we will send copies of this report to the Chairman and Ranking Member of the Senate Committee on Banking, Housing and Urban Affairs; the Chairman and Ranking Member of the House Financial Services Committee; the Ranking Member of the Subcommittee on Oversight and Investigations, House Financial Services; and to the Chief Executive Officer of NAIC. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7022 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

Our objectives were to (1) describe changes in the financial condition of the risk retention group (RRG) industry from 2004–2010; (2) examine the regulatory treatment of RRGs across domiciliary and nondomiciliary states; and (3) examine changes to federal and state regulatory practices regarding RRGs since 2004. To determine the extent to which the financial condition of the RRG industry has changed since 2004, we examined previous GAO reports and various financial indicators derived from data provided by the National Association of Insurance Commissioners (NAIC) and the Risk Retention Reporter, a trade journal and data source for the industry. We interviewed representatives of two industry associations on their members’ regulatory experiences operating in domiciliary and nondomiciliary states. 
We reviewed correspondence from state insurance regulators to RRG representatives about topics such as registration processes and fees charged to RRGs. NAIC officials calculated the overall market share of RRGs in the commercial liability insurance market for each year during 2004–2010 and the overall market share of RRGs in the medical professional liability line for 2007–2010 only. We examined the amount of premiums written by RRGs and traditional property/casualty insurers for commercial liability insurance in all 50 states, the District of Columbia, the U.S. territories (American Samoa, Guam, Northern Mariana Islands, Puerto Rico, and the Virgin Islands), and Canada. To ensure data were comparable, we limited our analysis to commercial liability lines of insurance that RRGs are allowed to write. We examined and analyzed RRG industry data on financial indicators of profitability and ability to pay claims, such as policyholder surplus, return on equity, and combined ratio, for 2004–2010. To determine the number of RRGs domiciled and operating in each state, and the percentage of direct written premiums written outside the state of domicile, we analyzed information provided by NAIC. To assess the reliability of the NAIC data we received, we (1) performed electronic testing for obvious errors in accuracy and completeness and (2) worked with agency officials to identify any data concerns. When we found discrepancies, such as data that were inconsistent, we notified agency officials and worked with these officials to correct the discrepancies before conducting our analysis. We determined that the data were sufficiently reliable for the purposes of our report. To compare the concentration of RRGs by business area, we used data from 2004 and 2010 from the Risk Retention Reporter. We also obtained data from this source for the number of RRGs licensed by business area. Data from the Risk Retention Reporter were as of April 2010. 
We did not attempt to verify these data, but did interview officials of the Risk Retention Reporter to discuss their data collection methods. We determined that the data were sufficiently reliable for the purposes of our report. Overall, we used interviews, a Web-based survey, and analysis of the Liability Risk Retention Act of 1986 (LRRA), along with other available documentation, to determine potential inconsistencies in the regulatory treatment and regulatory environment of RRGs in domiciliary and nondomiciliary states. We reviewed and analyzed LRRA and its legislative history. To determine states’ rules and regulations for RRGs domiciled or operating in those states, we designed and administered a Web-based survey of state insurance regulators in all 50 states and the District of Columbia. Specifically, the survey asked about each state’s (1) requirements for RRGs domiciled in state; (2) role as a host (nondomiciliary) state regulator for RRGs operating in state; (3) applicable fees, taxes, and registration requirements; (4) regulatory experiences such as conducting examinations of, taking administrative actions against, and filing civil or criminal lawsuits against RRGs; and (5) opinions on LRRA. A copy of the questionnaire and results are available in the e-supplement to this report, GAO-12-17SP. The Web-based survey was administered from May 19, 2011, through July 25, 2011. Respondents were sent an e-mail invitation to complete the survey on a GAO Web server using a unique username and password. Throughout the data collection period, nonrespondents received reminder e-mails and telephone calls. The final response rate was 49 of the 51 jurisdictions surveyed, including the District of Columbia (96 percent). The practical difficulties of conducting any survey may introduce nonsampling errors, such as difficulties interpreting a particular question, which can introduce unwanted variability into the survey results. 
We took steps to minimize nonsampling errors by pretesting the questionnaire over the telephone in March and April 2011 with four state insurance regulators (in both domiciliary and nondomiciliary states) and with NAIC officials. We conducted pretests to make sure that the questions were clear and unbiased, the data and information were readily obtainable, and the questionnaire did not place an undue burden on respondents. We made appropriate revisions to the content and format of the questionnaire after the pretests. After the data were collected, we identified unanswered questions and inconsistencies in some responses. We conducted follow-up with the specific states by e-mail and telephone to obtain responses to unanswered survey questions and confirm the accuracy of responses to several key questions, including applicable fees, premium tax rates, waiting periods, and regulatory actions. We received a 100 percent response rate to our follow-up questions and response confirmations. While many of the questions on the 2004 and 2011 surveys are similar, slight differences in wording or question format could result in slightly different responses between the two surveys. All data analysis programs used for this report were independently verified for accuracy. Due to the wide variety of responses to some of our open-ended questions, preparing statistics and summary presentations of findings to these questions was not possible in some cases. Therefore, in some cases we provided qualitative explanations with examples of responses we received. To obtain information and opinions on the regulatory treatment of RRGs across domiciliary and nondomiciliary states, we interviewed 13 regulators from domiciliary and nondomiciliary states representing a nonstatistical sample of states selected for RRG business activity and perceived differences in their regulatory treatment of RRGs. 
The nine domiciliary states—Delaware, Florida, Hawaii, Illinois, Montana, Nevada, South Carolina, Vermont, and the District of Columbia—included eight that were among the top 10 states that domiciled the highest number of RRGs or had the highest amounts of written premiums as of December 31, 2010. For states that do not have domiciled RRGs, we identified and selected those in which RRGs were writing the highest amounts of total premiums as of year-end 2010. Those four states were California, Massachusetts, New York, and Pennsylvania. Views of other domiciliary and nondomiciliary insurance regulators were obtained through our Web-based survey. To obtain comparable data, the same topics were included in the Web-based survey and in interviews with domiciliary and nondomiciliary state insurance regulators. We also obtained information and opinions on the regulatory treatment of RRGs across states from RRG representatives. First, we conducted two discussion groups at the 2011 annual conference of a captive industry association. We coordinated with the industry association to determine which conference participants had specific knowledge of and were representatives of the RRG industry. To determine which individuals to select to participate in our discussion groups, we developed an invitation letter that the industry association e-mailed to the identified RRG industry representatives. The letter also included a questionnaire to aid in identifying the organization name, title, and industry type of the RRG representative. We received 10 completed questionnaires from conference registrants expressing interest in participating in the discussion groups. Based on the information provided in the questionnaire, we assembled discussion group volunteers into two groups: (1) RRG owner/insureds and (2) captive/RRG managers. 
For conference attendees who did not respond to the questionnaire by the deadline in the invitation but wanted to participate, we provided blank questionnaires at the registration table and before the discussion groups. Those who met the criteria for either group were allowed to participate in the discussion groups. We excluded individuals from industry associations we had previously interviewed and from state insurance regulatory agencies, as their views were captured in the GAO-administered Web survey. Second, we selected a nonstatistical sample of 11 RRGs that operate on a multistate basis and represent a variety of business areas, insurance products, domiciliary states, and a range of direct written premiums to obtain their perspectives on the regulatory treatment of RRGs across domiciliary and nondomiciliary states. We excluded RRGs that we previously interviewed and RRGs that operated only in their domiciliary state. We are not able to generalize results from this sample to the entire RRG industry. To obtain comparable data, we covered the same topics in these interviews as in the discussion groups noted above. To determine the extent to which state and federal regulatory practices affecting RRGs have changed since 2004, we reviewed regulations, guidance, and legislative and regulatory proposals and interviewed stakeholders. More specifically, we reviewed NAIC literature and guidance to state insurance departments about RRG oversight. We also interviewed NAIC officials about efforts to address recommendations from our 2005 report, including revisions to NAIC’s state accreditation process and progress with developing and implementing corporate governance standards for RRGs. We attended NAIC working group meetings concerning implementation of accreditation standards and approval of updates to the RRG Handbook and corporate governance standards for RRGs. 
We also obtained information about RRGs’ regulatory environment and views on the potential impact of NAIC’s changes to the accreditation standards from the 13 select domiciliary and nondomiciliary state insurance regulators mentioned above. In addition, we obtained information on any changes to state regulations affecting RRGs since 2004 through our Web-based survey of regulators. We interviewed an actuarial expert about the revisions to the accreditation standards. Furthermore, we obtained views from representatives of RRGs on their primary challenges and NAIC’s efforts to establish broad-based, uniform standards for the oversight of RRGs. More specifically, we spoke with the discussion group participants and representatives from the 11 select RRGs mentioned in the previous paragraph. The criteria for selecting these RRGs are described above. We excluded those RRGs we had already interviewed and RRGs that operated only in their home state. We are not able to generalize results from this sample to the entire RRG industry. We also reviewed documentation we received from RRG representatives related to their regulatory experiences and the expected impact of the revised accreditation standards. Finally, we reviewed key legislation concerning RRGs that had been introduced at the federal and state levels since 2004 to identify recent changes in laws and regulations affecting RRGs. We conducted this performance audit from October 2010 to December 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the individual named above, Patrick A. 
Ward, Assistant Director; Susan Baker; William Chatlos; Shamiah Kerney; Jill Lacey; May Lee; Marc Molino; Patricia Moye; Daniel Newman; Jasminee Persaud; and Barbara Roesmann also made major contributions to this report.
Congress authorized the creation of risk retention groups (RRG)--a group of similar businesses that creates its own insurance company to insure its risk--to increase the affordability and availability of commercial liability insurance. Through the Liability Risk Retention Act (LRRA), Congress partially preempted state insurance laws to allow RRGs licensed in one state (the domiciliary state) to operate in all other states (nondomiciliary states) with minimal additional regulation. In a 2005 report (GAO-05-536), GAO noted concerns with the adequacy of RRG regulation. This report (1) describes changes in the financial condition of the RRG industry from 2004 to 2010; (2) examines the regulatory treatment of RRGs across domiciliary and nondomiciliary states; and (3) examines changes to federal and state regulatory practices regarding RRGs since 2004. GAO analyzed RRG financial data, surveyed state insurance regulators (96 percent response rate), and interviewed RRG industry representatives. Certain indicators suggest that the financial condition of the RRG industry in aggregate generally has remained profitable. In 2003, RRGs wrote about $1.8 billion, or 1.17 percent, of commercial liability insurance premiums. In 2010, RRGs continued to account for a small percentage of the total market, writing about $2.5 billion--or about 3 percent--of commercial liability coverage. Other financial indicators--such as ratios of RRG premiums earned compared with claims paid--also suggest profitability. In addition, the number of RRGs has increased since 2004, with the most growth occurring in health care-related lines. In 2010, more than 80 percent of RRGs were domiciled in Vermont, South Carolina, the District of Columbia, Nevada, Hawaii, and Arizona, but RRGs wrote about 95 percent of their premiums outside their state of domicile. 
Evidence suggests that RRGs may choose to domicile in a particular state partly because of financial and regulatory advantages such as lower minimum capitalization requirements. RRG representatives opined that RRGs have expanded the availability of commercial liability insurance--particularly in niche markets--but differed in their opinions of whether RRGs have improved its affordability. Different interpretations of LRRA have led to varying state regulatory practices and requirements in nondomiciliary states and to disputes between state regulators and RRGs in areas such as registration requirements, fees, and types of coverage RRGs may write. For example, while some states have interpreted LRRA to permit RRGs to write contractual liability coverage, others have not, and therefore may not allow RRGs to write this coverage in their state. RRGs have challenged requirements established by nondomiciliary states that RRGs assert are not permitted by LRRA. However, courts also have differed in their interpretations of LRRA. Some regulators with whom GAO spoke indicated that their actions toward nondomiciled RRGs reflect an effort to use their limited regulatory authority to protect insureds in their states as well as to address concerns about RRG solvency. Some state regulatory practices for RRGs have changed since 2004, and federal legislation has been proposed. In 2005, GAO recommended implementation of more uniform, baseline state regulatory standards, including corporate governance standards, to better protect RRG insureds. The National Association of Insurance Commissioners (NAIC) has since revised its accreditation standards to more closely align with those for traditional insurers, which are subject to oversight in each state in which they operate. For example, all financial examinations of RRGs that commence during or after 2011 should use the risk-focused examination process. 
NAIC also has begun developing corporate governance standards that it plans to implement in the next few years. Proposed legislation would amend LRRA to allow RRGs to provide commercial property insurance and would designate a federal arbitrator to resolve disputes between RRGs and state insurance regulators. While some RRG representatives and state regulators supported this legislation, others expressed concerns about whether RRGs would be adequately capitalized to write commercial property insurance and about federal involvement in state regulation. To facilitate more consistent state implementation and help reduce the varying interpretations of LRRA, Congress should consider the merits of clarifying certain LRRA provisions regarding registration requirements, fees, and coverage. NAIC concurred with this matter for congressional consideration.
Cellular telephones, first marketed in 1983, have become one of the fastest selling consumer electronic products. By the end of 1993, over 16 million Americans were using cellular telephones, and the industry estimates that in less than a decade, over 60 million Americans will be using a cellular communications device. About one-third of all cellular telephones currently in use are hand-held portable models, which are growing in popularity. Industry forecasters predict a high demand for a new generation of personal communications devices that will offer a greater range of uses. Technology enthusiasts envision a future in which nearly all Americans will have a wireless portable communications device. Cellular telephones come in a variety of styles, but all fall into the following three general categories: car telephones, in which the telephone is installed in the vehicle and the antenna is mounted on the roof, trunk, or rear window; transportable telephones, in which the telephone body, antenna, and handset are carried in a briefcase or bag, but the handset is separated from the body and antenna for use; and portable telephones, in which a self-contained handset houses a battery and an antenna in a unit generally small enough to fit in a purse or pocket. Portable cellular telephones are the subject of this report because—unlike with car telephones and transportable telephones—their antenna is very close to the user’s head when the telephone is in use. Figure 1.1 shows some typical models of portable cellular telephones and the proximity of the antenna to the user’s head. (From left to right) Telephone A is an example of the first style of hand-held portable cellular telephone; it is characterized by a bulky body and a nonretractable antenna. It is heavier than most of the newer portable cellular telephones. 
Telephone B is an example of the “flip-style” cellular telephone; it features a mouthpiece that can be folded over the keypad and a retractable antenna for storage while not in use. Telephone C is an example of a nonflip-style telephone; it has a shorter nonretractable antenna. Telephone D is the newest style of portable cellular telephone; it is designed to transmit and receive digital signals. All devices that transmit radio signals—such as radio broadcast towers and cellular telephones—emit radio-frequency radiation. Radio-frequency radiation is electromagnetic energy emitted in the form of waves. Cellular telephones transmit voice messages by sending electronic signals from an antenna over radio waves at frequencies between 824 and 894 megahertz (MHz). These signals are a form of radio-frequency radiation. At sufficient power levels, radio-frequency radiation can heat body tissue and cause biological damage such as burns. These effects of exposure to radio-frequency radiation, called thermal effects, are immediately observable. According to the 1982 American National Standards Institute’s (ANSI) standard for radiation exposure, a nongovernment standard that some federal agencies use, devices operating at 7 watts of power or less at frequencies below 1,000 MHz will not produce immediate thermal effects. Portable cellular telephones operate on well below 7 watts of power. They use up to a maximum of 0.6 watts of power—less than the amount of power required to light a flashlight bulb. However, questions have been raised about whether long-term or frequent exposures to low levels of radio-frequency radiation have other biological effects that are delayed or not immediately observed in human cells and animals. Portable cellular telephones transmit messages to a cellular transmitter tower. More power is required to transmit a signal when the telephone is farther away from a tower. 
For example, if a caller is located at a great distance from the tower, the telephone may use the full 0.6 watts of power to transmit the signal. However, if the caller is near the tower, the telephone may only need to use about 0.2 watts of power to transmit the signal. Cellular telephones transmit either analog or digitized voice messages, depending on the type of cellular telephone used and the service available. In analog radio communication systems, messages are transmitted by modulating, or varying, either the amplitude (height) or the frequency (number of wave crests) of the radio wave. In digital communication systems, messages are transmitted as a series of digits in rapid bursts, or pulses. These are sometimes referred to as pulse-modulated signals. An advantage of digital transmission is that it increases channel capacity by allowing several users to transmit messages over the same radio wave simultaneously. As figure 1.2 shows, analog signals are continuous radio waves, while digital signals are binary—usually represented by ones and zeroes. (See app. I for additional information on these two technologies.) The next generation of cellular communications is called personal communications services. In this system, inexpensive, pocket-sized communications devices that use digital technology will deliver voice, data, and images. They will operate at higher radio frequencies (between 1,850 and 2,200 MHz) and will likely use less power to operate than the current generation of portable cellular telephones. A personal communications device carried from place to place will enable the person to be reached at any location by dialing a single telephone number. Because personal communications services devices are still under development, it is not clear whether the antenna will be in close proximity to the user’s head when the device is in use. 
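The difference between the continuous analog transmission and the pulse-modulated digital transmission described above can be sketched numerically. In the hypothetical sketch below, the sample rate and carrier frequency are scaled-down toy values chosen only for readability (actual cellular carriers operate between 824 and 894 MHz); they are illustrative assumptions, not real system parameters.

```python
import math

# Illustrative toy parameters, not real cellular values.
SAMPLE_RATE = 1000            # samples per simulated second
CARRIER_HZ = 50.0             # toy carrier frequency
PULSES_PER_SECOND = 50        # digital cellular signals pulse about 50 times per second
SAMPLES_PER_PULSE = SAMPLE_RATE // PULSES_PER_SECOND   # 20 samples per pulse interval

def analog_signal(i):
    """Analog transmission: a continuous carrier whose amplitude (height)
    is varied, or modulated, by the voice message."""
    t = i / SAMPLE_RATE
    message = math.sin(2 * math.pi * 2 * t)        # toy 2 Hz "voice" tone
    return (1.0 + 0.5 * message) * math.sin(2 * math.pi * CARRIER_HZ * t)

def pulse_gate(i):
    """Digital (pulse-modulated) transmission keys the carrier on and off in
    rapid bursts; here the carrier is on for the first half of each pulse
    interval and silent for the second half."""
    return 1.0 if (i % SAMPLES_PER_PULSE) < SAMPLES_PER_PULSE // 2 else 0.0

def digital_signal(i):
    t = i / SAMPLE_RATE
    return pulse_gate(i) * math.sin(2 * math.pi * CARRIER_HZ * t)

# The analog wave is continuous, while the pulsed wave is silent half the
# time -- idle bursts that other users' messages can occupy, which is how
# digital transmission increases channel capacity.
off_fraction = sum(1 for i in range(SAMPLE_RATE) if pulse_gate(i) == 0.0) / SAMPLE_RATE
print(f"fraction of each second the pulsed carrier is off: {off_fraction:.2f}")
```

In this toy model, half of each pulse interval is silent; real systems divide a channel's pulse intervals among several simultaneous users.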
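The 1982 ANSI exclusion noted earlier (devices operating at or below 7 watts of power and at frequencies below 1,000 MHz are presumed not to produce immediate thermal effects) amounts to a simple two-part threshold test. The following sketch is hypothetical; the function and its name are illustrative and do not reflect any agency's actual evaluation procedure.

```python
# Thresholds from the 1982 ANSI radiation-exposure standard as described
# in this report; the check itself is a hypothetical illustration.
ANSI_1982_MAX_WATTS = 7.0
ANSI_1982_MAX_FREQ_MHZ = 1000.0

def excluded_from_evaluation(power_watts: float, frequency_mhz: float) -> bool:
    """True if a device meets the 1982 ANSI low-power exclusion criteria."""
    return (power_watts <= ANSI_1982_MAX_WATTS
            and frequency_mhz < ANSI_1982_MAX_FREQ_MHZ)

# A portable cellular telephone: at most 0.6 watts, in the 824-894 MHz band.
print(excluded_from_evaluation(0.6, 894.0))    # True

# A hypothetical 10-watt transmitter at the same frequency would not qualify.
print(excluded_from_evaluation(10.0, 894.0))   # False

# Personal communications devices (1,850-2,200 MHz) fall outside this
# particular exclusion on frequency alone, even at very low power.
print(excluded_from_evaluation(0.6, 1900.0))   # False
```

The last case shows why the frequency limit matters: the emerging personal communications devices would not qualify for this 1982 exclusion regardless of how little power they use.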
Three federal agencies play a role in ensuring the safety of cellular telephones by sharing responsibility for regulating devices that emit radio-frequency radiation and protecting the public from exposure to radiation: the Food and Drug Administration (FDA), the Environmental Protection Agency (EPA), and the Federal Communications Commission (FCC). Under the Radiation Control for Health and Safety Act of 1968, as amended, FDA is responsible for establishing and carrying out a program, designed to protect public health and safety, to control radiation from electronic products. These responsibilities include (1) developing and administering performance standards for electronic products; (2) planning, conducting, coordinating, and supporting research, development, training, and operational activities to minimize the emissions of, and exposure of people to, unnecessary radiation from electronic products; and (3) developing, testing, and evaluating the effectiveness of procedures and techniques for minimizing exposure to electronic product radiation. FDA has the authority to set performance standards for electronic products if it determines that such standards are necessary for the public health and safety. In carrying out its responsibilities, FDA reviews and comments on industry research and also works with electronic product manufacturers when it receives complaints or has some concerns about a product but lacks sufficient scientific evidence to determine if a performance standard is necessary. Consistent with the principle of keeping exposure “as low as reasonably achievable,” FDA has worked with a variety of manufacturers to reduce radiation emissions. For example, FDA has worked with manufacturers of video display terminals and police radar devices to address concerns about excessive exposure to radiation and with manufacturers of electric blankets to redesign the blankets to reduce electric and magnetic fields. 
Under the Federal Radiation Council Authority, transferred to EPA by Reorganization Plan No. 3 of 1970, EPA is responsible for, among other things, advising the President on radiation matters, including providing guidance for all federal agencies on formulating protective standards on radiation exposure. Upon presidential approval of EPA’s recommendation on formulating standards, the pertinent federal agencies would be responsible for implementing the guidance. Under the National Environmental Policy Act of 1969 (NEPA), FCC is required to consider whether its actions—including actions that may lead to human exposure to radio-frequency radiation—in authorizing communications equipment significantly affect the quality of the human environment. The Chairman of the Subcommittee on Telecommunications and Finance, House Committee on Energy and Commerce, requested that we review (1) the status of scientific knowledge on the potential health risks of radio-frequency radiation emitted by portable cellular telephones and federal involvement in any related research and (2) the actions of the responsible federal agencies to ensure the safety of portable cellular telephones and similar communications devices. To assess the status of scientific knowledge on the health risks of portable cellular telephone use, we met with scientists who have conducted research on cellular telephones and visited industry, university, and government laboratories where research is taking place. We met with scientists and researchers in the field of electromagnetic radiation at the Department of Defense, EPA, FCC, FDA, and the National Academy of Sciences. (See app. II for a list of the researchers and scientists we consulted for this report.) We also obtained the opinions of many federal agencies with representation on the Committee on Interagency Radiation Research and Policy Coordination within the Executive Office of the President. 
We discussed the safety of portable cellular telephones with the president of the Bioelectromagnetics Society; the co-chairs of a subcommittee established by the Institute of Electrical and Electronics Engineers, Inc., which set the latest standard for radio-frequency radiation exposure; and a vice-president of Motorola, Inc., a leader in cellular telephone research. In addition, we met with officials from the National Council on Radiation Protection and Measurements and the Cellular Telecommunications Industry Association. We collected information on regulatory actions regarding the safety of portable cellular telephones from the responsible federal agencies. We discussed with FCC officials the actions they have taken to ensure the safe use of cellular telephones. We examined FCC’s records and rulemakings on the agency’s process for authorizing portable cellular telephones and FCC’s implementation of requirements under NEPA. We discussed with FDA officials their procedures for setting performance standards for electronic products and their plans for cellular telephones. Finally, we discussed with EPA officials, and reviewed documents on, EPA’s efforts to develop federal guidance for setting standards for human exposure to radio-frequency radiation. We conducted our review between March 1993 and October 1994 in accordance with generally accepted government auditing standards. To date, neither the federal government nor the telecommunications industry has completed any studies to determine specifically if the use of portable cellular telephones poses health risks. While a few recent studies suggest that long-term exposure to low levels of radio-frequency radiation (similar to that emitted by portable cellular telephones) may prompt interactions within and among cells and organs that could possibly lead to adverse effects, other studies do not. 
FDA and EPA agree that the research completed to date is insufficient to determine whether using portable cellular telephones presents risks to human health. The two basic sources of evidence of the relationship between a potential risk factor, such as exposure to radio-frequency radiation, and a disease are epidemiological studies (statistical studies that relate the occurrence of a disease to the characteristics of people and their environment) and laboratory studies on animals and biological tissue samples. According to FDA and the National Science Foundation, both types of research are needed to determine whether cellular telephone use poses any health risks. To date, no epidemiological studies have been conducted of human exposure to radio-frequency radiation as a result of using cellular telephones. Some recent biological and behavioral laboratory studies on animals and cell samples have provided information on the potential health effects posed by low-level exposure to radio-frequency radiation, although none has examined radiation exposure specifically from cellular telephones. FDA has questioned the interpretation, significance, or applicability of the studies’ findings to cellular telephones. According to EPA, the significance of recent research suggesting a potential for adverse health effects cannot be determined until these studies have been independently confirmed. Because of the limitations of the research, FDA and EPA agree that more research would be necessary to determine whether portable cellular telephones pose a human health risk. The following are examples of some research results that scientists say have raised questions about exposure to low-level radiation similar to that emitted by portable cellular telephones, especially pulse-modulated radiation, which is comparable to digital signals. (See app. III for more information about some of these studies and app. IV for a list of other relevant studies.) 
A University of Washington study found that rats had difficulty learning a maze exercise after 45 minutes of exposure to low-level, pulsed radio-frequency radiation near the frequencies that personal communications devices will use. The researchers concluded that exposure to low-power radio-frequency radiation appears to decrease certain chemical agents in the rodents’ central nervous system essential for spatial learning. In a 1983 study of cells from the immune system, the researchers found that the effectiveness of certain immune system cells in fighting off tumor cells was temporarily diminished after only 4 hours of exposure to low-power, pulsed radio-frequency radio signals. The researchers found that the effectiveness of the immune system cells was diminished most when the radio-frequency radiation was pulse-modulated 60 times per second, slightly more than the 50 times per second that digital cellular telephone signals “pulse.” (See app. I for information on digital signals.) In a 1991 study, the researchers found that low-power radio-frequency radiation may facilitate the development of cancer in the presence of other substances known to cause cancer. They found that when cells were exposed for 24 hours to low-level, pulsed radio-frequency radiation alone, there was no effect on the cells’ survival or transformation into tumor cells. However, when the cells were treated with a tumor-promoting chemical, exposure to radio-frequency radiation significantly enhanced the transformation of the cells into tumor cells. Although these and a few other studies suggest that exposure to low levels of radio-frequency radiation may cause effects in animals and certain cell systems, other studies do not. 
For example, in a 1993 study, researchers injected brain tumor cells into rats and exposed them to low levels of radio-frequency radiation—near the frequency that cellular telephones use—that was either continuous (as in analog technology) or pulsed 50 times per second (as in digital technology). The rats were exposed for 5 days a week until clinical signs of tumor development occurred. Researchers found no evidence that radio-frequency radiation treatment altered the course of tumor development in the rats. Several federal agencies sponsor radiation research, but none has sponsored or performed any studies on portable cellular telephones. Of 15 federal departments and agencies we contacted, only 4 had conducted, funded, or planned research on radio-frequency radiation that these agencies said may be relevant to questions about the safety of cellular telephones. These four were FDA, the National Institutes of Health’s National Cancer Institute (NCI), the Department of Commerce’s National Institute of Standards and Technology (NIST), and the Department of Defense (DOD). Only NCI has planned research that specifically focuses on portable cellular telephone use. FDA is not performing or contracting for research specifically addressing the power levels or frequencies of cellular telephones. However, FDA officials said that some research the agency supports may be relevant to safety questions about these telephones. According to officials, FDA-supported research at the Johns Hopkins Applied Physics Laboratory found that permanent damage occurred to the eyes of test animals when the animals were exposed to low-level microwave radiation. According to one of the researchers, this effect was enhanced when the test animals were treated with drugs commonly used in glaucoma treatment and exposed to radio-frequency radiation at power levels several times lower than those typically emitted by portable cellular telephones. 
In 1993, NCI launched an epidemiological study to assess the relationship between the use of cellular telephones, among other variables, and the brain cancer newly diagnosed in 800 patients. An NCI official expects this study to be completed between 1998 and 1999. In addition, NCI has planned other epidemiological studies to determine whether (1) exposure to radio-frequency radiation, among other possible risk factors, is associated with an increased risk of brain tumors, and (2) the incidence of cancer can possibly be linked with the use of portable cellular telephones. These studies involve comparing the names on lists of cellular telephone users in New York State with the names on New York’s statewide cancer registry. According to NCI, these studies should be initiated during 1995. However, it is important to note that epidemiological studies do not prove causality between two factors; they merely show that two factors, such as exposure to radio-frequency radiation and a disease such as cancer, tend to occur together. In 1990, NIST measured the amount of radiation emitted by portable police radios operated at frequencies near those used by portable cellular telephones. NIST researchers found that the strength of the electric fields emanating from the police radios exceeded the exposure levels recommended as safe under the 1982 ANSI standard. However, this study did not attempt to assess whether exposure to these electric field emissions could present risks to human health. DOD is sponsoring research into the biological effects of radio-frequency radiation but not radiation from portable cellular telephones. However, with the anticipated proliferation of new telecommunications devices, DOD supports continued work to characterize and measure the absorption and distribution of radio-frequency energy in the human body. 
The Department’s official position is that harmful effects will not occur as a result of exposure to portable cellular telephones as long as the amount of radio-frequency energy absorbed by the human body is maintained at or below permissible levels. DOD relies on the “permissible levels” recommended by the 1982 ANSI standard, which states that devices operating on 7 watts of power or less, like portable cellular telephones, are not likely to exceed permissible levels. We identified two major efforts by the cellular telephone industry to specifically address the safety of portable cellular telephones: one sponsored by Motorola, Inc., and one proposed by the Cellular Telecommunications Industry Association (CTIA), a cellular telephone industry association. In 1991, Motorola, Inc., entered into a multiyear contract with a researcher—considered by many in the scientific community to be the most eminent U.S. researcher in this area—to conduct a series of laboratory studies on radio-frequency radiation from portable cellular telephones. These studies are examining the effects of analog and digital signals from these telephones on animals and cells but do not include studies of effects on humans. Results from the animal studies are anticipated within the year. In January 1993, in response to public concern that portable cellular telephones may cause health risks, including brain cancer, CTIA announced an initiative to spend from $15 million to $25 million over the next 3 to 5 years to fund studies addressing the safety of portable cellular telephones. In May 1993, CTIA, along with other members of the cellular telephone industry, established a Science Advisory Group on Cellular Telephone Safety. The science advisory group’s planned research agenda includes multidisciplinary studies involving epidemiology, cell cultures, test animals, and genetic research. 
The research will examine the effects of exposure to analog and digital radio-frequency radiation at the power levels and frequencies that cellular telephones use and that personal communications devices will use. The research agenda also includes scientific peer review of proposed research projects by a separate board coordinated through the Harvard University Center for Risk Analysis. The chairman of the science advisory group also informed us that CTIA funds the group’s activities on a monthly basis; each month the chairman submits an estimate of costs for the coming month, and CTIA provides money for that month’s research activities. The chairman explained that the peer review board will evaluate and recommend research proposals for funding. According to the chairman, payment for peer review activities will be provided through a blind trust established by the advisory group. The chairman stated that the purpose of creating the blind trust for peer review was to provide independence. However, the science advisory group does not enjoy similar financial independence. The direct funding of the research by CTIA raises questions about the objectivity and credibility of the research effort. In September 1994, the chairman of the science advisory group told us that CTIA would consider giving up direct financial control by putting the research funds into a blind trust fund. In September 1993, FDA told the chairman of the science advisory group that the agency would like to provide appropriate support within its means to assist in ensuring that the industry-sponsored research program was successful and credible. As a regulatory agency, FDA considers that reviewing research data and commenting on it is part of its job. However, the agency is reluctant to endorse uncompleted research resulting from programs it has not helped direct. 
Although the science advisory group has sought input from federal agencies and has had informal discussions with officials at FDA and EPA, no mechanism has been established for federal participation in or comments on the research program. However, in September 1994 the advisory group’s chairman told us that he was open to any role for federal agencies to increase the acceptance and usefulness of the research program. FDA and EPA believe that there is insufficient evidence to determine whether exposure to low-level radio-frequency radiation presents a human health risk. Some recent studies have found that this radiation can produce biological effects. However, because none of these studies examined radio-frequency radiation specifically from portable cellular telephones, FDA and EPA agree that the value of the studies’ findings is limited in determining whether using portable cellular telephones poses risks to human health. FDA and National Science Foundation officials said that both epidemiological and laboratory research are needed to determine whether portable cellular telephones present risks to users. The federal government and private industry are beginning to undertake some of this needed research. NCI (the only federal agency performing research on the safety of cellular telephones) has started an epidemiological study to determine if there is a relationship between cellular telephone use and cancer. But epidemiological studies alone cannot conclusively establish whether using portable cellular telephones poses health risks. Motorola is funding a series of laboratory studies on the effects of radiation from portable cellular telephones on animals and cells but no epidemiological studies observing the effects on humans. The cellular telephone industry is sponsoring a research initiative through a science advisory board that includes both types of research that federal officials say are needed. 
However, direct funding of this research by CTIA—an industry association—raises questions about the independence and objectivity of the science advisory group’s planned research program. The chairman of the science advisory group has had informal discussions with federal agencies and has expressed a willingness to accept a greater federal role to increase the independence and objectivity of the research. Such a role could also increase the usefulness of the research results to federal regulators. To date, neither the science advisory group nor any of the federal agencies have attempted to define what this role might entail. Given the current state of scientific knowledge, FDA and EPA have not had a basis for taking regulatory actions on portable cellular telephones. However, FDA, EPA, and FCC are undertaking or considering limited activities that could affect the use of such telephones. FDA is working with cellular telephone manufacturers on possible design changes for these telephones and improved instructions for use. EPA is sponsoring a study on the status of research on the effects of exposure to low levels of radio-frequency radiation to determine if protective guidance is needed on exposure to radiation from devices such as cellular telephones. FCC has proposed adopting the revised ANSI standard in its environmental rules and, as a result, may no longer exempt portable cellular telephones from routine radiation evaluation. An FDA official told us that FDA has primary responsibility for responding if communications devices, such as portable cellular telephones, pose a health risk. Although FDA says there is no evidence that cellular telephones are harmful, an FDA official stated that recent research on exposure to low-level radio-frequency radiation from other sources has the agency concerned about the possible adverse health effects of this type of radiation. 
In carrying out its responsibility for controlling public exposure to radiation from electronic products, FDA follows the principle that exposure to radiation should be kept to a level as low as can reasonably be achieved. In early 1993, following allegations about the safety of portable cellular telephones, FDA met with the cellular telephone industry, including industry associations and cellular telephone manufacturers. The purpose of these meetings was to discuss potential problems and their solutions. As a result of these meetings, cellular telephone manufacturers agreed to examine all practical routes to reduce exposure, including possibly redesigning the telephones and providing users with adequate instructions for proper use. The goal of redesigning these telephones would be to change the placement of the antenna so that this source of radiation is farther from the user’s head. According to an FDA official, instructions for use should include practical information on how users can limit their exposure. Although the industry representatives who met with FDA agreed to set up committees to work on these topics, as of October 1994, they had not reported back to FDA on the status of their efforts. Meanwhile, FDA says that if individuals are concerned about avoiding even potential risks, they could consider holding lengthy conversations on conventional telephones and reserving the hand-held cellular telephones for shorter conversations or for situations in which conventional telephones are not available. FDA does not believe it is justified in setting performance standards for cellular telephones at this time. The formal process for setting performance standards for electronic products is time-consuming and expensive, and FDA will not set them without clear scientific evidence that an electronic product poses a hazard to human health. FDA does not have such evidence for portable cellular telephones. 
In addition, an FDA official stated that the agency has received no reports through its complaint process of radiation injuries resulting from the use of cellular telephones. FDA officials said that the agency has invested its limited research resources into higher-priority work, such as medical devices that expose individuals to much higher levels of radio-frequency radiation than cellular telephones. EPA is responsible for advising the President on radiation matters, including developing federal guidance on radiation protection that can be used by other federal regulatory agencies. For example, FCC could use such guidance in approving communications equipment and FDA in determining if performance standards are needed for devices like portable cellular telephones. EPA officials told us that the agency expects to issue, by the end of 1994, recommended maximum permissible levels of exposure to radio-frequency radiation to protect people from immediate thermal effects. However, EPA officials also told us that because research on exposure to lower levels of radio-frequency radiation is inconclusive, the agency cannot issue any guidance for these exposures. To gain a better understanding of the status of research on the effects of long-term exposure to low levels of radiation and future research needs, EPA has funded a 2-year study by the National Council on Radiation Protection and Measurements, a nonprofit corporation chartered by the Congress. EPA officials expect this work to provide information that will be helpful for understanding whether the agency needs to provide protective guidance on exposure to low levels of radiation. EPA’s recent activities on radiation guidance followed a 1992 report by the agency’s Science Advisory Board. The board recommended that EPA complete a process to provide guidance that it began in the late 1970s. 
As part of this process, EPA requested comments on four alternative approaches for controlling public exposure to radio-frequency radiation. However, EPA discontinued its efforts to issue guidance in 1988 when it did not obtain agreement from federal agencies on which approach it should take. FCC is responsible for regulating cellular telephone service and authorizing the equipment used in providing that service. NEPA requires all federal agencies to consider whether their actions significantly affect the human environment. In carrying out its responsibilities under NEPA, FCC formulated environmental rules that require the Commission to consider whether its actions—including actions that may lead to human exposure to radio-frequency radiation—significantly affect the quality of the human environment. FCC does not consider itself a health agency with the expertise to determine what levels of radiation exposure are unsafe. Instead, it relies on health and radiation expertise found in other federal agencies, such as FDA and EPA. According to an FCC official, FCC considers FDA the principal agency responsible for determining the health implications of using specific devices such as cellular telephones and for issuing performance standards. Similarly, FCC would prefer to rely on EPA for information on exposure to radio-frequency radiation. Because there are no federal guidelines on radiation exposure, in 1985 FCC incorporated the 1982 ANSI exposure standard into its environmental rules. This standard applies to higher-powered transmitting equipment, such as radio and television broadcast towers, but excludes devices that operate on or below 7 watts of power at frequencies below 1,000 MHz. FCC does not require routine environmental evaluation of portable cellular telephones in authorizing their use because they operate on less than 1 watt of power. 
However, as a safeguard, FCC’s rules permit any interested party, including FCC, to move that the exempted equipment be required to undergo environmental evaluation. Thus far, no such motion has been made about portable cellular telephones. In addition, the Commission considers portable cellular telephones safe under this standard. (See app. V for more information on the evolution of FCC’s environmental rules and rules on cellular telephone service.) In 1993, FCC proposed adopting the revised version of the ANSI standard to update its environmental rules. According to an FCC official, the revised version is more stringent than the older version, and, for the first time since FCC began regulating cellular telephone service, portable cellular telephones could be subject to environmental evaluation. Until this new standard is adopted, cellular telephones will continue to be excluded from routine environmental evaluation for public exposure to radiation. In contrast, FCC has already decided that it will require certain emerging hand-held personal communications services devices to comply with the revised ANSI standard, pending its adoption of this standard in its environmental rules. FDA, EPA, and FCC are undertaking limited activities that may affect the use of portable cellular telephones. Without additional scientific information, FDA and EPA have no basis for taking regulatory actions. The federal and industry research discussed in chapter 2 could provide information that would help these agencies determine whether any regulatory actions are needed. We recommend that the Commissioner of the Food and Drug Administration and the Administrator of the Environmental Protection Agency, in coordination with the Chairman of the Federal Communications Commission, work with the industry’s Science Advisory Group on Cellular Telephone Safety to maximize the usefulness, independence, and objectivity of its planned research initiative. 
This effort could include participating in the selection of research proposals to determine whether they meet federal research standards and reviewing research results. This effort would be in addition to ongoing and planned federal research. As requested, we did not obtain written agency comments on a draft of this report. However, we discussed the information in the report with officials from FDA’s Office of Science and Technology, including the Chief of the Radiation Biology Branch; EPA’s Office of Radiation and Indoor Air, including the Electromagnetic Fields Team Leader in the Radiation Studies Division; and FCC’s Office of Engineering and Technology, including the Chief Engineer. These officials generally agreed that the information was accurate. The FDA and EPA officials agreed that the current state of scientific knowledge is insufficient to determine whether cellular telephones pose health risks. The agencies assisted us in characterizing the scientific studies and brought us up to date on their most recent activities related to radio-frequency radiation exposure and cellular telephones. The FDA and EPA officials said they plan to review the industry’s completed research. We also asked officials from the National Cancer Institute’s Division of Cancer Etiology, the National Institute of Standards and Technology’s Management and Organization Division, and the Department of Defense’s Office of the Undersecretary of Defense for Acquisitions and Technology to review the information in the sections of this report pertaining to their agency. These officials generally agreed that the information provided in this report was accurate, and we incorporated their comments where appropriate.
Pursuant to a congressional request, GAO reviewed the biological effects of radio-frequency radiation emitted by portable cellular telephones and the federal government's regulatory actions to ensure the safety of these telephones. GAO found that: (1) no research has been completed on long-term human exposure to low levels of radiation from portable cellular telephones, and research findings on exposure to other sources of low-level radio-frequency radiation are inconclusive; (2) existing research does not provide enough evidence to determine whether portable cellular telephones pose a risk to human health; (3) although the cellular telecommunications industry is planning to carry out both epidemiological and laboratory studies on the effects of portable cellular telephone use on human health, federal regulators need to ensure that these studies are carried out objectively; (4) the Food and Drug Administration (FDA) is working with cellular telephone manufacturers to minimize cellular telephone users' exposure to radiation; (5) the Environmental Protection Agency (EPA) is assessing the status of scientific knowledge on prolonged exposure to radio-frequency radiation; and (6) the Federal Communications Commission (FCC) has relied on a 1982 American National Standards Institute (ANSI) safety standard to regulate cellular telephones, but is considering adopting the revised version of the ANSI standard for equipment it approves for use.
For years, auditors have reported long-standing weaknesses in DOD’s ability to promptly pay its bills and accurately account for and record its disbursements. Numerous audit reports from our office and the DOD Inspector General have cited deficiencies in management oversight, a weak internal control environment, flawed financial management systems, complex payment processes, delinquent and inaccurate commercial and vendor payments, and lax management of DOD’s travel card programs. Those deficiencies have resulted in billions of dollars in unrecorded or improperly recorded disbursements, over- and underpayments or late payments to contractors, and fraudulent or unpaid travel card transactions. DOD’s disbursement processes are complex and error-prone. Although DFAS is responsible for providing accounting services for DOD, military service and other defense agency personnel play a key role in DOD’s disbursement process. In general, military service and defense agency personnel obligate funds for the procurement of goods and services, receive those goods and services, and forward obligation information and receiving reports to DFAS. Separate DFAS disbursing offices and accounting offices then pay the bills and match the payments to obligation information. Several military services and DOD agencies can be involved in a single disbursement, and each has differing financial policies, processes, and stand-alone, nonstandard systems. As a result, millions of disbursement transactions must be keyed and rekeyed into the vast number of systems involved in any given DOD business process. Also, transactions must be recorded using an account coding structure that can exceed 75 digits, and this coding structure often differs—in terms of the type, quantity, and format of data required—by military service. 
DFAS’s ability to match disbursements to obligation records is complicated by the fact that DOD’s numerous financial systems may contain inconsistent or missing information about the same transaction. Input errors by DFAS or service personnel and erroneous or missing obligation documents are two of the major causes of inconsistent information. For calculating and reporting performance metrics related to payment recording errors, officials from the Comptroller’s office included the following categories.

- Unmatched disbursements—Payments that were made by a DFAS disbursing office and received by a DFAS accounting office but have not yet been matched to the proper obligation.
- Negative unliquidated obligations—Payments that have been matched to and recorded against the cited obligations but which exceed the amount of those obligations.
- Intransits—Payments that have not yet been received by the DFAS accounting office for recording and matching against the corresponding obligation.
- Suspense account transactions—Payments that cannot be properly recorded because of errors or missing information (e.g., transactions that fail system edit controls because they lack proper account coding) and are therefore temporarily put in a holding account until corrections can be made.

For DOD to know how much it has spent and/or how much is still available for needed items, all transactions must be promptly and properly recorded. However, we reported as early as 1990 that DOD was unable to fully identify and resolve substantial amounts of payment recording errors. We also stated that DOD’s early reporting of these errors significantly understated the problems. For example, DFAS excluded $14.8 billion of intransits from its 1993 benchmark against which it measured and reported its progress in reducing recording problems in later years. In addition, DOD excluded suspense account transactions from its reporting of payment recording errors until as late as 1999. 
Finally, when negative unliquidated obligations, intransits, and suspense account transactions were reported, they were reported using net rather than absolute values. DFAS has overall responsibility for the payment of invoices related to goods and services supplied by commercial vendors. As part of a reorganization effort in April 2001, DFAS separated its commercial payment services into two efforts—contract pay and vendor pay. Contract pay handles invoices for formal, long-term contract instruments that are typically administered by the Defense Contract Management Agency (DCMA). These contracts tend to cover complex, multiyear purchases with high dollar values, such as major weapon systems. Payments for contracts are made from a single DFAS system— Mechanization of Contract Administration Service (MOCAS). For fiscal year 2001, DFAS disbursed about $78 billion for over 300,000 contracts managed in MOCAS. The vendor pay product line handles invoices for contracts not administered by DCMA, plus miscellaneous noncontractual payments such as utilities, uniforms/clothing, fuels, and food. Vendor pay is handled by 15 different systems throughout DFAS and, annually, DFAS personnel pay nearly 10 million vendor invoices in excess of $70 billion. In general, DOD makes vendor payments only after matching (1) a signed contractual document, such as a purchase order, (2) an obligation, (3) an invoice, and (4) a receiving report. If any one of these components is missing, such as an obligation not being entered into the payment system, payment of the invoice will be delayed. According to DOD officials, approximately 80 percent of payment delinquencies are due to the delayed receipt of receiving reports by DFAS from the military service activities. DOD implemented the current travel card program in November 1998, through a DOD task order with Bank of America. This was in response to the Travel and Transportation Reform Act of 1998 (P.L. 
105-264), which modified the existing DOD Travel Card Program by mandating that all government personnel must use the government travel card to pay official travel costs (for example, hotels, rental cars, and airfare) unless specifically exempted. The travel card can also be used for meals and incidental expenses or to obtain cash from an automatic teller machine. The intent of the travel card program was to provide increased convenience to the traveler and lower the government’s cost of travel by reducing the need for cash advances to the traveler and the administrative workload associated with processing/reconciling travel advances. DOD’s travel card program, which is serviced through Bank of America, includes both individually billed accounts and centrally billed accounts. When the travel card is submitted to a merchant, the merchant will process the charge through its banking institution, which in turn charges Bank of America. At the end of each banking cycle (once each month), Bank of America prepares a billing statement that is mailed to the cardholder (or account holder) for the amounts charged to the card. The statement also reflects all payments and credits made to the account. For both individual and centrally billed accounts, Bank of America requires that the cardholder make payment on the account in full within 30 days of the statement closing date. If the cardholder—individual or agency—does not pay the monthly billing statement in full and does not dispute the charges within 60 days of the statement closing date, the account is considered delinquent. For individually billed accounts, within 5 business days of return from travel, the cardholder is required to submit a travel voucher claiming legitimate and allowable expenses, which must be reviewed and approved by a supervisor. DOD then has 30 days in which to make reimbursement. 
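The billing timeline just described—payment due within 30 days of the statement closing date, with an unpaid, undisputed balance becoming delinquent at 60 days—can be sketched as a simple classifier. The function name and status labels below are illustrative, not Bank of America or DOD terminology.

```python
def account_status(days_since_statement_close, disputed=False):
    """Classify a travel card account using the billing rules described
    above: payment is due within 30 days of the statement closing date,
    and an unpaid, undisputed balance becomes delinquent at 60 days."""
    if disputed:
        return "disputed"      # disputed charges are not counted as delinquent
    if days_since_statement_close <= 30:
        return "current"       # within the payment window
    if days_since_statement_close <= 60:
        return "past due"      # unpaid, but not yet delinquent
    return "delinquent"        # more than 60 days past the closing date

print(account_status(25))               # current
print(account_status(45))               # past due
print(account_status(75))               # delinquent
print(account_status(75, disputed=True))  # disputed
```

The 60-day cutoff mirrors the industry practice noted later in this report: interest and delinquency reporting begin only after the 60-day grace period, even though the cardholder agreement calls for payment within 30 days.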
Although DOD, like other agencies, relies on its employees to promptly pay their individually billed accounts, DOD does have some tools to monitor travel card activity and related delinquencies, including Bank of America’s Web-based Electronic Account Government Ledger System (EAGLS). Using EAGLS, supervisors can obtain reports on their cardholders’ transaction activity and related payment histories. For the centrally billed accounts, the travel office at each military installation or defense agency must first reconcile the charges shown on the centrally billed travel charge card account with the office’s internal records of transportation requests. After reconciliation has been completed, the voucher is sent to DFAS for payment. Because the travel card program is fairly new, DOD does not have a long history of reporting statistics for delinquencies. However, in our previous reports and testimonies, we have reported that DOD’s individually billed delinquency rate is higher than that of other federal agencies. As of September 2002, DOD’s delinquency rate was approximately 7.3 percent, about 3 percentage points higher than that of other federal agencies. Among the military services, however, the Air Force had the lowest delinquency rate. As of September 2002, the Air Force delinquency rate was 4.8 percent, significantly lower than the rest of DOD. Even though the Air Force had lower numbers of delinquent accounts, we found that control environment weaknesses and breakdowns in key controls were departmentwide and that these deficiencies led to instances of potential fraud and abuse with the use of travel cards in all the military services. In 1998, DFAS developed its Performance Contract to focus on continued achievement of its mission to provide responsive, professional finance and accounting services to DOD. As part of this contract with DOD, DFAS defined its performance objectives and identified specific performance measurement indicators. 
DFAS managers—and sometimes staff—are rated and rewarded based on their ability to reach annual reduction goals for each indicator. Performance metrics are now calculated monthly and the DFAS Director and the DOD Comptroller regularly review the results. Section 1008 of the National Defense Authorization Act for Fiscal Year 1998 (P.L. 105-85) directed the Secretary of Defense to submit a biennial strategic plan for the improvement of financial management to the Congress. In conjunction with the plan, the DOD Comptroller decided to develop a performance measurement system—a set of departmentwide metrics that will provide clear-cut goals for financial managers to monitor their progress in achieving reform. To begin this effort, the Comptroller adopted many of the DFAS performance measurement indicators because the DFAS metrics program had been underway for some time and was reporting successes. For payment recording errors and commercial payment backlogs in particular, the Comptroller’s metrics used information gathered and tracked by DFAS for its performance management contract. The metrics cited in the Comptroller’s testimony represent only a few of the financial management performance metrics developed to date. From a comprehensive set, the detailed metrics will be rolled up into “dashboard” metrics that will provide the Secretary of Defense and the Congress with a quick measure of DOD’s status in relation to critical financial management goals. This effort is part of an even larger effort by DOD to develop programmatic metrics for all of its operations. In general, the definitions and methodologies for gathering the data used by DOD Comptroller officials to calculate the cited improvement percentages at the ending measurement date were either consistent with or better than those used at the beginning measurement date or for prior reporting on payment recording errors, commercial payment backlogs, and travel card payment delinquencies. 
We did find that the reported metrics overstated the rate of improvement in some areas because Comptroller officials included transactions that DFAS would not consider to be payment errors or because they chose an inappropriate comparison to measure travel card delinquencies. However, recalculation of the metrics after correcting for these factors still showed positive—although less dramatic—improvement trends. DOD has gradually improved its reporting of payment recording errors over the years. DOD is now including all known categories of payment errors— unmatched disbursements, negative unliquidated obligations, intransits, and suspense account transactions—in its definition and, except in the case of intransits, is using absolute rather than net amounts in its calculations. However, the reporting of payment recording errors may not be complete. For example, work that we have performed on closed DOD accounts and on unliquidated obligations indicates that recording errors are not always identified or resolved appropriately. DFAS agrees that to properly manage and improve its payment processes, it must have a complete universe of payment recording errors. Therefore, DFAS personnel are currently working to determine whether the error categories identified to date contain all of the relevant transactions and whether other error categories exist. While the same basic methodologies were used for calculating the cited metrics at the beginning and ending measurement dates, Comptroller officials overstated DOD’s improvement percentages because the October 2000 calculation included transactions that did not meet the DFAS criteria for being considered payment errors while the October 2001 calculation did not include them. 
First, the October 2000 calculation for payment recording errors included all transactions that were being held in DFAS suspense accounts; however, DFAS uses certain suspense accounts to record collection transactions, such as accrued payroll taxes and receipts for the sale of military property, that are held temporarily before being distributed to the proper government agency or DOD entity. The transactions in these accounts, which DFAS labels as “exempt suspense accounts,” do not represent payment recording errors. In fiscal year 2001, DFAS Cleveland changed its practice of charging payroll taxes to suspense accounts and began appropriately accruing taxes in an accrued payroll tax account. As a result, payment recording errors as calculated by Comptroller officials at October 2001 were reduced by an estimated $7.5 billion—the amount of DFAS Cleveland’s accrued payroll taxes—even though payment processes were not improved at all. Second, in fiscal year 2001, DFAS Indianapolis corrected a reporting error by a defense agency that had been double-counting transactions in its suspense accounts. This resulted in an estimated $1.1 billion reduction from amounts reported in October 2000, even though no payment recording errors were corrected or resolved. In addition, Comptroller officials measured intransits using net rather than absolute values and did not adopt DFAS criteria for aging intransit and suspense account transactions. These practices affected the balances used to calculate the metrics at both the beginning and ending measurement dates. First, net rather than absolute values were used to calculate intransits at October 2000 and October 2001, which understated both balances by approximately $4 billion. When net amounts are reported, collections, reimbursements, and adjustments are offset against disbursements, thus reducing the balance of intransit transactions. 
Second, the reported metrics included all intransit and suspense account transactions at October 2000 and October 2001 regardless of their age. However, DOD allows 60 days to 180 days for the normal processing of various payment transactions because of systems limitations and the complexity of the department’s processes and, in line with these criteria, DFAS’s metrics related to payment errors only consider aged intransit and suspense account transactions. By not using DFAS’s criteria for aged intransit and suspense account transactions, the Comptroller officials overstated the balances of payment recording errors by approximately $6 billion at the beginning and $5 billion at the ending measurement dates. Figure 1 illustrates the effect on improvement rates of (1) eliminating exempt suspense accounts and double counting, (2) using DFAS’s criteria for aged intransits and suspense amounts, and (3) using absolute rather than net amounts for intransits. Our recalculation shows an overall 46 percent reduction in payment recording errors between October 2000 and October 2001 rather than the 57 percent reduction reported by the Comptroller; however, the reductions are still significant and the trend is still overwhelmingly positive. Between October 2001 and September 2002, DOD continued to report that it had reduced payment recording errors. Comptroller officials calculated a 26 percent reduction during that period while our recalculation shows a 22 percent reduction. The metrics for commercial payment backlogs (delinquent unpaid invoices) at April 2001 and October 2001 were calculated using consistent definitions and methodologies. An invoice was considered delinquent if payment was not made within the time frame established by the contract terms (e.g., by the 15th day after the invoice date) or, if no time frame was specified, on or before the 30th day after a proper invoice was received. 
DFAS reported information on delinquent invoices to Comptroller officials monthly using standardized input sheets. The total backlog percentages were then calculated by dividing the number of delinquent invoices outstanding by the total number of invoices on hand. According to the DOD Comptroller’s metrics, delinquent invoices for vendor pay decreased by 41 percent from April 2001 through October 2001 while delinquent invoices for contract pay decreased by 32 percent during that same period. Because DFAS officials stated that the decrease cited in the Comptroller’s metrics was primarily due to intensive focus placed on decreasing the backlog of delinquent vendor invoices, our review concentrated on vendor pay issues. For the travel card metrics, consistent definitions and methodologies were used to gather the data and calculate the improvement percentages cited by the DOD Comptroller for January 2001 and December 2001. Travel card payments were considered delinquent if they were not paid within 60 days of the monthly statement closing date. Even though the terms of the travel cardholder’s agreement with Bank of America require payment of the statement within 30 days of the statement closing date, it is industry practice to allow 60 days before the invoice is considered delinquent and interest is charged. Comptroller officials used a standard industry practice to calculate the travel card delinquency rates—the total dollar amount outstanding for 60 days or more was divided by the total balance outstanding. While the definitions and methodology were consistent with standard practices, the metrics comparison of delinquencies for individually billed accounts in January to those in December could be misleading. As our recent work shows, individually billed travel card delinquencies have been cyclical, with the highest delinquencies occurring in January and February. 
Therefore, the most useful metrics would compare same month to same month, for example, January to January or December to December. If the Comptroller officials had compared individual travel card delinquencies at January 2001 to those at January 2002, the reported decrease would have been 16 percent as opposed to 34 percent. DFAS only provided us with internally generated summary-level data that reconciled to the totals reported for payment recording errors and commercial pay backlogs. DFAS did not provide us with detailed transaction-level data that supported those metrics. As a result, we were unable to test whether (1) all payment recording errors and delinquent commercial payments were properly included in the metrics and (2) the actions taken to resolve or correct payment recording errors were appropriate. For individual and centrally billed travel card delinquencies, we were able to obtain independent verification from a source outside DOD that supported the Comptroller’s metrics. Although we could not audit the reported metrics for all of the measured areas, we verified that DFAS and other DOD organizations have made numerous policy, procedure, and systems changes that would support an overall trend toward improved performance. For payment recording errors and commercial payment backlogs, perhaps the most significant change has been DOD’s inclusion of performance measures in its contracts with DFAS. The performance contract and an accompanying data dictionary provide specific, measurable reduction goals, which DFAS management— and in some cases staff—are held accountable for reaching. The resulting focus has fostered innovative process and systems improvements as well as better communication among the parties involved in preventing or resolving these problems. For example, DFAS holds monthly videoconferences with its centers and field sites to discuss progress and any impediments to reaching that period’s goals. 
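The metric computations discussed in this section are all simple ratios—an improvement percentage between two measurement dates, a backlog percentage, and a delinquency rate. The sketch below restates them and illustrates the seasonal-comparison point; the monthly rates used are hypothetical placeholders, not actual DOD figures.

```python
def improvement_rate(begin_balance, end_balance):
    """Percentage reduction between the beginning and ending
    measurement dates, as used for the payment recording error
    and delinquency comparisons."""
    return (begin_balance - end_balance) / begin_balance * 100

def backlog_rate(delinquent_invoices, invoices_on_hand):
    """Commercial payment backlog: delinquent invoices outstanding
    as a share of total invoices on hand."""
    return delinquent_invoices / invoices_on_hand * 100

def delinquency_rate(dollars_60_days_plus, total_outstanding):
    """Standard industry travel card metric: dollars outstanding 60
    or more days divided by the total balance outstanding."""
    return dollars_60_days_plus / total_outstanding * 100

# Hypothetical monthly delinquency rates (percent). Because
# delinquencies peak each January and February, a January-to-December
# comparison overstates the underlying improvement relative to a
# same-month (January-to-January) comparison.
jan_2001, dec_2001, jan_2002 = 18.0, 11.9, 15.1
print(f"Jan-to-Dec drop: {improvement_rate(jan_2001, dec_2001):.0f}%")
print(f"Jan-to-Jan drop: {improvement_rate(jan_2001, jan_2002):.0f}%")
```

With these placeholder inputs the two comparisons diverge sharply, which is the distortion a same-month baseline is meant to remove.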
In general, DFAS centers did not maintain history files of all the transactions that were not promptly matched with obligations, created negative unliquidated obligations, were in transit longer than allowable, or were in suspense accounts during the period October 2000 through October 2001—information that is necessary in order to verify the completeness and accuracy of the reported metrics. DFAS officials explained that the detailed data supporting the reported monthly totals are compiled by hundreds of DFAS field sites using numerous accounting systems and there is no specific requirement for the field sites to save the data. While some DFAS officials believe that it would be possible to recreate transaction-level detail to support month-end totals, the task would be extremely onerous and time consuming. Although we were unable to verify through audit procedures the accuracy of the reductions reported by the Comptroller, we did reconcile summary-level information provided by the DFAS centers to the metric amounts. We also verified that DFAS has made numerous policy and systems improvements that support a continuing trend of reductions in payment recording errors as illustrated by the metrics in figure 2. DFAS has been working to reduce payment recording errors for more than a decade. In the late 1990s, DFAS consolidated most of its disbursing and accounting functions from 300 defense accounting offices into 5 centers, in large part to help streamline the payment recording process. DFAS has also been working with other DOD components to consolidate or replace about 250 outdated and nonintegrated financial and accounting systems. While the systems effort will take many years and must be accomplished within DOD’s overall plan for systems development and integration, DFAS has made, and continues to make, improvements in the policies and systems tools available to DFAS personnel for preventing and correcting payment recording errors. 
Since October 2000, DFAS has made several policy changes that have affected the payment recording process. In January 2001, DOD revised its official guidance to clarify and strengthen policies related to the prompt (1) recording of disbursements and obligations and (2) resolution of payment recording errors. If the military services or DOD components have not provided DFAS with accurate obligation information within specified time frames, the revision gave DFAS the authority to record obligations in order to resolve individual unmatched disbursements, negative unliquidated obligations, and certain suspense account transactions. DFAS also expanded its prevalidation policy, which it claims has been key to reducing payment errors associated with commercial contracts. Prevalidation requires that DFAS personnel ascertain that there is a valid obligation recorded in the accounting records before making a payment. Between November 2000 and October 2001, DFAS lowered the dollar threshold amount for transactions requiring prevalidation from $100,000 to $25,000. DFAS developed new systems tools for communicating accounting information among its centers and field locations that have reduced the amount of time DFAS personnel need to match disbursements to obligations. For example, since the late 1990s DFAS has implemented the following.

- Electronic data access capability, which provides web access to contract, billing, and other documents pertinent to the payment recording process. Electronic access to these documents enables users to obtain information more quickly than in the past, when many documents were stored in hard-copy format.
- Phase 1 of the Defense Cash Accountability System (DCAS), which provides a standardized, electronic means for DFAS centers to report expenditure data for transactions involving more than one military service (cross-disbursements). Prior to DCAS, the centers had different systems and formats for reporting this information to one another and to Treasury, a situation that increased the complexity of recording and matching cross-disbursements. According to DFAS officials, DCAS reduced the cross-disbursement cycle time from 60 days to 10 days.
- The Standard Contract Reconciliation Tool (SCRT), which provides DFAS personnel a consolidated database for researching commercial contract records. Prior to SCRT, locating and accessing these records was difficult due to the variety of accounting, contracting, and entitlement systems involved.

DFAS centers have also developed individual applications that have improved payment processes. For example, DFAS Indianapolis implemented an Access “Wizard” application to automate the process of matching intragovernmental expenditure transactions to obligation records. The program also enables center staff to identify transactions that have not been processed within 30 days so they can follow up with field accounting personnel. DFAS was unable to provide detailed transaction-level data that supported the metrics related to vendor payment backlogs—the most significant contributor to the reductions. DFAS only maintained summary-level data that were generated by the 23 DFAS field sites. Using standard definitions and standard summary spreadsheets, DFAS personnel collected the summary information monthly through data calls to the more than 15 different systems that track DOD vendor pay backlog information. As a result, we were only able to confirm that the summary information provided by DFAS reconciled to the amounts reported by the Comptroller. We were unable to verify by audit the accuracy or completeness of that data. DFAS management has focused on reducing commercial payment backlogs since fiscal year 2000 and this focus is continuing through the present. 
According to its performance contracts, DFAS’s goal was to reduce the backlog by 15 percent per year beginning in fiscal year 2000 from a baseline of 48,000 delinquent invoices. In April 2001, DFAS centralized operational control of contract pay and vendor pay under one executive, who was given ultimate responsibility for meeting these performance goals. DFAS also made site-specific procedural changes to reduce the backlog of vendor payments. These included hiring temporary contract and permanent staff in key sites; forecasting when civilian employees in Europe would be taking vacation and then staggering vacation leave and/or hiring temporary help (e.g., in Germany, every civilian employee has 6 weeks of annual leave, which is usually taken during the summer); and forming partnerships with the military services and defense agencies to improve their processing time for receiving reports, since DFAS must match the receiving report to the invoice before payment can be made. DFAS credits these and other changes for the continued reduction of the backlog of delinquent invoices. Figure 3 below illustrates the trend in the reduction of outstanding delinquent vendor invoices compared to the total number of invoices on-hand. We were able to verify the reductions cited by the Comptroller in individual and centrally billed travel card delinquencies. We obtained travel card delinquency information from an independent source, the General Services Administration (GSA), that supported the Comptroller’s metrics. GSA receives information from individual travel card vendors, such as Bank of America, and prepares a monthly summary report for DOD that documents individual and centrally billed travel card delinquencies by military service or defense agency. We compared the GSA data to the cited metrics and verified that the reported reductions in travel card delinquencies were accurate. 
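The 15 percent annual reduction goal stated at the beginning of this discussion compounds quickly against the 48,000-invoice baseline. The sketch below assumes the goal compounds on the prior year's target rather than being applied to the original baseline each year; the report does not specify which interpretation DFAS used, so this is an illustrative assumption.

```python
# Assumption: each year's goal is 15 percent below the prior year's
# target, starting from the fiscal year 2000 baseline of 48,000
# delinquent invoices (compounding interpretation, not stated in the
# performance contract excerpt above).
baseline = 48_000
goal_rate = 0.15

target = baseline
for fiscal_year in (2000, 2001, 2002):
    target = round(target * (1 - goal_rate))
    print(f"FY{fiscal_year} goal: no more than {target} delinquent invoices")
```

Under this reading, three years of meeting the goal would bring the backlog from 48,000 to under 30,000 delinquent invoices.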
As with the other problem areas, DOD credits the decrease in travel card delinquency rates in both individual and centrally billed accounts primarily to increased management attention. For the centrally billed accounts, DOD has attributed the initial high delinquency rates to problems in transferring the travel card contract from American Express to Bank of America. When Bank of America was given the contract, its on-line travel information system, EAGLS, was not fully operational and therefore was unable to accurately process all of the travel data being transferred by American Express. Because EAGLS contained incorrect account numbers, invoice information, and billing addresses, DOD agency program coordinators did not have the information necessary to determine which accounts were delinquent, in suspense, or canceled. While DOD and Bank of America officials were working jointly to identify and resolve the problems, centrally billed invoices became backlogged. Once the problems were resolved, DOD was able to reduce the backlog. As of December 31, 2002, DOD’s centrally billed delinquency rate was 1.5 percent, well below fiscal year 2002’s proposed goal of 3.0 percent and equal to the delinquency rate for other federal agencies. Figure 4 below shows the centrally billed delinquency rates from January 2001 through December 2002. For individual travel cards, our recent work also supports the improved delinquency rates being reported by DOD. During the past year, we reported on the travel card programs for all three military services. In general, we found that the military services, in particular the Air Force, have given delinquencies greater attention and have used travel card audits to identify problems and needed corrective actions. We reported that all of the services are now holding commanders responsible for managing the delinquency rates of their subordinates. 
For example, Air Force management holds monthly command meetings where individual travel card delinquencies are monitored and briefed. The individual services have also implemented new programs to help reduce delinquencies, including the following.

- In January 2003, the Army established two goals of not more than 4.5 percent of dollars delinquent and not more than 3 percent of accounts delinquent. The Navy has established a similar goal of no more than 4 percent delinquent accounts.
- The Air Force is providing financial training to all inductees that includes developing a personal budget plan, balancing a checkbook, preparing a tax return, and understanding financial responsibility. The training also covers the disciplinary actions and other consequences of financial irresponsibility by service members.
- The Navy has developed a three-pronged approach to address travel card issues: (1) provide clear procedural guidance to agency program coordinators (APCs) and travelers that is available on the Internet, (2) provide regular training to APCs, and (3) enforce the proper use and oversight of the travel card by using data mining to identify problem areas and abuses.
- In January 2003, the Army issued two directives to its major commanders, which address a range of policy requirements, to include: (1) training for APCs and cardholders, (2) monthly review of cardholder transactions, (3) exempting and/or discouraging the use of the card for en route travel expenses associated with deployments, and (4) prohibiting use of the card for travel expenses associated with permanent change of station moves.

In addition, DOD has implemented a number of departmentwide programs to improve the individually billed travel card program. Beginning in November 2001, DOD began a salary and military retiree pay offset program for delinquencies—similar to wage garnishment. 
In March 2002, the Comptroller created a Credit Card Task Force to address management issues related to the purchase and individually billed travel card programs. On July 19, 2002, the DOD Comptroller directed the cancellation of (1) inactive travel charge card accounts, (2) active travel card accounts not used in the previous 12 months, and (3) travel card accounts for which the bank cannot identify the cardholders’ organization. DOD is also encouraging individual cardholders to elect to have all or part of their travel reimbursement sent directly by DFAS to Bank of America—a payment method that is standard practice for many private sector employers. The Congress has recently addressed this issue in section 1008(a) and (b) of the National Defense Authorization Act for Fiscal Year 2003, which provides the Secretary of Defense the authority to require use of this payment method. According to DOD, about 32 percent of its individually billed cardholders elected this payment option for fiscal year 2002. As a result of these and other actions, DOD has been able to sustain reduced delinquency rates between October 2002 and December 2002, as illustrated in figure 5 below. However, DOD still needs to do more to address the underlying causes of the problems with its travel card program. In a recent testimony, we concluded that actions to implement additional “front-end” or preventative controls are critical if DOD is to effectively address the high delinquency rates and charge-offs, as well as potentially fraudulent and abusive activity. As a result of our work on travel cards, the Congress included a provision in the Department of Defense Appropriations Act for Fiscal Year 2003 requiring the Secretary of Defense to evaluate whether an individual is creditworthy before authorizing the issuance of any government charge card. If this requirement is effectively implemented, DOD should continue to improve delinquency rates and reduce potential fraud and abuse. 
The metrics that the DOD Comptroller highlighted in the March 2002 hearing relate to areas that have received considerable congressional and audit attention. As discussed earlier, the metrics program increased management focus on these problem areas and led to improvements in policies, processes, and—in a limited way—systems. While some of the cited metrics could be effective indicators of short-term financial management progress, assuming they could be verified, others are not necessarily good indicators, particularly if taken alone. In addition, continued financial management progress will require additional actions. For example, the military services and other defense agencies are key contributors to preventing and resolving payment recording errors and commercial payment delinquencies but they do not have the same incentives to improve their performance in these areas. Also, because DFAS lacks modern, integrated financial management systems, preventing and resolving payment delinquencies and errors require intensive effort day after day by DFAS and other DOD organizations, which could be difficult to sustain. The cited metrics for individual travel card delinquencies and payment recording errors could be effective indicators of financial management improvement. For payment recording errors, continuing reductions would indicate better controls over obligation, disbursement, and collection processes and that, as a result, DOD is less prone to fraud, waste, or abuse of appropriated funds. Monitoring the delinquency rates for individual travel card payments would provide DOD with an early indication that employees may be abusing their cards (i.e., using the cards for personal purchases) or having credit problems. However, improved delinquency rates do not necessarily indicate improved financial management of centrally billed travel cards or commercial payments. 
In fact, by placing too much emphasis on paying bills promptly, DOD staff may be tempted to shortcut important internal control mechanisms that are meant to ensure that the goods and services being paid for were properly authorized and actually received. We and DOD auditors have issued several reports on the improper use of individually billed travel cards at DOD and on over- and underpayments to DOD contractors but are just beginning work to identify and evaluate the adequacy of DOD policies, procedures, and controls related to purchases from vendors and centrally billed travel cards. As a result of these audits, we will likely recommend additional metrics related to program performance and internal controls for monitoring performance in these areas. Measures such as the ones discussed in this report may be useful in the short term but may not be appropriate once DFAS has reengineered its business processes and modernized its systems. As DFAS and the military services develop integrated and/or interfaced financial management systems, many of the problems related to transaction recording errors should be eliminated. Based on the recent work we performed for your committee related to DOD’s enterprise architecture, however, these new systems are years away from implementation. Because DFAS lacks modern, integrated financial management systems, preventing and resolving payment delinquencies and errors require intensive effort day after day by DFAS and military service staff. As a result, DFAS has indicated that much of the reported progress to date is sustainable only if its workload is not significantly increased or its staffing significantly decreased. Until new systems and reengineered processes are in place, DOD can take a number of steps to help maintain improvements in these areas. First, continued leadership and focus by top management will be a major factor in the sustainability of progress made to date. 
Second, because DFAS alone cannot resolve DOD’s payment recording problems or payment delinquencies, integrated metrics programs across DOD will be important. As noted earlier in this report, while the military services and other defense agencies play key roles in obligating DOD funds, preparing obligation documents, receiving and preparing billing documents, preparing receiving reports, and recording transaction information into accounting systems, these organizations do not currently have complementary metrics programs. Thus, the military services and defense agencies are not measured on the accuracy and timeliness of their payment processes even though their assistance is necessary for DFAS to make improvements and resolve problems. For example, commercial payment backlogs were largely due to the military services’ failure to provide receiving reports to DFAS, yet service delays were not being measured. DOD is currently developing a departmentwide, balanced program of metrics that is intended to align with its strategic goals, focus on results, and achieve auditable reports. As contemplated, DFAS, the military services, and other defense agencies will all be supporting players in this program. From the individual performance measurement programs of the military services, defense agencies, and DFAS, certain metrics will be selected and reported to the top levels of DOD management for evaluation and comparison. In this scenario, it is important that DOD properly and consistently calculate and report the selected metrics and that the military services, other agencies, and DFAS develop integrated metrics programs to assist in identifying, measuring, and resolving crosscutting issues. As the cited metrics demonstrate, DOD can make meaningful, short-term progress toward better financial management while waiting for long-term solutions, such as integrated financial systems.
Leadership, real incentives, and accountability—hallmarks of a good performance measurement program—have brought about improvements in DFAS policies and processes. The cited metrics are also serving as important building blocks for DOD’s current efforts to develop a departmentwide performance measurement system for financial management. However, before the payment recording error and commercial payment backlog metrics can be relied upon for decision-making purposes, they must be properly defined, correctly measured, and linked to the goals and performance measures of other relevant DOD organizations. In addition, because the reported improvements depend heavily on the day-to-day effort of DFAS staff, sustaining the progress may be difficult if DFAS has significant workload increases or staff decreases. DOD systems do not provide the transaction-level support needed to verify the accuracy and completeness of many of its selected metrics. However, because DOD is currently working on developing an enterprisewide system architecture to guide its future systems development and implementation strategies, we are not making any recommendations in this report related to improving the underlying business systems. We did identify several steps that DOD could take now to improve the reported metrics. We are recommending that the DOD Comptroller (1) use definitions and criteria that are consistent with DFAS definitions and criteria when calculating and reporting metrics related to payment recording errors, (2) measure improvements in individually billed travel card delinquencies by using same month to same month comparisons, and (3) work with the military service Assistant Secretaries for Financial Management to develop performance measures for the military services and other defense agencies in areas for which there is shared responsibility, in order to complement the DFAS metrics program.
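The recommendation to use same-month comparisons reflects the seasonality in delinquency rates. A minimal arithmetic sketch (all figures are hypothetical, not DOD data) shows how comparing two different months of the year can overstate improvement, while a same-month, year-over-year comparison holds the seasonal effect constant:

```python
# Illustrative sketch of why same-month comparisons matter when
# delinquency rates are seasonal. All figures below are hypothetical.

def delinquency_rate(delinquent_dollars, outstanding_dollars):
    """Delinquent dollars as a percentage of total dollars outstanding."""
    return 100.0 * delinquent_dollars / outstanding_dollars

# Hypothetical monthly delinquency rates: January is assumed seasonally
# high (holiday-period travel bills coming due), December seasonally low.
rates = {
    ("Jan", 2001): 18.0, ("Dec", 2001): 10.0,
    ("Jan", 2002): 15.0, ("Dec", 2002): 8.0,
}

# Naive comparison: January 2001 vs. December 2001 mixes the seasonal
# swing with the real trend and overstates the improvement.
naive_drop = (rates[("Jan", 2001)] - rates[("Dec", 2001)]) / rates[("Jan", 2001)]

# Same-month comparison: December 2001 vs. December 2002 removes the
# seasonal swing, leaving only the underlying change.
same_month_drop = (rates[("Dec", 2001)] - rates[("Dec", 2002)]) / rates[("Dec", 2001)]

print(f"naive Jan-to-Dec drop:      {naive_drop:.0%}")       # 44%
print(f"same-month Dec-to-Dec drop: {same_month_drop:.0%}")  # 20%
```

With these assumed numbers, the naive comparison reports a 44 percent improvement, more than double the 20 percent improvement the seasonally consistent comparison shows.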
In written comments on a draft of this report (see appendix II), the Under Secretary of Defense (Comptroller) stated that the department concurred with our recommendations and described actions to address them. The department also provided several technical comments, which we have incorporated in the report as appropriate. We are sending copies of this report to other interested congressional committees; the Secretary of Defense; the Under Secretary of Defense (Comptroller); the Director, Defense Finance and Accounting Service; and the Assistant Secretaries for Financial Management (Comptroller) for the Army, the Navy, and the Air Force. Copies will be made available to others upon request. Please contact me at (202) 512-9505 or [email protected] if you or your staff have any questions about this report. Other GAO contacts and key contributors to this report are listed in appendix III. As requested by the Chairman and Ranking Minority Member of the Subcommittee on Readiness and Management Support, Senate Committee on Armed Services, we undertook an assessment of the consistency, accuracy, and effectiveness of certain DOD-reported metrics related to payment recording errors, commercial payment backlogs, and delinquent travel card payments. Specifically, our objectives were to determine whether (1) the cited performance measures were applied and calculated in a manner consistent with previous reporting on payment delinquencies and recording errors, (2) the cited improvement data were properly supported and represent real improvements in performance, and (3) the metrics are effective indicators of short-term financial management progress. To complete this work, we visited DOD Comptroller offices and DFAS centers in Arlington, Cleveland, Columbus, Indianapolis, and Denver where we did the following. 
 Gathered, analyzed, and compared information on how payment recording errors, commercial payment backlogs, and travel card delinquencies were defined, calculated, and reported both in the past and for the cited metrics.
 Reviewed GAO, DOD IG, and other service auditors’ reports for the past 10 years.
 Reviewed DOD consolidated financial statement reporting of payment recording errors over the last 10 years.
 Reviewed DOD policy for maintaining financial control over disbursement, collection, and adjustment transactions. This policy specifically describes the requirements for researching and correcting payment recording errors.
 Obtained and analyzed the underlying summary spreadsheets from DFAS that were the information source for the Comptroller officials’ calculations for payment recording errors and commercial pay backlogs. DFAS gathers this information monthly through data calls from numerous systems used to process and account for payments. Although we requested the underlying detailed transaction-level data supporting the spreadsheets so that we could perform audit tests, we were unable to obtain the detail-level data.
 Obtained and analyzed the underlying summary spreadsheets from DFAS that were the information source for the Comptroller officials’ calculations for travel card delinquencies.
 Obtained independent summary data for travel card delinquencies from GSA and compared amounts to Comptroller-reported metrics.
 Interviewed center personnel about process and system improvements and gathered and analyzed relevant output that demonstrated the results of those changes. Our review of new systems tools and purported systems improvements was limited: we did not validate whether systems changes followed appropriate requirements or whether they resulted in the production of reliable financial information.
Obtained explanations from officials from the Office of the Secretary of Defense regarding the metrics program and assessed whether the cited metrics are effective indicators of short-term financial management progress. The data in this report are based on DFAS records. With the exception of travel card delinquency rates, we were unable to independently verify or audit the accuracy of these data. We performed our work from June 2002 to February 2003 in accordance with U.S. generally accepted government auditing standards. We received written comments on a draft of this report from the Under Secretary of Defense (Comptroller). These comments are presented and evaluated in the “Agency Comments and Our Evaluation” section and are reprinted in appendix II. We considered technical comments from the department and incorporated them as appropriate but did not reprint them. Staff making key contributions to this report were Rathi Bose, Steve Donahue, Diane Handley, Fred Jimenez, and Carolyn Voltz.
The Department of Defense (DOD) has historically been unable to accurately account for and record its disbursements. In March 2002, the DOD Comptroller cited metrics that showed dramatic reductions in payment recording errors (57 percent between October 2000 and October 2001), backlogs of commercial payments (41 percent between April and October 2001), and travel card payment delinquencies (34 percent for those individually billed and 86 percent for those centrally billed between January and December 2001). As a result, the Congress asked us to determine whether the cited reductions were (1) calculated using consistent definitions and methodologies, (2) properly supported, and (3) effective indicators of short-term financial management progress. The DOD Comptroller's metrics showing significant reductions in payment recording errors and in commercial and travel card payment delinquencies were, in general, based on definitions and methodologies that were either consistent with or better than those used for prior reporting on these issues. Although the methodology used to calculate two of the cited measures resulted in overstating the rates of improvement, our recalculation after correcting for the methodology errors still showed positive--although less dramatic--improvement trends. While we were able to verify the reductions in travel card delinquencies because the underlying data were available from an independent source, we could not verify the accuracy of the specific improvement percentages reported for payment recording errors and commercial payment delinquencies. DOD's archaic and nonintegrated systems either do not contain the transaction-level detail to support the completeness and accuracy of the metrics or they make it extremely onerous and time consuming for the staff to gather and reconcile the needed detail. 
However, we were able to verify that DOD has made numerous policy, procedure, and systems changes that support an overall trend toward improved performance in these areas. If they could be verified, some of the cited metrics could be effective indicators of short-term financial management progress. However, if considered alone, delinquency rates are not necessarily good indicators for centrally billed travel cards or commercial payments. Placing too much emphasis on paying bills promptly may tempt DOD staff to bypass important internal controls meant to ensure that the goods and services being paid for were properly authorized and actually received. Despite shortcomings, the cited metrics have focused DOD's attention on highly visible financial management problems. As shown below, recent metrics issued by the DOD Comptroller indicate continuing improvements.
Established in 1956, DI is an insurance program that provides benefits to workers who are unable to work because of severe long-term disability. In 2001, DI provided $54.2 billion in cash benefits to 6.1 million disabled workers. Workers who have worked long enough and recently enough are insured for coverage under the DI program. DI beneficiaries receive cash assistance and, after a 24-month waiting period, Medicare coverage. Once found eligible for benefits, disabled workers continue to receive benefits until they die, return to work and earn more than allowed by program rules, are found to have medically improved to the point of having the ability to work, or reach full retirement age (when disability benefits convert to retirement benefits). To help ensure that only eligible beneficiaries remain on the rolls, SSA is required by law to conduct continuing disability reviews for all DI beneficiaries to determine whether they continue to meet the disability requirements of the law. SSI, created in 1972, is an income assistance program that provides cash benefits for disabled, blind, or aged individuals who have low income and limited resources. In 2001, SSI provided $19 billion in federal cash benefits to 3.8 million disabled and blind individuals age 18-64. Unlike the DI program, SSI has no prior work requirement. In most cases, SSI eligibility makes recipients eligible for Medicaid benefits. SSI benefits terminate for the same reasons as DI benefits, although SSI benefits also terminate when a recipient no longer meets SSI income and resource requirements (SSI benefits do not convert to retirement benefits when the individual reaches full retirement age). The law requires that continuing disability reviews be conducted for some SSI recipients for continuing eligibility. 
The Social Security Act’s definition of disability for adults under DI and SSI is the same: an individual must have a medically determinable physical or mental impairment that (1) has lasted or is expected to last at least 1 year or to result in death and (2) prevents the individual from engaging in substantial gainful activity. Moreover, the definition specifies that for a person to be determined to be disabled, the impairment must be of such severity that the person not only is unable to do his or her previous work but, considering his or her age, education, and work experience, is unable to do any other kind of substantial work that exists in the national economy. SSA regulations and guidelines provide further specificity in determining eligibility for DI and SSI benefits. For instance, SSA has developed the Listing of Impairments (the Medical Listings) to describe medical conditions that SSA has determined are severe enough ordinarily to prevent an individual from engaging in substantial gainful activity. SSA has also developed a procedure to assess applicants who do not have an impairment that meets or equals the severity of the Medical Listings. The procedure helps determine whether an applicant can still perform work done in the past or other work that exists in the national economy. While not expressly required by law to update the criteria used in the disability determination process, SSA has stated that it would update them to reflect current medical criteria and terminology. Over the years, SSA has periodically taken steps to update its Medical Listings. The last general update to the Medical Listings occurred in 1985. In 2000, the most common impairments among DI’s disabled workers were mental disorders and musculoskeletal conditions (see fig. 1). These two conditions also were the fastest growing conditions since 1986, increasing by 7 and 5 percentage points, respectively.
In 2000, the most common impairments among the group of SSI blind and disabled adults age 18-64 were mental disorders and mental retardation (see fig. 2). Mental disorders was the fastest growing condition among this population since 1986, increasing by 9 percentage points. Scientific advances, changes in the nature of work, and social changes have generally enhanced the potential for people with disabilities to work. Medical advancements and assistive technologies have given more independence to some individuals. Moreover, the economy has become more service- and knowledge-based, presenting both opportunities and some new challenges for people with disabilities. Finally, social changes have altered expectations for people with disabilities. For instance, the Americans with Disabilities Act fosters the expectation that people with disabilities can work and have the right to work. Recent scientific advances in medicine and assistive technology and changes in the nature of work and the types of jobs in our national economy have generally enhanced the potential for people with disabilities to perform work-related activities. Advances in medicine have led to a deeper understanding of and ability to treat disease and injury. Medical advancements in treatment (such as organ transplantations), therapy, and rehabilitation have reduced the functional limitations of some medical conditions and have allowed individuals to live and work with greater independence. Also, assistive technologies—such as advanced wheelchair design, a new generation of prosthetic devices, and voice recognition systems—afford greater capabilities for some people with disabilities than were available in the past. At the same time, the nature of work has changed in recent decades as the national economy has moved away from manufacturing-based jobs to service- and knowledge-based employment. 
In the 1960s, earning capacity became more related to a worker’s skills and training than to his or her ability to perform physical labor. Following World War II and the Korean Conflict, advancements in technology, including computers and automated equipment, reduced the need for physical labor. The goods-producing sector’s share of the economy—mining, construction, and manufacturing—declined from about 44 percent in 1945 to about 18 percent in 2000. The service-producing industry’s share, on the other hand—such areas as wholesale and retail trade; transportation and public utilities; federal, state and local government; and finance, insurance, and real estate—increased from about 57 percent in 1945 to about 72 percent in 2000. Although there may be more an individual with a disability can do in today’s world of work than was available when the DI and SSI programs were first designed, today’s work world is not without demands. Some jobs require standing for long hours, and other jobs, such as office work, require social abilities. These characteristics can pose particular challenges for some persons with certain physical or mental impairments. Moreover, other trends—such as downsizing and the growth in contingent workers—can limit job security and benefits, like health insurance, that most persons with disabilities require for participation in the labor force. Whether these changes make it easier or more difficult for a person with a disability to work appears to depend very much on the individual’s impairment and other characteristics, according to experts. Social change has promoted the goals of greater inclusion of and participation by people with disabilities in the mainstream of society, including adults at work. For instance, over the past 2 decades, people with disabilities have sought to remove environmental barriers that impede them from fully participating in their communities. 
Moreover, the Americans with Disabilities Act supports the full participation of people with disabilities in society and fosters the expectation that people with disabilities can work and have the right to work. The Americans with Disabilities Act prohibits employers from discriminating against qualified individuals with disabilities and requires employers to make reasonable workplace accommodations unless doing so would impose an undue hardship on the business. The disability criteria used in the DI and SSI disability programs to help determine who is qualified to receive benefits have not been fully updated to reflect these advances and changes. SSA is currently in the midst of a process that began around the early 1990s to update the medical criteria it uses to make eligibility decisions, but the progress is slow. Moreover, some changes resulting from treatment advances and assistive technologies are not fully incorporated into the decision-making process due to program design. In addition, the disability criteria have not incorporated labor market changes. In determining the effect that impairments have on individuals’ earning capacity, SSA continues to use outdated information about the types and demands of jobs in the economy. SSA’s current effort to update the disability criteria began in the early 1990s. Between 1991 and 1993, SSA published for public comment the changes it was proposing to make to 7 of the 14 body systems in its Medical Listings. By 1994, the proposed changes to 5 of these 7 body systems were finalized. The agency’s efforts to update the Medical Listings were curtailed in the mid-1990s due to staff shortages, competing priorities, and lack of adequate research on disability issues. SSA resumed updating the Medical Listings in 1998. Since then, SSA has taken some positive steps in updating portions of the medical criteria it uses to make eligibility decisions, although progress is slow.
As of early 2002, SSA has published the final updated criteria for 1 of the 9 remaining body systems not updated in the early 1990s (musculoskeletal) and a portion of a second body system (mental disorders). SSA also plans to update again the 5 body systems that were updated in the early 1990s. In addition, SSA has asked the public to comment on proposed changes for several other body systems. After reviewing the schedule and timing for the revisions, SSA recently pushed back the completion date for publishing proposed changes for all remaining body systems to the end of 2003. The revised schedule does not list target dates, with one exception, for submitting changes for final clearance to the Office of Management and Budget. SSA’s slow progress in completing the updates could undermine the purpose of incorporating medical advances into its medical criteria. For example, the criteria for musculoskeletal conditions—a common impairment among persons entering DI—were updated in 1985. Then, in 1991, SSA began developing new criteria and published its proposed changes in 1993 but did not finalize the changes until 2002; therefore, changes made to the musculoskeletal criteria in 2002 were essentially based on SSA’s review of the field in the early 1990s. SSA officials told us that in finalizing the criteria, they reviewed the changes identified in the early 1990s and found that little had taken place since then to warrant changes to the proposed criteria. However, given the advancements in medical science since 1991, it may be difficult for SSA to be certain that all applicable medical advancements are in fact included in the most recent update. SSA has made various types of changes to the Medical Listings thus far. 
As shown in table 1, these changes, including the proposed changes released to the public for comment, add or delete qualifying conditions; modify the criteria for certain physical or mental conditions; and clarify and provide additional guidance in making disability decisions. Examples of these changes, and SSA’s rationales for them, include the following.

 Remove peptic ulcer. Rationale: advances in medical and surgical management have reduced severity.
 Add inflammatory bowel disease by combining two existing conditions already listed: chronic ulcerative and regional enteritis. Rationale: reflect advances in medical terminology.
 Expand the types of allowable imaging techniques. Rationale: the Medical Listings previously referred to x-ray evidence; with advancements in imaging techniques, SSA will also accept evidence from, for example, computerized axial tomography (CAT) scan and magnetic resonance imaging (MRI) techniques.
 Reduce from three to two the number of difficulties that must be demonstrated to meet the listings for a personality disorder. Rationale: specific rationale not mentioned.
 Remove discussion on distinction between primary and secondary digestive disorders resulting in weight loss and malnutrition. Rationale: distinction not necessary to adjudicate disability claim.
 Expand guidance about musculoskeletal “deformity.” Rationale: clarify that the term refers to joint deformity due to any cause.

Despite these changes, program design issues have limited the extent that advances in medicine and technology have been incorporated into the DI and SSI disability decision-making criteria. The statutory and regulatory design of these programs limits the role of treatment in deciding who is disabled. Unless an individual has been prescribed treatment, SSA does not consider the possible effects of treatment in the disability decision, even if the treatment could make the difference between being able and not being able to work. Thus, treatments that can help restore functioning to persons with certain impairments may not be factored into the disability decision for some applicants.
For example, medications to control severe mental illness, arthritis treatments to slow or stop joint damage, total hip replacements for severely injured hips, and drugs and physical therapies to possibly improve the symptoms associated with multiple sclerosis are not automatically factored into SSA’s decision making for determining the extent that impairments affect people’s ability to work. Additionally, this limited approach to treatment raises an equity issue: Applicants whose treatment allows them to work could be denied benefits while applicants with the same condition who have not been prescribed treatment could be allowed benefits. As with treatment, the benefits of innovations in assistive technologies— such as advanced prosthetics and wheelchair designs—have not been fully incorporated into DI and SSI disability criteria because the design of these programs does not recognize these advances in disability decision making. For example, SSA does not require an applicant who lost a hand to use a prosthetic before the agency makes its decision about the impact of this condition on the ability to engage in substantial gainful activities. For an applicant who does not have an impairment that meets or equals the severity of the Medical Listings, SSA evaluates whether the individual is able to work despite his or her limitations. Specifically, an individual who is unable to perform his or her previous work and other work in the labor market is awarded benefits. SSA relies upon the Department of Labor’s Dictionary of Occupational Titles (DOT) as its primary database to help make this determination. However, Labor has not updated DOT since 1991 and does not plan to do so. Although Labor has been working on a replacement for the DOT called the Occupational Information Network (O*NET) since 1993, Labor and SSA officials recognize that O*NET cannot be used in its current form in the DI and SSI disability determination process. 
The O*NET, for example, does not contain information SSA needs on the amount of lifting or the mental demands associated with particular jobs. The agencies have discussed ways that O*NET might be modified or supplemental information collected to meet SSA's needs, but no definitive solution has been identified. Absent such changes to the O*NET, SSA officials have indicated that an entirely new occupational database could be needed to meet SSA's needs, but such an effort could take many years to develop, validate, and implement. Meanwhile, as new jobs and job requirements evolve in the national economy, SSA's reliance upon an outdated database further distances the agency from the current marketplace. In order to incorporate the medical, economic, and social advances and changes into the programs' disability criteria, some steps can be taken within the existing program design, while others would require more fundamental changes. Within the context of the current statutory and regulatory framework, SSA will need to continue to update the medical portion of the disability criteria and vigorously expand its efforts to examine labor market changes. In addition, policymakers and agency officials could look beyond the traditional concepts that underlie the DI and SSI programs to re-examine the core elements of federal disability programs. This broader approach would raise a number of significant policy issues, and more information is needed to address them. To this end, approaches taken by private disability insurers offer useful insights. Within the context of the programs' existing statutory and regulatory design, SSA will need to further incorporate advances and changes in medicine and the labor market. That is, SSA should continue to update the criteria used to determine which applicants have physical and mental conditions that limit their ability to work. 
As we noted above, SSA began this type of update in the early 1990s, although the agency’s efforts have focused much more on the medical portion than labor market issues. In addition to continuing the medical updates, SSA will need to vigorously expand its efforts to more closely examine labor market changes. SSA’s results could yield updated information used to make decisions about whether or not applicants have the ability to perform their past work or any work that exists in the national economy. More fundamentally, the recent scientific advances and labor market changes discussed earlier raise issues about the programs’ basic design, goals, and orientation in an economy increasingly different from that which existed when these programs were first designed. Whereas the programs currently are grounded in assessing and providing benefits based on individuals’ incapacities, fully incorporating recent advances and changes could result in SSA assessing individuals with physical and mental conditions with a focus on their capacity to work and then providing them with, or helping them obtain, needed assistance to improve their capacity to work. Moreover, reorienting programs in this direction is consistent with increased expectations of people with disabilities and the integration of people with disabilities into the workplace, as reflected in the Americans with Disabilities Act. We have recommended in prior reports that SSA place a greater priority on work, design more effective means to more accurately identify and expand beneficiaries’ work capacities, and develop legislative packages for those areas where the agency does not have legislative authority to enact change. However, for people with disabilities who do not have a realistic or practical work option, long-term cash support would remain the best option. 
In reexamining the fundamental concepts underlying the design of the DI and SSI programs, approaches used by other disability programs may offer some valuable insights. For example, our prior review of three private disability insurers shows that they have fundamentally reoriented their disability systems toward building the productive capacities of people with disabilities, while not jeopardizing the availability of cash benefits for people who are not able to return to the labor force. These systems have accomplished this reorientation while using a definition of disability that is similar to that used by SSA’s disability programs. However, it is too early to fully measure the effect of these changes. In these private disability systems, the disability eligibility assessment process evaluates a person’s potential to work and assists those with work potential to return to the labor force. This process of identifying and providing services intended to enhance a person’s productive capacity occurs early after disability onset and continues periodically throughout the duration of the claim. In contrast, SSA’s eligibility assessment process encourages applicants to concentrate on their incapacities, and return-to-work assistance occurs, if at all, only after an often lengthy process of determining eligibility for benefits. SSA’s process focuses on deciding who is impaired sufficiently to be eligible for cash payments, rather than on identifying and providing the services and supports necessary for making a transition to work for those who can. While cash payments are important to individuals, the advances and changes discussed in this testimony suggest the option to shift the disability programs’ priorities to focus more on work. Reorienting the DI and SSI programs would have implications on their core elements—eligibility standards, the benefits structure, and the access to and cost of return-to-work assistance. 
We recognize that re-examining the programs at the broader program level raises a number of profound policy questions, including the following:

Program design and benefits offered - Would the definition of disability change? Would some beneficiaries be required to accept assistance to enhance work capacities as a precondition for benefits versus relying upon work incentives, time-limited benefits, or other means to encourage individuals to maximize their capacity to work? What can SSA accomplish through the regulatory process and what requires legislative action?

Accessibility and cost - Are new mechanisms needed to provide sufficient access to needed services? In the case of DI and SSI, what is the impact on the ties with the Medicare and Medicaid programs? Who will pay for the medical and assistive technologies, and will beneficiaries be required to defray costs? Would the cost of providing treatment and assistive technologies in the disability programs be higher than cash expenditures paid over the long term? Will net costs show that some expenditures could be offset with cost savings by paying reduced benefits?
Since the Disability Insurance (DI) and Supplemental Security Income (SSI) programs began, much has changed and continues to change in medicine, technology, the economy, and societal views and expectations of people with disabilities. GAO found that scientific advances, changes in the nature of work, and social changes have generally enhanced the potential for people with disabilities to work. Medical advances, such as organ transplantation, and assistive technologies, such as advances in wheelchair design, have given more independence to some individuals. At the same time, a service- and knowledge-based economy has opened new opportunities for people with disabilities, and societal changes have fostered the expectation that people with disabilities can work and have the right to work. GAO further found that DI and SSI disability criteria have not kept pace with these advances and changes. Depending on the claimant's impairment, decisions about eligibility for benefits can be based on both medical and labor market criteria. Finally, some steps to incorporate these advances and changes can be taken within the existing programs' design, but some would require more fundamental changes.
U.S. interests in South Korea involve a wide range of security, economic, and political concerns. The United States has remained committed to maintaining peace on the Korean Peninsula since the 1950 to 1953 Korean War. Although most of the property that the United States once controlled has been returned to South Korea, the United States maintains about 37,000 troops in South Korea, which are currently scattered across 41 troop installations and an additional 54 small camps and support sites. According to U.S. Forces Korea officials, many of the facilities there are obsolete, poorly maintained, and in disrepair to the extent that the living and working conditions in South Korea are considered to be the worst in the Department of Defense (DOD). We observed many of these conditions during our visits to U.S. facilities and installations in South Korea. While improvements have been made in recent years, U.S. military personnel still use, as shown in figure 1, some Korean War-era Quonset huts for housing. Improving the overall facilities used by the United States in South Korea will require an enormous investment. At the same time, rapid growth and urbanization in South Korea during the last several decades have created a greater demand for land and increased encroachments on areas used by U.S. forces. Consequently, many of the smaller U.S. camps and training areas that were originally located in isolated areas are now in the middle of large urban centers, where their presence has caused friction with local residents; urban locations also limit the ability of U.S. forces to train effectively. Figure 2 shows the boundaries of Yongsan Army Garrison and other U.S. installations that have become encircled by the city of Seoul. Historically, DOD has reported difficulties filling its military personnel assignments in South Korea, which are generally 1-year hardship tours in which 90 percent of the assigned military personnel are unaccompanied by their families. 
A DOD survey conducted in 2001 found that Army and Air Force personnel considered South Korea as the least desirable assignment and that many soldiers were avoiding service in South Korea by various means, including retirement and declining to accept command assignments. U.S. Forces Korea has wanted to make South Korea an assignment of choice by improving living and working conditions, modifying assignment policies to increase accompanied tours to 25 percent by 2010, and reducing the out-of-pocket expenses for personnel to maintain a second household in South Korea. To address these problems, military officials from the United States and South Korea signed the Land Partnership Plan on March 29, 2002. The LPP, as originally approved, was described as a cooperative U.S.-South Korean effort to consolidate U.S. installations and training areas, improve combat readiness, enhance public safety, and strengthen the U.S.-South Korean alliance. The United States views the plan as a binding agreement under the Status of Forces Agreement, not as a separate treaty. However, U.S. Forces Korea officials told us that South Korea views the plan as a treaty requiring approval by the South Korea National Assembly and that approval occurred on October 30, 2002. The three components of the plan are as follows: Installations—establishes a timeline for the grant of new land, the construction of new facilities, and the closure of installations. The plan calls for the number of U.S. military installations to drop from 41 to 23. To accomplish this, the military will close or partially close some sites, while enlarging or creating other installations. Training areas—returns training areas in exchange for guaranteed time on South Korean ranges and training areas. The plan calls for the consolidation and protection of remaining U.S. training areas. Safety easements—acknowledges that South Korean citizens are at risk of injury or death in the event of an explosion of U.S. 
weapons, provides a prioritized list of required safety easements, and establishes a procedure and timeline for enforcing the easements. The costs of the LPP must be shared between the United States and South Korea. U.S. funding is provided from the military construction and operations and maintenance accounts and from nonappropriated funds. The South Korean government provides host nation funds and funding obtained from sales of property returned to South Korea by the United States. As a general rule, the United States funds the relocation of units from camps that it wishes to close, and South Korea funds the relocation of units from camps South Korea has asked to be closed. The execution of the LPP is shown in figure 3. The target date for the completion of the LPP was December 31, 2011, although the timetable and the scale could be adjusted by mutual agreement. More information on the plan as originally envisioned is included in appendix II. U.S. military infrastructure funding in South Korea involves multiple organizations and sources. It involves 10 organizations from the United States (Army, Navy, Air Force, Marine Corps, Special Operations, Army and Air Force Exchange Service, Defense Logistics Agency, Department of Defense Dependents School, Medical Command, and Defense Commissary Agency), as well as construction funded by South Korea. These organizations provide funding for military construction using five different sources of money—U.S. military construction funds, U.S. operations and maintenance funds, U.S. nonappropriated funds, South Korea-funded construction, and South Korea combined defense improvement program funding. Figure 4 shows the sources of funding for the $5.6 billion that, until recently, was planned for infrastructure construction costs for U.S. installations in South Korea during the 2002 through 2011 time frame. 
Most of the approximately $2 billion projected cost of implementing the plan was expected to be paid for by the government of South Korea, with much of it financed through land sales from property returned by the United States. Figure 5 shows all planned funding sources and amounts for the plan. More information on funding and sequencing actions associated with the LPP, as originally approved, is included in appendix II. A wide array of military operations-related facilities (command and administrative offices, barracks, and maintenance facilities) and dependent-related facilities and services (family housing units; schools; base exchanges; morale, welfare, and recreation facilities; child care programs; and youth services) have recently been constructed or are in the process of being constructed in South Korea. Typically, as U.S. installations overseas are vacated and turned over to host governments, the status of forces agreements between the United States and host governments address any residual value remaining, at the time of release, of construction and improvements that were financed by the United States. The agreement in South Korea differs from the agreements used in some other overseas locations where the United States receives residual value for returned property—such as currently in Germany—in that South Korea is not obliged to make any compensation to the United States for any improvements made in facilities and areas or for the buildings and structures left there. In recent months, political dynamics in South Korea have been changing as DOD has been reassessing future overseas basing requirements. According to U.S. Forces Korea officials, there have always been groups in South Korea that have criticized the U.S. presence and have claimed that the U.S. presence hinders reconciliation between North and South Korea. Demonstrations against American military presence increased sharply during last year’s South Korean presidential election. 
South Koreans were angered in November 2002 by a U.S. military court’s acquittal of two American soldiers charged in association with a tragic training accident that claimed the lives of two South Korean schoolgirls in June 2002. The South Korean government wanted the two American soldiers who had been operating the vehicle involved in the accident turned over to South Korean authorities; however, they were tried in a U.S. military court. As a result, South Koreans demonstrated against U.S. forces in Korea, carried out isolated violence directed at U.S. soldiers, and practiced discrimination against Americans (such as businesses refusing to serve them). Subsequently, other groups demonstrated in support of the U.S. government. At the same time, the United States and South Korea were working to strengthen their alliance and to address issues involving North Korea’s active nuclear weapons program and the proliferation of its missile programs. In December 2002, the Secretary of Defense and the Defense Minister of South Korea agreed to conduct a Future of the Alliance study to assess the roles, missions, capabilities, force structure, and stationing of U.S. forces, including having South Korea assume the predominant role in its defense and increasing both South Korean and U.S. involvement in regional security cooperation. The results of the Future of the Alliance study are not expected until later this year. In February 2003, the Secretary of Defense testified before the Congress that the United States was considering the relocation of U.S. troops now based within and north of Seoul, including those near the demilitarized zone. Consideration of such a move would be in keeping with a broader reassessment of U.S. presence overseas that is now underway. In April 2003, the Deputy Assistant Secretary of Defense for Asian and Pacific Affairs and other U.S. officials met with officials of the South Korean Ministry of National Defense to discuss redeploying U.S. 
troops and relocating key military bases in South Korea. Following these discussions, the U.S. and Korean press reported that the United States would relocate from Yongsan Army Garrison in Seoul to an area located south of Seoul. According to the U.S. Deputy Assistant Secretary of Defense for Asian and Pacific Affairs, both South Korea and the United States have decided that this is an issue that cannot wait any longer for resolution. U.S. and South Korean officials are expected to hold more discussions to finalize the realignment of U.S. troops by fall 2003. Moreover, the Secretary of Defense has recently directed the acceleration of work, begun during the development of the 2001 Quadrennial Defense Review, related to the global positioning of U.S. forces and their supporting infrastructure outside the United States. In March 2003, the Secretary of Defense requested that the Under Secretary of Defense for Policy and the Chairman, Joint Chiefs of Staff, develop a comprehensive and integrated presence and basing strategy for the next 10 years. An Integrated Global Presence and Basing Strategy will build upon multiple DOD studies, including the Overseas Basing and Requirements Study, the Overseas Presence Study, and the U.S. Global Posture Study. In addition, the Integrated Global Presence and Basing Strategy will use information from the combatant commanders to determine the appropriate location of the infrastructure necessary to execute U.S. defense strategy. The Integrated Global Presence and Basing Strategy is not expected to be completed until the summer of 2003. However, we were recently told by DOD officials that the United States will likely concentrate its forces in South Korea in far fewer, though larger, installations than were initially envisioned under the LPP, and that over time the forces now located north of Seoul will be relocated south of Seoul. 
Although the Land Partnership Plan as approved was broad in scope, it was designed to address only a portion of the U.S. military's previously existing infrastructure needs in South Korea, and it left unresolved a number of significant land disputes. Specifically, the LPP covered about 37 percent of the construction costs planned at U.S. military installations in South Korea over the next 10 years, encompassing about $2 billion of the $5.6 billion that the U.S. military and South Korea planned to spend to improve the U.S. military infrastructure in South Korea from 2002 through 2011. It was intended to resolve 55 percent, or 49, of the 89 separate land disputes that were pending in South Korea in January 2003, which was considered a significant step forward. One example of a land dispute that would be resolved under the LPP involves Camp Hialeah, located on the southern tip of the Korean peninsula in the port city of Pusan, South Korea's second largest city. According to press reports, South Korea wanted this base returned because of its proximity to the port and the impediments it posed to urban redevelopment. However, no relocation agreement could be reached until the LPP included an agreement to begin relocating Camp Hialeah's functions to a new site in Noksan, South Korea, in 2008 and to close Camp Hialeah in 2011. According to press reports attributed to an official from the South Korean Ministry of Foreign Affairs and Trade, relocating in-city bases like Camp Hialeah would help lessen the potential tension between U.S. forces and neighboring communities. Although the plan was considered a major step forward, it was not designed to resolve a number of significant land disputes. As far back as 1982, negotiations over some land returns have been deadlocked and left unresolved. For example, the relocation of Yongsan Army Garrison remained unresolved because of its projected financial cost to South Korea. 
The relocation of the garrison has been and continues to be a politically sensitive, complex, and expensive issue for U.S. Forces Korea and the South Korean government. In 1991, the governments of the United States and South Korea signed an agreement to relocate the garrison by 1996. In 1993, the plan was suspended, largely because of the anticipated high cost and the lack of alternative locations for the garrison. More than a decade later, the relocation of Yongsan is an ongoing, contentious issue. Since the 1990s, U.S. military and South Korean officials have held discussions on moving the military base out of the city, including screening various suburb locations. In December 2002, the United States and South Korea agreed on the need to find a mutually acceptable way to relocate U.S. forces outside the city of Seoul as a result of the Future of the Alliance Study. DOD has had many construction projects underway in South Korea, both within and outside of the LPP. However, DOD-sponsored studies now underway examining future overseas presence requirements are likely to significantly change the number and locations for U.S. military bases in South Korea. As noted, we were recently told that the United States will likely concentrate its forces in far fewer, though larger, installations than were envisioned under the LPP and that, over time, the forces would be relocated south of Seoul. Therefore, a number of sites and facilities retained under the LPP are likely to be affected. Figure 6 shows the locations of U.S. troop installations in South Korea under the LPP, as originally approved. Except as otherwise provided by the LPP, South Korea is not obliged to compensate the United States for any improvements made in facilities and areas or for the buildings and structures left behind. 
This could be particularly important because of military infrastructure projects planned or underway in areas from which the United States is considering relocating its troops, including Seoul’s Yongsan Army Garrison and U.S. installations located north of Seoul, which, according to a U.S. Forces Korea official, had recently represented $1.3 billion in ongoing or planned construction projects. For example, construction projects in Yongsan included apartment high-rises for unaccompanied soldiers, a hospital, a sports and recreation complex, a mini-mall, and an overpass between Yongsan’s main and south posts. We discussed with U.S. Forces Korea officials the need to reassess construction projects under way or planned in South Korea and to delay the execution of some projects until better decision-making information becomes available. Subsequently, U.S. Forces Korea officials announced that they were reviewing all projects and that over $1 billion in ongoing and planned construction had been put on hold. Further, DOD recently submitted an amendment to the President’s fiscal year 2004 budget to the Congress to cancel about $5 million of construction projects planned for the garrison and to redirect $212.8 million of construction planned for the garrison and northern installations to an installation located south of Seoul. During the initial phase of our review we identified funding and other management challenges that could adversely affect the implementation of the Land Partnership Plan. As we considered these issues in light of the potential for even greater basing changes, we recognized that they could also affect the associated U.S. military construction projects throughout South Korea. First, the LPP is dependent on substantial amounts of funding that South Korea expects to realize through land sales from property returned by the United States, host-nation-funded construction, and U.S. military construction funds. While U.S. 
Forces Korea officials expect to build on this LPP framework for likely additional basing changes, the details have not been finalized for the broader changes. As U.S. Forces Korea revises its plans, competition for limited funding for other priorities could become an issue. Second, U.S. Forces Korea does not have a detailed road map to manage current and future facilities requirements in South Korea. The LPP, as originally approved, was dependent on substantial amounts of South Korean funding to be realized through land sales, host-nation- funded construction, and U.S. military construction funds. The extent to which these sources of funding would be required and available for broader infrastructure changes is not yet clear, particularly for the relocation of Yongsan Army Garrison. While U.S. officials expect the South Korean government to fund much of the cost of these additional basing changes, details have not yet been finalized. The South Korean government is also expected to remain responsible for providing funding for the relocation of forces now based at the Yongsan Army Garrison property, although those costs could be reduced by the fact that a residual number of U.S. and United Nations personnel are expected to remain at Yongsan. It should also be noted that the Yongsan Garrison property is expected to be used for municipal purposes and is not subject to resale to provide funding to support relocation of U.S. forces. At this point, insufficient information is available to determine precisely how many replacement facilities will be required for U.S. troops moving out of Yongsan Garrison and to anticipate any difficulties that might be encountered in obtaining the funding. However, if South Korea encounters problems or delays in acquiring needed lands and providing replacement facilities, future projects could be delayed. 
Figure 7 presents the amount of funding, as of May 2003, that the United States and South Korean governments expected to pay for the LPP—as originally approved—by fiscal year. The funding amounts for fiscal year 2004 and beyond are subject to revision. The LPP, as originally approved, was dependent on designating up to 50 percent of South Korea’s host nation funding for construction. Historically, the stability of host nation funding from South Korea has been subject to some uncertainty because international economic factors have played a part in determining the level of funding. South Korea host nation payments are paid in both South Korean won and U.S. dollars; consequently, a downturn in the South Korean economy or a sharp fluctuation in the South Korean currency could affect the South Korean government’s payments. For example, during South Korea’s economic downturn in 1998, host nation payments were less than expected (the United States received from South Korea $314.2 million of the $399 million that had been agreed to). Designating up to 50 percent of host nation funding for the LPP would also limit funding for readiness and other needs. Non-LPP readiness-related infrastructure funding shortages previously identified in readiness reports at the time of our visit to South Korea in November 2002 were estimated to be in the hundreds of millions of dollars and represented competing requirements for limited funding. Such needs included Air Force facilities at Osan and Kunsan ($338.2 million), Navy facilities at Pohang and Chinhae ($10.3 million), and Army facilities at Humphreys, Carroll, and Tango ($25.2 million). Recently, U.S. Forces Korea officials have also expressed the desire to increase from 10 percent to 25 percent the number of servicemembers in South Korea who are permitted to be accompanied by their families. 
While these plans have not been finalized, such an increase could be expected to significantly raise the demand for housing, schools, and other support services and could result in greater competition for U.S. and Korean funding. For example, U.S. Forces Korea officials estimated that the increased demand for housing alone would cost $900 million in traditional military construction funding and, to reduce costs, officials were exploring a build-to-lease program using Korean private-sector funding and host-nation-funded construction, where possible. In the past, funding from U.S. military construction accounts, which represent 13 percent of funding for the LPP as originally approved, has fluctuated. From 1990 through 1994, U.S. forces in South Korea did not receive any military construction funds, resulting in a significant backlog of construction projects. Implementation of the LPP was expected to involve a closely knit series of tasks to phase out some facilities and installations while phasing in new facilities and expanding other facilities and installations. U.S. Forces Korea was developing an implementation plan for each installation encompassed by the LPP and, at the time of our visit there, was developing a detailed, overarching implementation plan capable of integrating and controlling the multiple, sometimes simultaneous, actions needed to relocate U.S. forces and support their missions. According to U.S. Forces Korea officials, such a master plan is needed to accomplish training, maintain readiness, and control future changes. During our visits to U.S. installations in South Korea, we found that, in the absence of a completed master plan for implementation, installation commanders had varying interpretations of what infrastructure changes were to occur. U.S. Forces Korea officials told us that this was not unusual, given that detailed implementation plans were still being developed. 
At the same time, these officials emphasized the need for a detailed plan to guide future projects and to help minimize the costly changes that can occur when subsequent commanders have a different vision of the installations’ needs than their predecessors, which could lead to new interpretations of the LPP and more changes. In light of the potentially broader repositioning of forces in South Korea, the master plan under development could be substantially changed; thus, a significantly revised road map will be needed to manage future facilities requirements and changes in South Korea. As approved, the Land Partnership Plan represented an important step to reduce the size of the U.S. footprint in South Korea by leveraging the return of facilities and land to South Korea in order to obtain replacement facilities in consolidated locations. However, subsequent events suggest the LPP, as originally outlined, will require significant modification. Available data indicate that changes in the U.S. basing structure in South Korea are likely; therefore, a significant portion of the $5.6 billion in construction projects planned over the next 10 years is being reassessed based on currently expected basing changes and may need to be further reassessed when the results of ongoing overseas presence and basing studies are completed. The LPP was to require 10 years of intensive management to ensure implementation progressed as planned. The master plan U.S. Forces Korea officials are developing to guide its implementation will require significant revision to accommodate the more comprehensive changes in basing now anticipated and to identify funding requirements and division of funding responsibilities between the United States and South Korea. We recommend that the Secretary of Defense require the Commander, U.S. 
Forces Korea, to (1) reassess planned construction projects in South Korea as the results of ongoing studies associated with overseas presence and basing are finalized and (2) prepare a detailed South Korea-wide infrastructure master plan for the changing infrastructure for U.S. military facilities in South Korea, updating it periodically as needed, and identifying funding requirements and division of funding responsibilities between the United States and South Korea.

The Deputy Assistant Secretary of Defense for Asian and Pacific Affairs provided written comments to a draft of this report. DOD agreed with our recommendations and pointed out that it is taking actions that address our recommendations. In commenting on our recommendation to reassess planned construction projects in South Korea, DOD stated that U.S. Forces Korea is already reassessing all planned construction in South Korea and will ensure that all planned construction projects support decisions regarding global presence and basing strategy. In commenting on our recommendation for a detailed South Korea-wide infrastructure master plan, DOD stated that U.S. Forces Korea is already developing master plans for all enduring installations and, once decisions have been reached on global presence and basing strategy, they will ensure that all master plans are adjusted to support these decisions. DOD’s comments are reprinted in appendix IV. DOD also provided a separate technical comment, and we revised the report to reflect it.

We are sending copies of this report to the appropriate congressional committees, the Commander, U.S. Forces Korea, and the Director, Office of Management and Budget. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-5581. Key contributors to this report were Ron Berteotti, Roger Tomlinson, Nelsie Alcoser, Susan Woodward, and Ken Patton. 
To determine the scope and cost of the plan in relation to total infrastructure issues in South Korea, we analyzed provisions of the Land Partnership Plan (LPP), identified the scope and cost of construction projects outside of the LPP, compared the scope and cost of LPP construction projects to the scope and cost of all construction projects in South Korea, and analyzed some of the key unresolved infrastructure issues not included in the plan, such as the relocation of U.S. troops from Yongsan Army Garrison. We met with officials from the Joint Chiefs of Staff (Logistics Directorate and Strategy Division); Under Secretary of Defense for Policy (Office of Asia-Pacific Affairs); Deputy Under Secretary of Defense (Installations and Environment); U.S. Pacific Command, Headquarters Pacific Air Forces, U.S. Army Pacific, Marine Forces Pacific, U.S. Pacific Fleet; U.S. Forces Korea, Eighth U.S. Army and 7th Air Force; U.S. Department of State; U.S. Embassy (South Korea); and South Korea’s Defense Ministry to document their input to the plan. We visited 16 U.S. military installations and facilities in South Korea that are affected by the plan. We selected these installations and facilities because they provided a cross-section of the activities that are covered by the plan (i.e., some that will be closed, some that will be scaled back, some that will be expanded, some where new construction will take place, and some possible new installation locations). We also visited land transfer sites that remain unresolved and military construction projects that are not addressed in the plan to gain an understanding and perspective on the wide range of infrastructure issues affecting U.S. troops stationed in South Korea. 
To determine the implications of potential basing changes on the plan and other construction projects in South Korea, we obtained the views of officials from the Joint Chiefs of Staff (Logistics Directorate and Strategy Division); Under Secretary of Defense for Policy (Office of Asia-Pacific Affairs); and U.S. Forces Korea on the potential impact of changing defense policies. We conducted a literature review of U.S. and South Korean publications to collect information on the LPP and possible basing changes in South Korea. We also attended various congressional hearings, which discussed funding for U.S. Forces Korea construction projects and potential basing changes. We used this information to identify the costs of ongoing and planned construction associated with improving military infrastructure in areas where there is uncertainty about future U.S. presence—such as Yongsan Army Garrison and U.S. installations located north of Seoul. We did not verify the accuracy and completeness of this information.

To identify implementation challenges associated with the plan that could affect future U.S. military construction projects in South Korea, we met with officials from the above organizations and reviewed the Status of Forces Agreement, an agreement under Article IV of the Mutual Defense Treaty between South Korea and the United States, and other related agreements and defense guidance. We discussed challenges that must be addressed during implementation of the LPP and implementation issues associated with the plan that could affect future construction projects throughout South Korea. We performed our review from September 2002 through May 2003 in accordance with generally accepted government auditing standards.

The Land Partnership Plan (LPP) provides a comprehensive plan for more efficient and effective stationing of U.S. Forces in South Korea. The LPP is intended to strengthen the South Korea-U.S. 
alliance, improve the readiness posture of combined forces, reduce the overall amount of land granted for U.S. Forces Korea use, and enhance public support for both the South Korean government and U.S. Forces Korea, while positioning U.S. forces to meet alliance security requirements well into the future. According to U.S. Forces Korea officials, LPP imperatives are as follows:

The agreement should be based on readiness and security, not the amount of land involved.

The agreement should be comprehensive, allowing for land issues that cannot be resolved independently to be resolved as part of a package and ensuring stationing decisions that fit into a comprehensive vision for the disposition of U.S. forces.

When new land and facilities are ready for use, U.S. Forces Korea can release old land and facilities. U.S. Forces Korea needs all existing facilities and areas and can only return them when replacement facilities are available or the requirement is met in another manner.

The agreement should be binding under the Status of Forces Agreement. The LPP is not just an “agreement in principle” but also a commitment to take action, and it operates within the Status of Forces Agreement—which means there are no new rules.

The agreement should be self-financing—the costs of the LPP must be shared between the United States and South Korea. U.S. funding is provided from the military construction account. The South Korean government provides host nation funds and funding obtained from sales of property returned to South Korea by the United States. As a general rule, the United States funds the relocation of units from camps the United States wishes to close, and South Korea funds the relocation of units from camps that South Korea has asked the United States to close.

The execution of the LPP is shown in figure 1. The LPP has been negotiated under the authority of the Joint Committee under the Status of Forces Agreement. 
The Status of Forces Agreement gives the Joint Committee the authority and responsibility to determine the facilities and areas required for U.S. use in support of the United States/South Korea Mutual Defense Treaty. The Joint Committee established the Ad-hoc Subcommittee for LPP to develop and manage the LPP. The LPP components address installations, training areas, and safety easements.

Installations: The LPP reduces the number of U.S. installations from 41 to 23 and consolidates U.S. forces onto enduring installations. The LPP establishes a timeline for the grant of new land, the construction of new facilities, and the closure of installations. Figure 8 illustrates the sequence in which new lands are to be granted to the United States and their relationship to facilities that will be returned to South Korea from calendar years 2002 through 2011.

Training Areas: The LPP returns U.S. training areas in exchange for guaranteed time on South Korean ranges and training areas. To ensure the continued readiness of U.S. Forces Korea, the United States agrees to return certain granted facilities and areas and to accept the grant of joint use of certain South Korea military facilities and areas on a limited time-share basis as determined by the Status of Forces Agreement Joint Committee. The United States is expected to return approximately 32,186 acres, or 39,396,618 pyong, of granted training areas.

Table 1 shows the exclusive use of existing grants retained by U.S. Forces Korea. Table 2 shows training areas that will be provided on a temporary basis to U.S. Forces Korea. Table 3 shows new safety easements to be designated for training areas. Table 4 shows training areas that will be returned to South Korea under the LPP. Table 5 shows training areas where parts of the land will be returned to South Korea. Table 6 shows training facilities and areas that the South Korean government is expected to grant to the U.S. for joint use for the time specified. 
Safety Easements: According to U.S. Forces Korea officials, a safety easement is a defined distance from an explosives area within which personnel and structures must not be located; the required distance is directly related to the quantity and types of explosives and ammunition present. The presence of Korean citizens in areas requiring explosive safety easements has placed them at risk of injury or death in the event of an explosion. Tables 7, 8, and 9 show the various tiers of easements established under the LPP at U.S. military installations. Upper tier easements are those required at enduring installations; middle tier easements are required during armistice, but will not be required after a change in the armistice condition; and lower tier easements are those required at closing installations. U.S. Forces Korea shall enforce safety easements inside U.S. installations, while South Korea will enforce safety easements outside U.S. installations.
The U.S.-South Korean Land Partnership Plan (LPP), signed in March 2002, was designed to consolidate U.S. installations, improve combat readiness, enhance public safety, and strengthen the U.S.-South Korean alliance by addressing some of the causes of periodic tension associated with the U.S. presence in South Korea. The Senate report on military construction appropriations for fiscal year 2003 directed GAO to review the LPP. GAO adjusted its review to also address the effect of ongoing reassessments of U.S. overseas presence upon the LPP and other infrastructure needs. In this report, GAO assessed (1) the scope of the LPP, (2) the implications of proposals to change basing in South Korea for the LPP and other construction projects, and (3) implementation challenges associated with the LPP that could affect future U.S. military construction projects in South Korea.

Although broad in scope, the LPP was not designed to resolve all U.S. military infrastructure issues. Specifically, the plan was intended to resolve 49 of the 89 separate land disputes that were pending in South Korea. Of the land disputes the plan did not address, the most politically significant, complex, and expensive dispute involves the potential relocation of U.S. forces from Yongsan Army Garrison, located in the Seoul metropolitan area. As a result, the LPP, as approved, covered about 37 percent of the $5.6 billion in construction costs planned at U.S. military installations in South Korea over the next 10 years.

Ongoing reassessments of U.S. overseas presence and basing requirements could diminish the need for and alter the locations of many construction projects in South Korea, both those associated with the LPP and those unrelated to it. For example, over $1 billion of ongoing and planned construction associated with improving military infrastructure at Yongsan Army Garrison and U.S. installations located north of Seoul--areas where there is uncertainty about future U.S. 
presence--has recently been put on hold, canceled, or redirected to an installation located south of Seoul. GAO identified some key challenges that could adversely affect the implementation of the LPP and future U.S. military construction projects throughout South Korea. First, the plan relies on various funding sources, including funding realized through land sales from property returned by the United States. The extent to which these sources of funding would be required and available for broader infrastructure changes is not yet clear. Second, a master plan would be needed to guide future military construction to reposition U.S. forces and basing in South Korea.
Our objectives were to determine whether participating judges’ contributions for the 3 plan years ending on September 30, 2007, funded at least 50 percent of the JSAS costs and, if not, what adjustments in the contribution rates would be needed to achieve the 50 percent ratio. To satisfy our objectives, we used the normal cost rates determined by actuarial valuations of the system for each of the 3 fiscal years. We also examined participants’ contributions, the federal government’s contribution, and other relevant information in each plan year’s JSAS actuarial valuation report. An independent accounting firm hired by the Administrative Office of the United States Courts (AOUSC) audited the JSAS financial and actuarial information included in the JSAS actuarial valuation reports, with input from the plan’s actuary regarding relevant data, such as the actuarial present value of accumulated plan benefits. The plan’s actuary certified those amounts that are included in the JSAS actuarial valuation reports. We discussed the contents of the JSAS actuarial valuation reports with officials from AOUSC for the 3 plan years (2005 through 2007). In addition, we discussed with the plan’s actuary the actuarial assumptions made to project future benefits of the plan. We noted that the JSAS actuarial valuation for plan years 2005 through 2007 used a 0.0 percent salary increase per year, above inflation, in contrast to the September 30, 2007, Civil Service Retirement and Disability System valuation, which used a 0.75 percent salary increase per year, above inflation. We determined that the use of a 0.0 percent salary increase for the JSAS is reasonable and consistent with a recent trend analysis we performed on judicial pay plans. We also reviewed the qualifications of the plan’s actuary who prepared the JSAS actuarial valuation reports for plan years 2005 to 2007, and nothing came to our attention that would lead us to question the qualifications of the actuary. 
We did not independently audit the JSAS actuarial valuation reports or the actuarially calculated cost figures. We conducted this performance audit in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We performed our review in Washington, D.C., from June 2008 through August 2008. We made a draft of this report available to the Director of AOUSC for review and comment.

Depending on the circumstances, judicial participants may be eligible for some combination of five retirement plans, including the Civil Service Retirement System (CSRS) or the Federal Employees’ Retirement System (FERS). Three other separate retirement plans, described in appendix I, apply to various groups of judges in the federal judiciary, with JSAS being available to participants in all three retirement plans to provide annuities to their surviving spouses and children. JSAS was created in 1956 to help provide financial security for the families of deceased federal judges. It provides benefits to surviving eligible spouses and dependent children of judges who participate in the plan. Judges may elect coverage within 6 months of taking office, 6 months after getting married, if they were not married when they took office, 6 months after being elevated to a higher court, or during an open season authorized by statute. Active and senior judges currently contribute 2.2 percent of their salaries to JSAS, and retired judges contribute 3.5 percent of their retirement salaries to JSAS. 
Upon a judge’s death, the surviving spouse is to receive an annual annuity that equals 1.5 percent of the judge’s average annual salary during the 3 highest consecutive paid years (commonly known as the high-3) times the judge’s years of creditable service. The annuity may not exceed 50 percent of the high-3 and is guaranteed to be no less than 25 percent. Separately, an unmarried dependent child under age 18, or 22 if a full-time student, receives a survivor annuity that is equal to 10 percent of the judge’s high-3 or 20 percent of the judge’s high-3 divided by the number of eligible children, whichever is smaller. JSAS annuitants receive an annual adjustment in their annuities at the same time, and by the same percentage, as any cost-of-living adjustment (COLA) received by CSRS annuitants. Spouses and children are also eligible for Social Security survivor benefits.

Since its inception in 1956, JSAS has been amended several times. Because of concern that too few judges were participating in the plan, Congress made broad reforms effective in 1986 with the Judicial Improvements Act of 1985. The 1985 act (1) increased the annuity formula for surviving spouses from 1.25 percent to the current 1.5 percent of the high-3 for each year of creditable service and (2) changed the provisions for surviving child benefits to relate benefit amounts to judges’ high-3 rather than the specific dollar amounts provided in 1976 by the Judicial Survivors’ Annuities Reform Act. In recognition of the significant benefit improvements that were made, the 1985 act increased the amounts that judges were required to contribute from 4.5 percent to 5 percent of their salaries, including retirement salaries. The 1985 act also changed the requirements for government contributions to the plan. Under the 1976 Judicial Survivors’ Annuities Reform Act, the government matched the judges’ contributions of 4.5 percent of salaries and retirement salaries. 
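The benefit rules described above lend themselves to a short illustration. The sketch below encodes the formulas as stated in the report; the $150,000 high-3 figure and the function names are hypothetical, used only to show the 50 percent cap, the 25 percent floor, and the per-child split.

```python
def spouse_annuity(high3, years_of_service):
    """Annual survivor annuity for an eligible spouse: 1.5 percent of the
    judge's high-3 salary per year of creditable service, with a floor of
    25 percent and a cap of 50 percent of the high-3."""
    pct = 0.015 * years_of_service
    return high3 * max(0.25, min(0.50, pct))

def child_annuity(high3, eligible_children):
    """Annual annuity per eligible child: the smaller of 10 percent of the
    high-3 or 20 percent of the high-3 divided by the number of children."""
    return min(0.10 * high3, 0.20 * high3 / eligible_children)

# Hypothetical example: a $150,000 high-3 and 20 years of service.
print(spouse_annuity(150_000, 20))   # 30 percent of high-3: about 45,000
print(child_annuity(150_000, 3))     # capped by the 20 percent split: about 10,000 each
```

With 10 years of service the 25 percent floor applies, and with more than about 33 years the 50 percent cap does.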
The 1985 act modified this by specifying that the government would contribute the amounts necessary to fund any remaining cost over the future lifetime of current participants. That amount is limited to 9 percent of total covered salary each year. In response to concerns that required contributions of 5 percent may have created a disincentive to participate, Congress enacted the Federal Courts Administration Act of 1992. Under this act, participants’ contribution requirements were reduced to 2.2 percent of salaries for active and senior judges and 3.5 percent of retirement salaries for retired judges. The 1992 act also significantly increased benefits for survivors of retired judges. This increase was accomplished by including years spent in retirement in the calculation of creditable service and the high-3 salary averages. Additionally, the 1992 act allowed judges to stop contributing to the plan if they ceased to be married and granted benefits to survivors of any judge who died in the interim between leaving office and the commencement of a deferred annuity.

As of September 30, 2007, there were 1,303 active and senior judges, 223 retired judges, and 333 survivor annuitants covered under JSAS, according to the JSAS actuarial valuation report for plan year 2007. JSAS is financed by judges’ contributions and direct appropriations in an amount estimated to be sufficient to fund the future benefits paid to survivors of current and deceased participants. The plan’s actuary, using the plan’s funding method—in this case, the aggregate cost method—determines the plan’s normal cost rate and the normal costs for each plan year. The normal cost rate is the level percentage of future salaries that will be sufficient, along with investment earnings and the plan’s assets, to pay the plan’s benefits for current participants and beneficiaries. 
Normal cost calculations are estimates and require that many actuarial assumptions be made about the future, including, but not limited to, mortality rates, turnover rates, returns on investment, salary increases, and COLA increases over the life spans of current participants and beneficiaries. There are many acceptable actuarial methods for calculating normal cost. Regardless of which cost method is chosen, the expected total long-term cost of the plan should be the same; however, year-to-year costs may differ, depending on the cost method used. The expected annual federal, actuarially recommended contribution is the product of the federal government’s contribution rate and the participating judges’ salaries. However, the actual federal government contribution is approved through annual appropriations, which have varied, both above and below the actuarially recommended amount. To determine the actuarially recommended annual contribution of the federal government, AOUSC, which is responsible for the administration of the JSAS, engages an enrolled actuary to perform the calculation of funding needed based on the difference between the present value of the expected future benefit payments to participants and the present value of net assets in the plan. Appendix II provides more details on the methodology used to determine the federal government’s contribution rate and lump sum payments.

For JSAS plan years 2005 through 2007, the participating judges contributed, on average, about 54 percent of the plan’s costs. In plan years 2005 and 2006, participating judges paid slightly more than 61 percent and 50 percent of JSAS normal costs, respectively, and in plan year 2007, they paid slightly less than 50 percent of JSAS normal costs. Table 1 shows the judges’ and the federal government’s contribution rates and shares of JSAS’ normal costs (using the aggregate cost method, which is discussed in appendix II) for the period covered in our review. 
The judges’ and the federal government’s contribution rates for each of the 3 years shown in table 1 were based on the actuarial valuations that occurred at the end of the prior year. For example, the judges’ contribution rate of 2.32 percent and the federal government’s contribution rate of 1.48 percent in plan year 2005 were based on the September 30, 2004, valuation contained in the plan year 2005 JSAS report. The total normal costs expressed as a percentage of the present value of participants’ future salaries shown in table 1 increased from 3.8 percent in plan year 2005 to 5.13 percent in plan year 2007. The judges’ share of the JSAS normal costs decreased from approximately 61 percent in plan year 2005 to approximately 50 percent in plan years 2006 and 2007. The federal government’s share of JSAS normal costs increased from approximately 39 percent in plan year 2005 to approximately 50 percent in plan years 2006 and 2007. During those same years, the government’s contribution rates increased from 1.48 percent of salaries in plan year 2005 to 2.5 percent of salaries in plan year 2006, and then to 2.59 percent in plan year 2007. The increase in the federal government’s contribution rates was a result of the increase in normal costs resulting from several combined factors, such as changes in actuarial assumptions; lower-than-expected investment experience on plan assets; demographic changes (retirement, death, disability, new members, and pay increases); and an increase in plan benefit obligations. However, the majority of the increase in the federal government’s contribution rate is because of changes in actuarial assumptions and, to a lesser degree, the government’s contributing less than the actuarially recommended amounts in plan years 2005, 2006, and 2007.

Based on our review of the judges’ contribution rates for the JSAS, we determined that there was no need for any adjustments in the judges’ contribution rate. 
JSAS actuarial reports for the 3 years under review show that participating judges contributed at least 50 percent of JSAS normal costs, as required by the Federal Courts Administration Act, in plan years 2005 and 2006, and slightly below half in plan year 2007. As shown in table 1 above, the judges’ average contribution for JSAS normal costs for this review period was approximately 54 percent, which exceeded the 50 percent contribution goal for judges. Table 2 provides a summary of the percentage share of contribution for judges and the federal government over the past 9 years. As shown above, the judges’ contribution share, in any given year, may vary from the 50 percent contribution goal, either exceeding or not meeting this goal. The judges’ average contribution share for the 9-year period was approximately 60 percent. Therefore, there is no reason to modify the judges’ contribution rates at this time.

We requested comments on a draft of this report from the Director of AOUSC or his designee. In a letter dated September 10, 2008, the Director provided written comments on the report, which we have reprinted in appendix III. AOUSC also provided technical comments, which we have incorporated as appropriate. In its comments, AOUSC stated that our report showed that for a third consecutive triennial cycle, judges have paid a greater share of the cost of this system. AOUSC stated that our report showed that over the past 9 years, judges’ contributions have funded approximately 60 percent of the costs of JSAS. In AOUSC’s view, we did not present in our report the downward adjustment that would be needed to the participating judges’ contribution rates to attain the 50 percent level, and this omission is not consistent with Congress’s intent in enacting the Federal Courts Administration Act of 1992. We disagree with AOUSC’s view as to the purpose of section 201(i) of the Act. 
Since enactment, we have interpreted this section as providing a minimum percentage of the costs of the program to be borne by its participants because the statute requires us to recommend adjustments when the judges’ contributions have not achieved 50 percent of the costs of the fund. We do not view the section as calling for parity between the participants and the federal government with respect to funding the program. For the 3 years covered by this review, we determined and reported that judges’ contributions represented approximately 54 percent of the normal costs of JSAS, and therefore, an adjustment to the judges’ contribution rates was not needed under the existing legislation because the judges’ contributions achieved 50 percent of JSAS costs. We have consistently applied this interpretation of the Act’s requirements in all of our previously mandated reviews. However, if one were to interpret the Act as calling for an equal sharing of the program’s cost between participants and the government, then, on the basis of the information contained in the JSAS actuarial reports over the last 9 years, participating judges’ future contributions would have to decrease a total of 0.32 percentage points below the current 2.2 percent of salaries for active and senior judges and 3.5 percent for retired judges in order to fund 50 percent of JSAS costs over the last 9 years. If the decrease were distributed equally among the judges, those currently contributing 2.2 percent of salaries would have to contribute 1.88 percent, and those currently contributing 3.5 percent of retirement salaries would have to contribute 3.18 percent. We have not declined to include downward adjustment information, as AOUSC states, but we are not recommending such an adjustment because of our interpretation of the statute’s requirements.

We are sending copies of this report to interested congressional committees and the Director of AOUSC. 
Copies of this report will be made available to others upon request. This report is also available at no charge on the GAO Web site at http://www.gao.gov. Please contact Steven J. Sebastian at (202) 512-3406 or [email protected], or Joseph A. Applebaum at (202) 512-6336 or [email protected], if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report were Julie Phillips, Assistant Director; Jehan Abdel-Gawad; and Kwabena Ansong.

The Administrative Office of the United States Courts (AOUSC) administers three retirement plans for judges in the federal judiciary. The Judicial Retirement System automatically covers United States Supreme Court justices; federal circuit and district court judges; and territorial district court judges; and is available, at their option, to the Administrative Assistant to the Chief Justice; the Director of AOUSC; and the Director of the Federal Judicial Center. The Judicial Officers’ Retirement Fund is available to bankruptcy and full-time magistrate judges. The United States Court of Federal Claims Judges’ Retirement System is available to the United States Court of Federal Claims judges. Also, judges who are not automatically covered under the Judicial Retirement System may opt to participate in the Federal Employees’ Retirement System (FERS) or elect to participate in the Judicial Retirement System for bankruptcy judges, magistrate judges, or United States Court of Federal Claims judges. Judges who retire under the judicial retirement plans generally continue to receive the full salary amounts that were paid immediately before retirement, assuming the judges met the age and service requirements. 
Retired territorial district court judges generally receive the same cost-of-living adjustment that Civil Service Retirement System retirees receive, except that their annuities cannot exceed 95 percent of an active district court judge’s salary. United States Court of Federal Claims judge retirees continue to receive the same salary payable to active United States Court of Federal Claims judges. Those in the Judicial Retirement System and the United States Court of Federal Claims Judges’ Retirement System are eligible to retire when the number of years of service and the judge’s age total at least 80, with a minimum retirement age of 65, and service ranging from 10 to 15 years. Those in the Judicial Officers’ Retirement Fund are eligible to retire at age 65 with at least 14 years of service or may retire at age 65 with 8 years of service on a less-than-full-salary retirement. Participants in all three judicial retirement plans are required to contribute to and receive Social Security benefits.

Aggregate funding method. This method, as used by the Judicial Survivors’ Annuities System (JSAS) plan, defines the normal cost rate as the level percentage of future salaries that will be sufficient, along with investment earnings and the plan’s assets, to pay the plan’s benefits for current participants and beneficiaries. The following discussion is intended to illustrate the use of the aggregate funding method. For plan year 2007, the JSAS’s actuary estimated the present value of future benefits for participating judges and beneficiaries was $649,628,473 and the JSAS had assets amounting to $491,788,627. The difference between these amounts, $157,839,846, must be financed through future contributions to be paid by the participating judges and the federal government. 
Using the same assumptions as used to estimate the present value of future benefits, the actuary estimated the present value of participating judges’ future salaries to be $3,078,464,410 so that the amount to be financed represented 5.13% ($157,839,846 divided by $3,078,464,410) of the future participating judges’ salaries. This percentage is the JSAS’s normal cost rate. If all the actuarial assumptions proved exactly correct, then a total contribution of 5.13% of the participating judges’ salaries annually would make up the difference between the JSAS’s future payments and its assets (the $157,839,846 mentioned above). The JSAS’s actuary also estimated the present value of participating judges’ future contributions to be $78,123,909. Thus the federal government’s share for plan year 2007 is the difference between $157,839,846 and $78,123,909, or $79,715,937. Federal government’s actuarially recommended contribution rate. The federal government’s actuarially recommended contribution rate is equal to the federal government’s share of future financing ($79,715,937) divided by the present value of the participating judges’ future salaries ($3,078,464,410). For the plan year 2007 the rate was 2.59% ($79,715,937 divided by $3,078,464,410). Thus, the actuarially recommended federal contribution is the product of the federal government’s actuarially recommended contribution rate and the participating judges’ salaries. The federal government’s contribution is approved through an annual appropriation. It has varied, both above and below the actuarially recommended amount. Lump sum payout. Under JSAS, a lump sum payout may occur upon the dissolution of marriage either through divorce or death of spouse. Payroll contributions cease, but previous contributions remain in JSAS. 
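The arithmetic above can be traced in a short sketch. This is only an illustration of the aggregate funding method using the plan year 2007 figures quoted in the text; the variable names are ours, and the actuary's actual valuation model is more detailed than this.

```python
# Aggregate funding method, plan year 2007 figures from the JSAS actuary.
pv_future_benefits = 649_628_473    # present value of future benefits
plan_assets        = 491_788_627    # JSAS assets
pv_future_salaries = 3_078_464_410  # present value of judges' future salaries
pv_judge_contribs  = 78_123_909     # present value of judges' future contributions

# Amount that must be financed through future contributions
unfunded = pv_future_benefits - plan_assets             # $157,839,846

# Normal cost rate: level percentage of future salaries
normal_cost_rate = unfunded / pv_future_salaries        # about 5.13%

# Federal government's share and actuarially recommended contribution rate
gov_share = unfunded - pv_judge_contribs                # $79,715,937
gov_rate  = gov_share / pv_future_salaries              # about 2.59%

print(f"Normal cost rate:             {normal_cost_rate:.2%}")
print(f"Government contribution rate: {gov_rate:.2%}")
```

Note that the judges' share of the normal cost rate is simply the difference between the two rates, about 2.54 percent of future salaries.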
Also, if there is no eligible surviving spouse or child upon the death of a participating judge, the lump sum payout to the judge’s designated beneficiaries is computed as follows: Lump sum payout equals the total amount paid into the plan by the judge plus 3 percent annual interest accrued, less 2.2 percent of salaries for each participating year (forfeited amount). In effect, the interest plus any amount contributed in excess of 2.2 percent of judges’ salaries will be refunded.
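The lump sum payout formula can be sketched as follows. The salary history here is hypothetical, and we assume the 3 percent interest compounds annually on the running balance of contributions; the statute's exact accrual conventions may differ.

```python
def lump_sum_payout(salaries, contributions, rate=0.03):
    """Total contributions plus 3% annual interest accrued, less
    2.2% of salary for each participating year (the forfeited amount)."""
    balance = 0.0
    for c in contributions:
        # Accrue a year of interest on the balance, then add the year's deposit.
        # (Assumed ordering; the plan's actual accrual rules may differ.)
        balance = balance * (1 + rate) + c
    forfeited = sum(0.022 * s for s in salaries)
    return balance - forfeited

# Hypothetical example: three years at a $165,200 salary, contributing
# at the 3.5 percent retired-judge rate.
salaries = [165_200] * 3
contribs = [0.035 * s for s in salaries]
payout = lump_sum_payout(salaries, contribs)
```

As the text notes, the payout effectively refunds the accrued interest plus any contributions in excess of 2.2 percent of salary, which here is the 1.3-percentage-point spread between the 3.5 percent contributions and the 2.2 percent forfeited amount.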
The Judicial Survivors' Annuities System (JSAS) was created in 1956 to provide financial security for the families of deceased federal judges. It provides benefits to eligible spouses and dependent children of judges who elect coverage within 6 months of taking office, 6 months after getting married, 6 months after being elevated to a higher court, or during an open season authorized by statute. Active and senior judges currently contribute 2.2 percent of their salaries to JSAS, and retired judges contribute 3.5 percent of their retirement salaries to JSAS. Pursuant to the Federal Courts Administration Act of 1992 (Pub. L. No. 102-572), GAO is required to review JSAS costs every 3 years and determine whether the judges' contributions fund at least 50 percent of the plan's costs during the 3-year period. If the contributions fund less than 50 percent of these costs, GAO is to determine what adjustments to the contribution rates would be needed to achieve the 50 percent ratio. For the 2005 to 2007 time frame covered by this review, the participating judges funded approximately 54 percent of JSAS costs, and the federal government funded 46 percent. The increase in the government's contribution rate over the 3-year period was a result of increases in costs. The increase in costs reflected the combined effects of changes in actuarial assumptions; lower-than-expected rates of return on plan assets; demographic changes such as retirement, death, disability, new members, and pay increases; as well as an increase in plan benefit obligations. GAO determined that an adjustment to the judges' contribution rate was not needed because their average contribution share for the 3-year period exceeded the 50 percent minimum contribution goal specified by law. GAO examined the annual share of normal costs covered by judges' contributions over a 9-year period and found that, on average, the participating judges funded approximately 60 percent of JSAS's costs.
In August 1990, Iraq invaded Kuwait, and the United Nations imposed sanctions against Iraq. Security Council resolution 661 of 1990 prohibited all nations from buying and selling Iraqi commodities, except for food and medicine. Security Council resolution 661 also prohibited all nations from exporting weapons or military equipment to Iraq and established a sanctions committee to monitor compliance and progress in implementing the sanctions. The members of the sanctions committee were members of the Security Council. Subsequent Security Council resolutions specifically prohibited nations from exporting to Iraq items that could be used to build chemical, biological, or nuclear weapons. In 1991, the Security Council offered to let Iraq sell oil under a U.N. program to meet its people’s basic needs. The Iraqi government rejected the offer, and over the next 5 years, the United Nations reported food shortages and a general deterioration in social services. In December 1996, the United Nations and Iraq agreed on the Oil for Food program, which permitted Iraq to sell up to $1 billion worth of oil every 90 days to pay for food, medicine, and humanitarian goods. Subsequent U.N. resolutions increased the amount of oil that could be sold and expanded the humanitarian goods that could be imported. In 1999, the Security Council removed all restrictions on the amount of oil Iraq could sell to purchase civilian goods. The United Nations and the Security Council monitored and screened contracts that the Iraqi government signed with commodity suppliers and oil purchasers, and Iraq’s oil revenue was placed in a U.N.-controlled escrow account. In May 2003, U.N. resolution 1483 requested the U.N. Secretary General to transfer the Oil for Food program to the CPA by November 2003. (Appendix II contains a detailed chronology of Oil for Food program and sanctions events.) 
The United Nations allocated 59 percent of the oil revenue for the 15 central and southern governorates, which were controlled by the central government; 13 percent for the 3 northern Kurdish governorates; 25 percent for a war reparations fund for victims of the Iraqi invasion of Kuwait in 1990; and 3 percent for U.N. administrative costs, including the costs of weapons inspectors. From 1997 to 2002, the Oil for Food program was responsible for more than $67 billion of Iraq's oil revenue. With a large portion of this revenue, the United Nations provided food, medicine, and services to 24 million people and helped the Iraqi government supply goods to 24 economic sectors. Despite concerns that sanctions may have worsened the humanitarian situation, the Oil for Food program appears to have helped the Iraqi people. According to the United Nations, the average daily food intake increased from around 1,275 calories per person per day in 1996 to about 2,229 calories at the end of 2001. Malnutrition rates for children under 5 fell by more than half. In February 2002, the United Nations reported that the Oil for Food program had considerable success in several sectors such as agriculture, food, health, and nutrition by arresting the decline in living conditions and improving the nutritional status of the average Iraqi citizen. From 1997 through 2002, we estimate that the former Iraqi regime acquired $10.1 billion in illegal revenues—$5.7 billion in oil smuggled out of Iraq and $4.4 billion in surcharges on oil sales and illicit charges from suppliers exporting goods to Iraq through the Oil for Food program. The United Nations, through OIP and the Security Council’s Iraq sanctions committee, was responsible for overseeing the Oil for Food program. However, the Security Council allowed the Iraqi government, as a sovereign entity, to negotiate contracts directly with purchasers of Iraqi oil and suppliers of commodities. 
This structure, in addition to the uncertain oversight roles of OIP and the sanctions committee, was an important factor in enabling Iraq to levy illegal surcharges and illicit commissions. U.N. external audit reports contained no findings of program fraud. Summaries of internal audit reports provided to GAO pointed to some operational concerns in procurement, coordination, monitoring, and oversight. We estimate that, from 1997 through 2002, the former Iraqi regime acquired $10.1 billion in illegal revenues—$5.7 billion through oil smuggled out of Iraq and $4.4 billion through surcharges against oil sales and illicit commissions from commodity suppliers. This estimate is higher than the $6.6 billion in illegal revenues we reported in May 2002. We updated our estimate to include (1) oil revenue and contract amounts for 2002, (2) updated letters of credit from prior years, and (3) newer estimates of illicit commissions from commodity suppliers. Appendix I describes our methodology for determining illegal revenues gained by the former Iraqi regime. Oil was smuggled out through several routes, according to U.S. government officials and oil industry experts. Oil entered Syria by pipeline, crossed the borders of Jordan and Turkey by truck, and was smuggled through the Persian Gulf by ship. Jordan maintained trade protocols with Iraq that allowed it to purchase heavily discounted oil in exchange for up to $300 million in Jordanian goods. Syria received up to 200,000 barrels of Iraqi oil a day in violation of the sanctions. Oil smuggling also occurred through Turkey and Iran. In addition to revenues from oil smuggling, the Iraqi government levied surcharges against oil purchasers and commissions against commodity suppliers participating in the Oil for Food program. According to some Security Council members, the surcharge was up to 50 cents per barrel of oil and the commission was 5 to 15 percent of the commodity contract. 
In our 2002 report, we estimated that the Iraqi regime received a 5-percent illicit commission on commodity contracts. However, a September 2003 Department of Defense review found that at least 48 percent of 759 Oil for Food contracts that it reviewed were potentially overpriced by an average of 21 percent. Food commodity contracts were the most consistently overpriced, with potential overpricing identified in 87 percent of the contracts by an average of 22 percent. The review also found that the use of middlemen companies potentially increased contract prices by 20 percent or more. Defense officials found 5 contracts that included “after-sales service charges” of between 10 and 20 percent. In addition, interviews by U.S. investigators with high-ranking Iraqi regime officials, including the former oil and finance ministers, confirmed that the former regime received a 10-percent commission from commodity suppliers. According to the former oil minister, the regime instituted a fixed 10-percent commission in early 2001 to address a prior “compliance” problem with junior officials. These junior officials had been reporting lower commissions than what they had negotiated with suppliers and pocketing the difference. Both OIP, as an office within the U.N. Secretariat, and the Security Council’s sanctions committee were responsible for overseeing the Oil for Food Program. However, the Iraqi government negotiated contracts directly with purchasers of Iraqi oil and suppliers of commodities. While OIP was to examine each contract for price and value, it is unclear how it performed this function. The sanctions committee was responsible for monitoring oil smuggling, screening contracts for items that could have military uses, and approving oil and commodity contracts. The sanctions committee responded to illegal surcharges on oil purchases, but it is unclear what actions it took to respond to commissions on commodity contracts. U.N. 
Security Council resolutions and procedures recognized the sovereignty of Iraq and gave the Iraqi government authority to negotiate contracts and decide on contractors. Security Council resolution 986 of 1995 authorized states to import petroleum products from Iraq, subject to the Iraqi government’s endorsement of transactions. Resolution 986 also stated that each export of goods would be at the request of the government of Iraq. Security Council procedures for implementing resolution 986 further stated that the Iraqi government or the United Nations Inter-Agency Humanitarian Program would contract directly with suppliers and conclude the appropriate contractual arrangements. Iraqi control over contract negotiations was an important factor in allowing Iraq to levy illegal surcharges and illicit commissions. When the United Nations first proposed the Oil for Food program in 1991, it recognized this vulnerability. At that time, the Secretary General proposed that the United Nations, an independent agent, or the government of Iraq be given the responsibility to negotiate contracts with oil purchasers and commodity suppliers. The Secretary General concluded that it would be highly unusual or impractical for the United Nations or an independent agent to trade Iraq’s oil or purchase commodities. He recommended that Iraq negotiate the contracts and select the contractors. However, he stated that the United Nations and Security Council would have to ensure that Iraq’s contracting did not circumvent the sanctions and was not fraudulent. The Security Council further proposed that U.N. agents review contracts and compliance at Iraq’s oil ministry, but Iraq refused these conditions. Iraqi government control over contracts applied to oil purchases, all commodities purchased for the 15 central and southern governorates, and food and medical supplies purchased in bulk by the central government for the three autonomous Kurdish governorates in the north. 
The rest of the program in the north was run by nine specialized U.N. agencies and included activities such as distributing food rations and constructing or rehabilitating schools, health clinics, power generation facilities, and houses. OIP administered the Oil for Food program from December 1996 to November 2003. Under Security Council resolution 986 of 1995 and a memorandum of understanding between the United Nations and the Iraqi government, OIP monitored the sale of Iraq’s oil and its purchase of commodities and the delivery of goods, and accounted for the program’s finances. The United Nations received 3 percent of Iraq’s oil export proceeds for its administrative and operational costs, which included the cost of U.N. weapons inspections. The sanctions committee’s procedures for implementing resolution 986 stated that independent U.N. inspection agents were responsible for monitoring the quality and quantity of the oil shipped. The agents were authorized to stop shipments if they found irregularities. OIP hired a private firm to monitor Iraqi oil sales at exit points. However, the monitoring measures contained weaknesses. According to U.N. reports and a statement from the monitoring firm, the major offshore terminal at Mina al-Basra did not have a meter to measure the oil pumped nor could onshore storage capacity be measured. Therefore, the U.N. monitors could not confirm the volume of oil loaded onto vessels. Also, in 2001, the oil tanker Essex took a large quantity of unauthorized oil from the platform when the monitors were off duty. In December 2001, the Security Council required OIP to improve the monitoring at the offshore terminal. It is unclear what actions OIP took. As part of its strategy to repair Iraq’s oil infrastructure, the CPA had planned to install reliable metering at Mina al-Basra and other terminals, but no contracts have been let. OIP also was responsible for monitoring Iraq’s purchase of commodities and the delivery of goods. 
Security Council resolution 986, paragraph 8a(ii), required Iraq to submit a plan, approved by the Secretary General, to ensure equitable distribution of Iraq’s commodity purchases. The initial distribution plans focused on food and medicines while subsequent plans were expansive and covered 24 economic sectors, including electricity, oil, and telecommunications. The sanctions committee’s procedures for implementing Security Council resolution 986 stated that experts in the Secretariat were to examine each proposed Iraqi commodity contract, in particular the details of price and value, and to determine whether the contract items were on the distribution plan. OIP officials told the Defense Contract Audit Agency they performed very limited, if any, pricing review. They stated that no U.N. resolution tasked them with assessing the price reasonableness of the contracts and no contracts were rejected solely on the basis of price. However, OIP officials stated that, in a number of instances, they reported to the sanctions committee that commodity prices appeared high, but the committee did not cite pricing as a reason to place holds on the contracts. For example, in October 2001, OIP experts reported to the sanctions committee that the prices in a proposed contract between Iraq and the Al-Wasel and Babel Trading Company appeared high. However, the sanctions committee reviewed the data and approved the contract. In April 2004, the Treasury Department identified this company as a front company for the former regime. The United Nations also required all countries to freeze the assets of this company and transfer them to the Development Fund for Iraq in accordance with Security Council resolution 1483. The sanctions committee’s procedures for implementing resolution 986 stated that independent inspection agents would confirm the arrival of supplies in Iraq. OIP deployed about 78 U.N. contract monitors to verify shipments and authenticate the supplies for payment. 
OIP employees were able to visually inspect 7 to 10 percent of the approved deliveries. Security Council resolution 986 also requested the Secretary General to establish an escrow account for the Oil for Food program and to appoint independent and certified public accountants to audit the account. The Secretary General established an escrow account at BNP Paribas for the deposit of Iraqi oil revenues. The U.N. Board of Audit, a body of external public auditors, audited the account. The external audits focused on management issues related to the Oil for Food program and the financial condition of the Iraq account. U.N. auditors generally concluded that the Iraq account was fairly presented in accordance with U.N. financial standards. The reports stated that OIP was generally responsive to external audit recommendations. The external audits determined that oil prices were mostly in accordance with the fair market value of oil products to be shipped and checked to confirm that pricing was properly and consistently applied. They also determined that humanitarian and essential services supplies procured with oil funds generally met contract terms with some exceptions. U.N. external audit reports contained no findings of fraud during the program. The U.N. Office of Internal Oversight Services (OIOS) conducted internal audits of the Oil for Food program and reported the results to OIP’s executive director. OIOS officials stated that they have completed 55 audits and have 4 ongoing audits of the Oil for Food program. Overall, OIOS reported that OIP had made satisfactory progress in implementing most of its recommendations. We did not have access to individual OIOS audit reports except for an April 2003 report made publicly available in May 2004 that assessed the activities of the company contracted by the United Nations to authenticate goods coming into Iraq. 
It found that the contractor did not perform all required duties and did not adequately monitor goods coming into the northern areas of Iraq. We also reviewed 7 brief summaries of OIOS reports covering the Oil for Food program from July 1, 1996, through June 30, 2003. These summaries identified a variety of operational concerns involving procurement, inflated pricing and inventory controls, coordination, monitoring, and oversight. In one case, OIOS cited purchase prices for winter items for displaced persons in northern Iraq that were on average 61 percent higher than local vendor quotes obtained by OIOS. In another case, an OIOS review found that there was only limited coordination of program planning and insufficient review and independent assessment of project implementation activities. The sanctions committee was responsible for three key elements of the Oil for Food program: (1) monitoring implementation of the sanctions, (2) screening contracts to prevent the purchase of items that could have military uses, and (3) approving Iraq’s oil and commodity contracts. U.N. Security Council resolution 661 of 1990 directed all states to prevent Iraq from exporting products, including petroleum, into their territories. Paragraph 6 of resolution 661 established a sanctions committee to report to the Security Council on states’ compliance with the sanctions and to recommend actions regarding effective implementation. As early as June 1996, the Maritime Interception Force, a naval force of coalition partners including the United States and Great Britain, informed the sanctions committee that oil was being smuggled out of Iraq through Iranian territorial waters. In December 1996, Iran acknowledged the smuggling and reported that it had taken action. In October 1997, the sanctions committee was again informed about smuggling through Iranian waters. According to multiple sources, oil smuggling also occurred through Jordan, Turkey, Syria, and the Gulf. 
Smuggling was a major source of illicit revenue for the former Iraqi regime through 2002. A primary function of the sanctions committee was to review and approve contracts for items that could be used for military purposes. The United States conducted the most thorough review; about 60 U.S. government technical experts assessed each item in a contract to determine its potential military application. According to U.N. Secretariat data in 2002, the United States was responsible for about 90 percent of the holds placed on goods to be exported to Iraq. As of April 2002, about $5.1 billion worth of goods were being held for shipment to Iraq. According to OIP, no contracts were held solely on the basis of price. Under Security Council resolution 986 of 1995, and its implementing procedures, the sanctions committee was responsible for approving Iraq’s oil contracts, particularly to ensure that the contract price was fair, and for approving Iraq’s commodity contracts. The U.N.’s oil overseers reported in November 2000 that the oil prices proposed by Iraq appeared low and did not reflect the fair market value. According to a senior OIP official, the independent oil overseers also reported in December 2000 that purchasers of Iraqi oil had been asked to pay surcharges. In March 2001, the United States informed the sanctions committee about allegations that Iraqi government officials were receiving illegal surcharges on oil contracts and illicit commissions on commodity contracts. The sanctions committee attempted to address these allegations by implementing retroactive pricing for oil contracts in 2001. It is unclear what actions the sanctions committee took to respond to illicit commissions on commodity contracts. Due to increasing concern about the humanitarian situation in Iraq and pressure to expedite the review process, the Security Council passed resolution 1284 in December 1999 to direct the sanctions committee to accelerate the review process. 
Under fast-track procedures, the sanctions committee allowed OIP to approve contracts for food, medical supplies, and agricultural equipment (beginning in March 2000), water treatment and sanitation (August 2000), housing (February 2001), and electricity supplies (May 2001). A number of investigations and audits of the Oil for Food program are under way. These efforts may wish to further examine how the structure of the program enabled the Iraqi government to obtain illegal revenues, the role of member states in monitoring and enforcing the sanctions, actions taken to reduce oil smuggling, and the responsibilities and procedures for assessing price reasonableness in commodity contracts. Current or planned efforts include an inquiry initiated by the United Nations, an investigation and audit overseen by the Iraqi Board of Supreme Audit, and efforts undertaken by several U.S. congressional committees. Ongoing and planned investigations of the Oil for Food program provide an opportunity to better quantify the extent of corruption, determine the adequacy of internal controls, and identify ways to improve future humanitarian assistance programs conducted within an economic sanctions framework. Based on our work, we identified several areas that warrant further analysis. The scope of the Oil for Food program was extensive. The United Nations attempted to oversee a $67 billion program providing humanitarian and other assistance in 24 sectors to a country with 24 million people and borders 3,500 kilometers long. When the program was first proposed in 1991, the Secretary General considered having either the United Nations, an independent agent, or the Iraqi government negotiate oil and commodity contracts. The Secretary General concluded that the first two options were impractical and proposed that Iraq would negotiate the contracts and U.N. staff would work at Iraq’s oil ministry to ensure compliance. 
The final MOU between the Iraqi government and the United Nations granted control of contract negotiations to Iraq in recognition of its sovereignty. Investigations of the Oil for Food program should consider examining how the size and structure of the Oil for Food program enabled the Iraqi government to obtain illegal revenues through illicit surcharges and commissions. Under Security Council resolutions, all member states were responsible for enforcing the sanctions and the United Nations depended on states bordering Iraq to deter smuggling. National companies were required to register with their respective permanent missions to the United Nations prior to direct negotiations with the Iraqi government, but it is unclear what criteria the missions used to assess the qualifications of their companies. Issues that warrant further analysis include the role of member states in monitoring and enforcing the sanctions and the criteria countries used in registering national oil purchasers and commodity suppliers. Prior to the imposition of sanctions, Turkey was one of Iraq’s major trading partners. Total trade between the two countries was valued at $3 billion per year, and Turkey received about $1 billion each year by trucking goods to Iraq from Turkish ports. Jordan had also been a top trading partner; in 2001, it was the fifth largest exporter to Iraq and was the ninth largest importer of Iraqi commodities. Jordan and Iraq had annual trade protocols during the U.N. sanctions that allowed Iraq to sell heavily discounted oil to Jordan in exchange for up to $300 million in Jordanian goods. The sanctions committee noted the existence of the protocol but took no action. From November 2000 to March 2003, Iraq exported up to 200,000 barrels per day of oil through a Syrian pipeline in violation of U.N. sanctions. It is unclear what actions the sanctions committee or the United States took to stop the illegal exporting of Iraqi oil to Syria. 
Investigations should consider examining any actions that were taken to reduce Iraqi oil smuggling as well as the factors that may have precluded the sanctions committee from taking action. While sanctions committee procedures stated that the Secretariat was to examine each contract for price and value, OIP officials stated that no U.N. resolution tasked them with assessing the price reasonableness of the contracts. Although the sanctions committee was responsible for approving commodity contracts, it primarily screened contracts to prevent the purchase of items with potential military uses. In December 1999, U.N. Security Council resolution 1284 directed the sanctions committee to accelerate approval procedures for goods no longer subject to sanctions committee review, including food and equipment and supplies to support the health, agricultural, water treatment and sanitation, housing, and electricity sectors. It is unclear where the roles and responsibilities for assessing price reasonableness rested. Audits and other inquiries should determine which entities assessed the reasonableness of prices for commodity contracts that were negotiated between the Iraqi government and suppliers and what actions were taken on contracts with questionable pricing. These efforts should also examine how prices for commodities were assessed for reasonableness under fast-track procedures. Much of the information on surcharges on oil sales and illicit commissions on commodity contracts is with the ministries in Baghdad and national purchasers and suppliers. We did not have access to this data to verify the various allegations of corruption associated with these transactions. Subsequent investigations of the Oil for Food program should include a statistical sampling of these transactions to more accurately document the extent of corruption and the identities of companies and countries that engaged in illicit transactions. 
This information would provide a basis for restoring those assets to the Iraqi government. Subsequent evaluations and audits should also consider an analysis of the lessons learned from the Oil for Food program and how future humanitarian programs of this nature should be structured to ensure that funds are spent on intended beneficiaries and projects. For example, analysts may wish to review the codes of conduct developed for the CPA’s former Oil for Food coordination center and suppliers. In addition, U.N. specialized agencies implemented the program in the northern governorates while the program in central and southern Iraq was run by the central government in Baghdad. A comparison of these two approaches could provide insight on the extent to which the operations were transparent and the program delivered goods and services to the Iraqi people. The history of inadequate oversight and corruption in the Oil for Food program also raises questions about the Iraqi government’s ability to manage the import and distribution of Oil for Food commodities and the billions in international assistance expected to flow into the country. Iraqi ministries must address corruption in the Oil for Food program to help ensure that the remaining contracts are managed with transparent and accountable controls. Building these internal control and accountability measures into the operations of Iraqi ministries will also help safeguard the $18.4 billion in fiscal year 2004 U.S. reconstruction funds and the nearly $14 billion pledged by other countries. Several investigations into the Oil for Food program are under way. In April 2004, a U.N. inquiry was announced to examine allegations of corruption and misconduct within the United Nations Oil for Food program and its overall management of the humanitarian program. In addition, Iraq’s Board of Supreme Audit contracted with the accounting firm Ernst & Young to conduct an investigation of the program. Several U.S. 
congressional committees have also begun inquiries into U.N. management of the Oil for Food program and U.S. oversight through its role on the sanctions committee. The Independent Inquiry Committee, under the direction of former Federal Reserve Chairman Paul Volcker, began on April 21, 2004, with a U.N. Security Council resolution supporting the inquiry and the appointment of two additional high-level officials to oversee the investigation. On June 15, 2004, the Committee announced the appointment of its senior staff and the recruitment of additional staff, including attorneys, investigators, and accountants. The Committee plans to issue an interim report in the summer of 2004, followed by a final report in early 2005. According to the terms of reference, this investigation will collect and examine information relating to the administration and management of the Oil for Food program, including allegations of fraud and corruption on the part of U.N. staff and those entities that had contracts with the United Nations or the Iraqi government. The Committee intends to determine whether (1) procedures for processing and approving contracts, monitoring oil sales and deliveries, and purchasing and delivering humanitarian goods were violated; (2) U.N. officials, staff, or contractors engaged in illicit or corrupt activities; and (3) program accounts were maintained in accordance with U.N. financial regulations. The Independent Inquiry Committee, the Iraqi Board of Supreme Audit, and the CPA signed a memorandum of understanding to facilitate the Committee’s access to Oil for Food documents in Iraq. As part of its contract with the Iraqi Board of Supreme Audit to audit the Oil for Food program, the international accounting firm Ernst & Young is to identify and organize Iraqi records related to the Oil for Food program. 
In March 2004, the CPA authorized the Iraqi Board of Supreme Audit to conduct a full and independent audit, investigation, and accounting of the Oil for Food program and the disposition of Iraqi assets associated with the program. As of May 19, 2004, the CPA had authorized the expenditure of $20 million for this purpose, and the Board contracted with Ernst & Young to carry out the investigation. The Board is to release a final report to the interim Iraqi government and to the public with specific findings and recommendations. The CPA expected the report to address (1) the manner in which the program may or may not have been mismanaged, (2) the disposition of Iraqi contracts and assets on the program, (3) identification of individuals who may have benefited through improper disposition of program contracts and assets, (4) the current location and status of Iraqi assets that may have been diverted and recommendations on recovering these assets, and (5) possible criminal offenses. Several U.S. congressional committees and subcommittees are also in various stages of examining the Oil for Food program. In May 2004, the Senate Committee on Governmental Affairs, Permanent Subcommittee on Investigations, announced an investigation to examine allegations of improper conduct and whether such conduct may have negatively affected U.S. interests. The Subcommittee is particularly interested in the extent to which any misconduct took place within the United States and the involvement of U.S. citizens, residents, or businesses. In addition, the House International Relations Committee and the Subcommittee on National Security, Emerging Threats, and International Relations, House Committee on Government Reform, are investigating allegations of misconduct. Along with the Senate Permanent Subcommittee on Investigations and the Senate Committee on Foreign Relations, they have requested program documents from the State Department and United Nations. Mr. 
Chairman and Members of the Committee, this concludes my prepared statement. I will be happy to answer any questions you may have. For questions regarding this testimony, please call Joseph Christoff at (202) 512-8979. Other key contributors to this statement were Monica Brym, Tetsuo Miyabara, Audrey Solis, and Phillip Thomas. We used the following methodology to estimate the former Iraqi regime's illicit revenues from oil smuggling, surcharges on oil, and commissions from commodity contracts from 1997 through 2002: To estimate the amount of oil the Iraqi regime smuggled, we used Energy Information Administration (EIA) estimates of Iraqi oil production and subtracted oil sold under the Oil for Food program and domestic consumption. The remaining oil was smuggled through Turkey, the Persian Gulf, Jordan, and Syria (oil smuggling to Syria began in late 2000). We estimated the amount of oil smuggled to each destination based on information from and discussions with officials of EIA, Cambridge Energy Research Associates, the Middle East Economic Survey, and the private consulting firm Petroleum Finance. To estimate the proceeds from smuggled oil, we started with the price of oil sold under the program, discounted it by 9 percent for the difference in quality, and then discounted that price by a further 67 percent for smuggling to Jordan and by 33 percent for smuggling through Turkey, the Persian Gulf, and Syria. According to oil industry experts, these discounts are representative of the prices paid for smuggled oil. To estimate the amount Iraq earned from surcharges on oil, we multiplied the barrels of oil sold under the Oil for Food program from 1997 through 2002 by 25 cents per barrel. According to Security Council members, the surcharge varied, but Iraq tried to get as much as 50 cents per barrel. Industry experts also stated that the surcharge varied. 
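The volume-and-price arithmetic described above can be sketched in a few lines of code. This is only an illustrative sketch of the estimation steps as described in this statement; the function names and all numeric inputs in the example are hypothetical placeholders, not GAO's actual data. Only the 9 percent quality discount, the 67/33 percent route discounts, and the 25-cent surcharge come from the methodology in the text.

```python
# Illustrative sketch of GAO's estimation arithmetic. All inputs in the
# example below are hypothetical placeholders, not GAO's actual data.

def smuggled_barrels(production, program_sales, domestic_use):
    """Smuggled volume = total production minus Oil for Food sales
    and domestic consumption."""
    return production - program_sales - domestic_use

def smuggling_proceeds(barrels, program_price, route_discount):
    """Discount the program sale price by 9 percent for quality, then by
    the route-specific discount (0.67 for Jordan; 0.33 for Turkey, the
    Persian Gulf, and Syria)."""
    quality_adjusted = program_price * (1 - 0.09)
    return barrels * quality_adjusted * (1 - route_discount)

def surcharge_revenue(program_barrels, per_barrel=0.25):
    """Assume a flat 25-cent surcharge on each barrel sold under the
    program."""
    return program_barrels * per_barrel

# Hypothetical example: 100 million barrels produced, 60 million sold
# under the program, 20 million consumed domestically, $20 program price.
smuggled = smuggled_barrels(100e6, 60e6, 20e6)       # 20 million barrels
jordan_proceeds = smuggling_proceeds(smuggled, 20.0, 0.67)
surcharges = surcharge_revenue(60e6)                 # $15 million
```

Note that the two price discounts stack: the route discount is applied to the already quality-adjusted price, as the text describes.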
To estimate the commission from commodities, we multiplied Iraq's letters of credit for commodity purchases by 5 percent for 1997 through 1998 and by 10 percent for 1999 through 2002. According to Security Council members, the commission varied from 5 percent to 10 percent. This percentage was also confirmed in interviews conducted by U.S. officials with former Iraqi regime ministers of oil, finance, and trade and with Saddam Hussein's presidential advisors. GAO did not obtain source documents and records from the former regime about its smuggling, surcharges, and commissions. Our estimate of illicit revenues is therefore not a precise accounting number. Areas of uncertainty in our estimate include the following:
- GAO's estimate of the revenue from smuggled oil is less than the estimates of U.S. intelligence agencies. We used estimates of Iraqi oil production and domestic consumption for our calculations; U.S. intelligence agencies used other methods to estimate smuggling.
- GAO's estimate of revenue from oil surcharges is based on a surcharge of 25 cents per barrel from 1997 through 2002. However, the average surcharge could be lower. U.N. Security Council members and oil industry sources do not know when the surcharge began or ended or the precise amount of the surcharge. One oil industry expert stated that the surcharge was imposed at the beginning of the program but that the amount varied. Security Council members and the U.S. Treasury Department reported that surcharges ranged from 10 cents to 50 cents per barrel. As a test of reasonableness, GAO compared the price paid for oil under the Oil for Food program with a proxy oil price for the period 1997 through 2002. We found that for the entire period, the price of Iraqi oil was considerably below the proxy price. Oil purchasers would have had to pay below the market price to have a margin from which to pay the surcharge.
- GAO's estimate of the commission on commodities could be understated. 
We calculated commissions based on the commodity contracts for the 15 governorates in central and southern Iraq (known as the "59-percent account" because these governorates received this percentage of Oil for Food revenues). We excluded contracts for the three northern governorates (known as the "13-percent account"). However, the former Iraqi regime negotiated the food and medical contracts for the northern governorates, and the Defense Contract Audit Agency found that some of these contracts were potentially overpriced. The Defense Contract Audit Agency also found extra fees of between 10 and 20 percent on some contracts.
The chronology below summarizes key events and U.N. actions relating to sanctions against Iraq and the Oil for Food program:
- Iraqi forces invaded Kuwait. Resolution 660 condemned the invasion and demanded immediate withdrawal from Kuwait.
- Imposed economic sanctions against the Republic of Iraq. The resolution called for member states to prevent all commodity imports from Iraq and exports to Iraq, with the exception of supplies intended strictly for medical purposes and, in humanitarian circumstances, foodstuffs.
- President Bush ordered the deployment of thousands of U.S. forces to Saudi Arabia.
- Public Law 101-513, § 586C, prohibited the import of products from Iraq into the United States and the export of U.S. products to Iraq.
- The Iraq War Powers Resolution authorized the president to use "all necessary means" to compel Iraq to withdraw military forces from Kuwait.
- Operation Desert Storm was launched: the coalition operation was targeted to force Iraq to withdraw from Kuwait.
- Iraq announced acceptance of all relevant U.N. Security Council resolutions.
- U.N. Security Council Resolution 687 (Cease-Fire Resolution) mandated that Iraq respect the sovereignty of Kuwait and declare and destroy all ballistic missiles with a range of more than 150 kilometers as well as all weapons of mass destruction and production facilities. The U.N. Special Commission (UNSCOM) was charged with monitoring Iraqi disarmament as mandated by U.N. resolutions and with assisting the International Atomic Energy Agency in nuclear monitoring efforts.
- Proposed the creation of an Oil for Food program and authorized an escrow account to be established by the Secretary General. Iraq rejected the terms of this resolution.
- Second attempt to create an Oil for Food program. Iraq rejected the terms of this resolution.
- Authorized transferring money produced by any Iraqi oil transaction on or after August 6, 1990, which had been deposited into the escrow account, to the states or accounts concerned as long as the oil exports took place or until sanctions were lifted.
- Allowed Iraq to sell $1 billion worth of oil every 90 days. Proceeds were to be used to procure foodstuffs, medicine, and material and supplies for essential civilian needs. Resolution 986 was supplemented by several U.N. resolutions over the next 7 years that extended the Oil for Food program for different periods of time and increased the amount of exported oil and imported humanitarian goods.
- Established the export and import monitoring system for Iraq.
- Signed a memorandum of understanding allowing Iraq's export of oil to pay for food, medicine, and essential civilian supplies.
- Based on information provided by the Multinational Interception Force (MIF), communicated concerns about alleged smuggling of Iraqi petroleum products through Iranian territorial waters in violation of resolution 661 to the Security Council sanctions committee.
- Committee members asked the United States for more factual information about smuggling allegations, including the final destination and the nationality of the vessels involved.
- Provided briefing on the Iraqi oil smuggling allegations to the sanctions committee.
- Acknowledged that some vessels carrying illegal goods and oil to and from Iraq had been using the Iranian flag and territorial waters without authorization and that Iranian authorities had confiscated forged documents and manifests. 
- Representative agreed to provide the results of the investigations to the sanctions committee once they were available.
- Phase I of the Oil for Food program began.
- Extended the term of resolution 986 another 180 days (phase II).
- Authorized a special provision to allow Iraq to sell petroleum in a more favorable time frame.
- Brought the issue of Iraqi smuggling of petroleum products through Iranian territorial waters to the attention of the U.N. Security Council sanctions committee.
- The coordinator of the Multinational Interception Force (MIF) reported to the U.N. Security Council sanctions committee that since February 1997 there had been a dramatic increase in the number of ships smuggling petroleum from Iraq inside Iranian territorial waters.
- Extended the Oil for Food program another 180 days (phase III).
- Raised Iraq's export ceiling for oil to about $5.3 billion per 6-month phase (phase IV).
- Permitted Iraq to export additional oil in the 90 days from March 5, 1998, to compensate for delayed resumption of oil production and reduced oil prices.
- Authorized Iraq to buy $300 million worth of oil spare parts to reach the export ceiling of about $5.3 billion.
- Public Law 105-235, a joint resolution, found Iraq in unacceptable and material breach of its international obligations.
- Oct. 31, 1998, U.S. legislation (Iraq Liberation Act): Public Law 105-338, § 4, authorized the president to provide assistance to Iraqi democratic opposition organizations.
- Iraq announced it would terminate all forms of interaction with UNSCOM and that it would halt all UNSCOM activity inside Iraq.
- Renewed the Oil for Food program for 6 months beyond November 26 at the higher levels established by resolution 1153. The resolution included additional oil spare parts (phase V).
- Following Iraq's recurrent blocking of U.N. weapons inspectors, President Clinton ordered 4 days of air strikes against military and security targets in Iraq that contributed to Iraq's ability to produce, store, and maintain weapons of mass destruction and potential delivery systems.
- President Clinton provided the status of efforts to obtain Iraq's compliance with U.N. Security Council resolutions. He discussed the MIF report of oil smuggling out of Iraq and smuggling of other prohibited items into Iraq.
- Renewed the Oil for Food program another 6 months (phase VI).
- Permitted Iraq to export an additional $3.04 billion of oil to make up for revenue deficits in phases IV and V.
- Extended phase VI of the Oil for Food program for 2 weeks until December 4, 1999.
- Extended phase VI of the Oil for Food program for 1 week until December 11, 1999.
- Renewed the Oil for Food program another 6 months (phase VII). Abolished Iraq's export ceiling for the purchase of civilian goods. Eased restrictions on the flow of civilian goods to Iraq and streamlined the approval process for some oil industry spare parts. Also established the United Nations Monitoring, Verification and Inspection Commission (UNMOVIC).
- Increased the oil spare parts allocation from $300 million to $600 million under phases VI and VII.
- Renewed the Oil for Food program another 180 days until December 5, 2000 (phase VIII).
- Extended the Oil for Food program another 180 days (phase IX).
- Ambassador Cunningham acknowledged Iraq's illegal re-export of humanitarian supplies, oil smuggling, establishment of front companies, and payment of kickbacks to manipulate and gain from Oil for Food contracts. He also acknowledged that the United States had put holds on hundreds of Oil for Food contracts that posed dual-use concerns.
- Ambassador Cunningham addressed questions regarding allegations of surcharges on oil and smuggling. He acknowledged that oil industry representatives and other Security Council members had provided the United States anecdotal information about Iraqi surcharges on oil sales and that companies claimed they were asked to pay commissions on contracts.
- Extended the terms of resolution 1330 (phase IX) another 30 days.
- Renewed the Oil for Food program an additional 150 days until November 30, 2001 (phase X). The resolution stipulated that a new Goods Review List would be adopted and that relevant procedures would be subject to refinement.
- Renewed the Oil for Food program another 180 days (phase XI).
- UNMOVIC reviewed export contracts to ensure that they contained no items on a designated list of dual-use items known as the Goods Review List. The resolution also extended the program another 180 days (phase XII).
- MIF reported that there had been a significant reduction in illegal oil exports from Iraq by sea over the past year but noted that oil smuggling was continuing.
- Extended phase XII of the Oil for Food program another 9 days.
- Renewed the Oil for Food program another 180 days until June 3, 2003 (phase XIII). Approved changes to the list of goods subject to review by the sanctions committee.
- Chairman reported on a number of alleged sanctions violations noted in letters from several countries and in the media from February to November 2002. Alleged incidents involved Syria, India, Liberia, Jordan, Belarus, Switzerland, Lebanon, Ukraine, and the United Arab Emirates.
- Operation Iraqi Freedom was launched: the coalition operation, led by the United States, initiated hostilities in Iraq.
- Adjusted the Oil for Food program and gave the Secretary General authority for 45 days to facilitate the delivery and receipt of goods contracted by the Government of Iraq for the humanitarian needs of its people.
- Public Law 108-11, § 1503, authorized the President to suspend the application of any provision of the Iraq Sanctions Act of 1990.
- Extended the provisions of resolution 1472 until June 3, 2003. 
- End of major combat operations and beginning of post-war rebuilding efforts.
- Lifted civilian sanctions on Iraq and provided for the end of the Oil for Food program within 6 months, transferring responsibility for the administration of any remaining program activities to the Coalition Provisional Authority (CPA).
- Transferred administration of the Oil for Food program to the CPA.
- Responded to allegations of fraud by U.N. officials who were involved in the administration of the Oil for Food program and proposed that a special investigation be conducted by an independent panel.
- Supported the appointment of the independent high-level inquiry and called upon the CPA, Iraq, and member states to cooperate fully with the inquiry.
- The CPA transferred power to the interim Iraqi government.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Oil for Food program was established by the United Nations and Iraq in 1996 to address concerns about the humanitarian situation after international sanctions were imposed in 1990. The program allowed the Iraqi government to use the proceeds of its oil sales to pay for food, medicine, and infrastructure maintenance. The program appears to have helped the Iraqi people: from 1996 through 2001, the average daily food intake increased from 1,300 to 2,300 calories. From 1997 through 2002, Iraq sold more than $67 billion of oil through the program and issued $38 billion in letters of credit to purchase commodities. In this testimony, GAO (1) reports its estimates of the illegal revenue acquired by the former Iraqi regime in violation of U.N. sanctions and provides some observations on the administration of the program and (2) suggests areas for additional analysis and summarizes the status of several ongoing investigations. From 1997 through 2002, we estimate that the former Iraqi regime acquired $10.1 billion in illegal revenues: $5.7 billion from oil smuggled out of Iraq and $4.4 billion from surcharges on oil sales and illicit charges levied on suppliers exporting goods to Iraq through the Oil for Food program. The United Nations, through the Office of the Iraq Program (OIP) and the Security Council's Iraq sanctions committee, was responsible for overseeing the Oil for Food program. However, the Security Council allowed the Iraqi government, as a sovereign entity, to negotiate contracts directly with purchasers of Iraqi oil and suppliers of commodities. This structure was an important factor in enabling Iraq to levy illegal surcharges and commissions. OIP was responsible for examining Iraqi contracts for price and value, but it is unclear how it performed this function. The sanctions committee was responsible for monitoring oil smuggling, screening contracts for items that could have military uses, and approving oil and commodity contracts. 
The sanctions committee took action to stop illegal surcharges on oil, but it is unclear what actions it took on the commissions on commodity contracts. U.N. external audit reports contained no findings of program fraud. Summaries of internal audit reports provided to GAO pointed to some operational concerns in procurement, coordination, monitoring, and oversight. Ongoing investigations of the Oil for Food program may wish to further examine how the structure of the program enabled the Iraqi government to obtain illegal revenues, the role of member states in monitoring and enforcing the sanctions, actions taken to reduce oil smuggling, and the responsibilities and procedures for assessing price reasonableness in commodity contracts. Current or planned efforts include an inquiry initiated by the United Nations, an investigation and audit overseen by the Iraqi Board of Supreme Audit, and efforts undertaken by several U.S. congressional committees.
The bombing of the World Trade Center in New York City in 1993 and the Murrah federal building in Oklahoma in 1995 raised concerns about the vulnerability of the states to terrorist attacks. After the 1995 attack on the Murrah building, the President established the general U.S. policy, in PDD 39, to use all appropriate means to deter, defeat, and respond to all terrorist attacks. PDD 39 directs all federal departments and agencies to take measures to (1) reduce vulnerabilities to terrorism, (2) deter and respond to terrorism, and (3) develop effective capabilities to prevent and manage the consequences of terrorism. PDD 62 (May 1998) reaffirmed PDD 39 and set up an integrated program to increase the federal government's effectiveness in countering terrorist threats against the United States; it also clarified the roles and activities of many of the agencies responsible for combating terrorism. The Robert T. Stafford Disaster Relief and Emergency Assistance Act (P.L. 93-288, as amended) establishes the basis for federal assistance to state and local governments when they cannot adequately respond to a disaster such as a terrorist incident. After the President has declared a federal emergency, FEMA coordinates the responses of its own and as many as 27 other federal agencies under the Federal Response Plan. The plan outlines how the agencies will implement the Stafford Act and contains policies and procedures to guide the conduct of operations during a federal emergency. These operations include transporting food and potable water to the area, assisting with medical aid and temporary housing, and providing generators to keep hospitals and other essential facilities working. Under FEMA's Director, the Senior Advisor for Terrorism Preparedness, a position created in 2000, is tasked to coordinate FEMA's overall terrorism preparedness programs and activities, including budget strategy and formulation. 
In planning for consequence management, the primary FEMA units involved are the Directorates for Preparedness, Training, and Exercises and for Response and Recovery; the U.S. Fire Administration; and FEMA’s regional offices. The directorates and other units are responsible for executing the terrorism-related programs and activities and control the personnel and other resources. The Senior Advisor has no direct management authority over the resources of FEMA’s directorates and other units. FEMA is responsible for leading and coordinating with 27 federal agencies on consequence management activities. These agencies include the Departments of Defense, Justice (the FBI), Energy, and Health and Human Services, and the Environmental Protection Agency. FEMA also works with the states, territories, and communities to help them develop plans for consequence management of terrorist incidents and provides grants for training and exercises to help in preparing them to deal with such incidents. FEMA’s budget for terrorism-related activities has steadily increased over the past 3 years, from $17.6 million in 1999 to $28.5 million in 2000 and about $34.0 million in 2001. A major portion of this funding, about $20 million for 2001, is in the form of grants to the states and localities. When the President declared the Oklahoma City bombing a federal emergency, FEMA served effectively as the lead federal agency responsible for consequence management. FEMA established a Regional Operations Center within an hour of the explosion. The prior FEMA Director, James L. Witt, was on the scene the first day and Urban Search and Rescue Teams began arriving within 14 hours. Because the emergency was created by a terrorist attack, however, new and distinct challenges emerged. 
First, the incident combined a federal crime scene with a disaster area, and second, the swift and catastrophic nature of the bombing thrust FEMA into direct contact with local authorities, causing the agency to bypass many of the customary state channels. After FEMA had completed its response activities, it assessed its and others' actions to reflect lessons learned from the response to the bombing. The agency found that (1) unclear authority, roles, and responsibilities in the Federal Response Plan and other guidance impeded decision-making and response measures; (2) state and local response plans did not correspond to the Federal Response Plan, which affected operational coordination; and (3) almost all of the available Urban Search and Rescue Teams were used during the incident. FEMA responded to these lessons learned in several ways. To ensure that roles and responsibilities for managing the consequences of a terrorist incident are clear and to respond to PDD 39 requirements, FEMA, alone or in coordination with other federal agencies:
- updated the Federal Response Plan and added a Terrorism Incident Annex that includes better interagency guidance and describes federal, state, and local policies and the structure for coordinating management of the consequences of terrorist incidents;
- added to the Federal Response Plan four support annexes covering community relations, donations management, logistics management, and occupational health and safety, as well as an appendix, Overview of a Disaster Operation;
- developed a Concept of Operations Plan to guide the overall federal response to domestic terrorist incidents and describe actions federal agencies should take nationally and locally;
- established a better liaison between FEMA and local FBI offices and trained staff for liaison positions;
- developed terrorism preparedness annexes to support FEMA regions' response plans and provided updates of these plans to federal and state partners; and
- established a logistics and donations manager as part of the response structure.
To increase awareness of relevant changes to the Federal Response Plan and other guidance and policies affecting consequence management, FEMA:
- developed a planning guide to help state and local authorities update their emergency operations plans and develop terrorism response plans that more closely mirror the federal plan and other guidance in accordance with PDD 39; and
- updated training courses, for example, the Integrated Emergency Management Course, to disseminate current information on plans and response capabilities related to consequence management of terrorist incidents.
FEMA also provides program coordination and grants to promote the development of emergency management plans, including terrorism consequence management, at the state and local levels. Federal grants are used to encourage state and local recipients to improve their terrorism preparedness through planning, training, and exercises. Examples of activities supported by grants include the following:
- development of a comprehensive terrorism preparedness document for inclusion in state emergency operations plans;
- review of state and local emergency plans and procedures to ensure the incorporation of current FEMA and FBI planning guidance;
- state terrorism task force planning;
- development of comprehensive terrorism preparedness training;
- test and evaluation of state and local terrorism response plans through multiagency exercises; and
- distribution of terrorism preparedness handbooks and/or checklists to first responders at state and local levels.
Our analysis indicates that most of the states' emergency operations plans reflect awareness of terrorism preparedness and the federal support role. Figures 1 and 2 show, respectively, states with emergency operations plans that mirror the Federal Response Plan and states with plans that incorporate a section on terrorism preparedness. 
According to FEMA officials, each of the remaining states will likely complete similar updates to their plans within the next 12 months. To respond to the need for more Urban Search and Rescue Teams, FEMA has increased the number of teams from 12 at the time of the bombing to 28 in calendar year 2000. Each of these 28 teams is composed of 62 specialists drawn from 4 major functional elements: search, rescue, technical, and medicine. Search specialists use highly trained dogs to find victims under rubble, for example, and rescue specialists determine the best way to free the victims. Technical staff deal with engineering problems, hazardous materials, heavy rigging, and logistics. The medical staff consists of four medical specialists, who are often also firefighters, and two physicians, who are often emergency medicine experts. To ensure the preparedness of the states and other federal agencies to handle the consequences of terrorist incidents, FEMA has assessed the states' response capabilities, increased terrorism preparedness training courses, provided training grants, and coordinated extensively with responsible federal agencies on terrorism issues. To ensure that states are adequately prepared for a terrorist incident, PDD 39 tasked FEMA to assess the states' response capabilities. Initially, FEMA used the National Governors' Association to survey the states' capabilities. The Association's primary fact-gathering methodology was focus group discussions with emergency first responders from four metropolitan areas. This survey, which was completed in 1995, concluded that the states' and localities' capabilities could easily be overwhelmed by a terrorist incident. Since then, FEMA and other agencies have worked with state and local authorities to assess the needs of local first responders. 
In 1996, in hearings before the Senate Committee on Appropriations, FEMA's Director committed the agency to (1) developing national-level performance criteria to measure the capability of the states to perform in the areas of mitigation, preparedness, response, and recovery and (2) assessing the states' capabilities to effectively respond to disasters, including terrorist incidents. Subsequently, FEMA and the National Emergency Management Association jointly developed the Capability Assessment for Readiness process, and FEMA issued a report on its assessment in December 1997. In the report, FEMA concluded that the states had the basic capabilities to respond effectively to disasters but were not well prepared for a terrorist incident involving a weapon of mass destruction. The report also noted that FEMA's Chemical Stockpile Emergency Preparedness Program (CSEPP) and Radiological Emergency Preparedness (REP) Program provide emergency management performance standards that strengthen related state programs. FEMA's Terrorism Preparedness Implementation plan states that CSEPP and REP are also used to support the agency's terrorism-preparedness efforts. (Appendix I contains a discussion of attributes of these programs' exercises.) However, the report also identified two areas that required significant improvement: (1) planning and equipment for response to nuclear, biological, and chemical terrorist incidents and (2) coordination between state emergency management agencies and the private sector. FEMA expects to publish its fiscal year 2000 assessment report by April 2001. Since the Oklahoma City bombing, FEMA has made considerable progress in training its staff and those of other federal agencies, the states, and local entities to ensure their preparedness for a terrorist attack. The agency has developed several terrorism preparedness courses and incorporated terrorism preparedness into its emergency management curriculum. 
FEMA’s terrorism preparedness training funding, including grants to states and local communities, totaled $6 million in fiscal year 1998, $7.6 million in fiscal year 1999, and $10.4 million in fiscal year 2000. FEMA’s National Emergency Training Center, in Emmitsburg, Maryland, is a major provider of formal training related to consequence management. The Center offers resident training for its and other federal agencies’ personnel and provides course materials to state and local organizations. The Center includes the Emergency Management Institute and the United States Fire Administration’s National Fire Academy. The Institute serves as the national focal point for the development and delivery of emergency management training to enhance the capabilities of federal, state, and local government officials, volunteer organizations, and the private sector. Since the Institute focuses on disaster preparedness, its courses are provided to emergency managers and community-level policy officials. (Appendix II contains additional information on the Institute’s principal terrorism preparedness courses.) The National Fire Academy serves as the national focal point for fire-related and emergency management training activities. First responders from fire departments across the United States attend the Academy’s courses. FEMA uses its Integrated Emergency Management course to immerse senior public officials and emergency management personnel (see app. II, table 3) in an intense, simulated disaster environment. According to FEMA’s report on the Oklahoma City bombing, this course proved valuable to numerous Oklahoma City officials who had received the training in 1994. Furthermore, city officials praised the course trainers’ willingness to serve as on-site mentors to city decisionmakers during response and recovery operations after the bombing. 
After the Oklahoma City bombing incident, FEMA developed its first course specifically related to terrorism preparedness in 1996 (see table 1). This course, the Integrated Emergency Management Course: Consequences of Terrorism, incorporates all the core elements of the original Integrated Emergency Management Course, but focuses on managing terrorist incidents. Although the course was offered nine times in 2000, it is normally presented two to four times a year unless an agency other than FEMA (such as the Department of Justice) funds additional courses. FEMA performs many functions with other federal agencies and state and local officials to help prepare for managing the consequences of terrorist incidents. Chief among these functions are (1) coordination of key terrorism preparedness guidance and policy documents, (2) day-to-day coordination of operations and special events, and (3) membership in formal interagency groups and committees. FEMA and the agencies cited most prominently in PDD 39 (the Departments of Defense, Energy, and Health and Human Services and the Environmental Protection Agency) coordinate with the FBI on its Domestic Guidelines and on its Concept of Operations Plan. The FBI’s guidelines are a road map for government agencies’ mobilization, deployment, and use—under PDD 39—in response to a terrorist threat or incident. The FBI’s Concept of Operations Plan will guide how the federal government is structured to respond to domestic terrorism incidents. The agencies listed above are now doing a final review of the Plan before the FBI issues it as formal guidance. FEMA also developed the State and Local Guide 101 for All-Hazard Emergency Operations Planning (1996) for state and local emergency management agencies to use in developing and updating risk-based, all-hazard emergency operations plans. 
These plans are the basis for an effective response to any emergency and facilitate coordination with the federal government during catastrophic disasters that require implementation of the Federal Response Plan. The guide describes core functions such as communications, evacuation, mass care, health and medical services, and resource management, as well as unique planning considerations for earthquakes, hurricanes, flooding, and hazardous materials. A new component of State and Local Guide 101, Attachment G: Terrorism, is now being coordinated through the National Security Council’s Domestic Contingency Planning and Exercises Subgroup and the National Emergency Management Association, and with the International Association of Emergency Managers. It is intended to aid state and local planners in developing and maintaining an appendix to their emergency operations plans on incidents involving terrorists’ use of weapons of mass destruction. The attachment addresses various hazards, a concept of operations, organizational responsibilities, logistics, and administrative issues. FEMA expects to publish the attachment on March 30, 2001. Under the auspices of the National Security Council, FEMA and other agencies coordinate to provide the appropriate preparedness response at important events that may present an attractive target for terrorist attack. Through its active role in this process, FEMA has the opportunity to coordinate and practice with federal, state, and local agencies involved in consequence management. During the past 2 years, FEMA has participated in 17 special events, ranging from high-profile athletic competitions to international conferences (see table 2 for examples). FEMA is a member of numerous interagency groups related to preparedness for domestic terrorism. 
It participates in the National Security Council’s Weapons of Mass Destruction Preparedness Group and two of its subgroups—the Assistance to State and Local Authorities Group and the Contingency Planning and Exercises Group. FEMA maintains a formal liaison with the National Domestic Preparedness Office and supports the Domestic Preparedness Leadership Group and the State and Local Advisory Group. FEMA supports and coordinates with the Department of Justice on its programs for terrorism preparedness training activities, the state and local capabilities assessment project, and the equipment grant program. It also coordinates with and provides support to the Departments of Defense and Justice program managers on the Nunn-Lugar-Domenici Domestic Preparedness Program and participates in the Multi-Agency Task Force on Nunn-Lugar-Domenici Exercises, which develops policy for domestic preparedness exercises. FEMA also serves on the Secretary of Defense’s Weapons of Mass Destruction Advisory Panel, the FBI/Department of State’s Interagency Working Group on Domestic/International Counter Terrorism Exercises, and the national and regional response teams concerned with hazardous material and oil spills. FEMA exercises an active leadership role in terrorism consequence management planning. At the national level, it coordinates federal response planning through the Emergency Support Function Leaders Group, the Catastrophic Disaster Response Group (comprising the 27 signatories of the Federal Response Plan), and the Concept Plan Working Group. FEMA issues the National Exercise Schedule after compiling and coordinating information from federal departments and agencies with emergency management responsibilities. In coordination with applicable federal departments and agencies, FEMA also assessed the capabilities of federal agencies to provide consequence management in an incident involving weapons of mass destruction. 
FEMA and the other agencies identified key critical areas that needed to be addressed, including the need for baseline information on capabilities; combined federal, state, and local planning; and timely federal augmentation of local authorities. The overall results of this assessment were reported in 1997. At the regional level, FEMA regional offices coordinate consequence management planning through Regional Interagency Steering Committees. These Committees are composed of regional representatives from essential response agencies and are responsible for coordinating regional response plans with the Federal Response Plan. Memorandums of understanding between each state and its FEMA regional office are supplemented by the regional response plans. PDD 39 requires FEMA to ensure that states’ terrorism response preparedness plans are adequate and tested, and the agency has made progress in meeting this requirement. Through FEMA’s and other agencies’ efforts, the types, numbers, and complexity of terrorism preparedness exercises to test states’ response plans have increased significantly over the past 5 years (see fig. 3). FEMA provides grants to the states and six U.S. jurisdictions to help them develop and test their plans. For example, FEMA sponsored 22 of the 28 exercises conducted in the state of Washington during 1996-2000. These exercises employed chemical, biological, radiological, nuclear, conventional high explosive, and combination threat scenarios while highlighting crisis and consequence management activities. In tabletop exercises, participants discuss how their agency or unit might react to a scenario or series of scenarios. These exercises emphasize higher level policy and procedural issues and frequently include more senior-level agency officials. There is no actual deployment of personnel or equipment for tabletop exercises; rather, they are held in a classroom-type setting. 
Functional exercises are not conducted solely in a classroom environment and generally test an operational function, such as an evaluation of interagency emergency operations capability and response. Full-scale exercises, which are primarily conducted in the field, evaluate operations over an extended period. For field exercises, personnel and their equipment are actually deployed to a field setting where they practice tactics, techniques, and procedures that would be used in a real incident; thus, they are the most realistic of the exercises. During 1996-2000, FEMA led or co-led 19 percent of the terrorism preparedness exercises in which it participated. Most of the exercises (70 percent) were of the tabletop type; 30 percent were either functional or full-scale. Figure 4 reflects the focus of the exercises. In May 2000, in responding to a congressional mandate that a national combating terrorism field exercise be conducted, FEMA joined with the Department of Justice to sponsor TOPOFF (top officials) 2000. TOPOFF 2000 was a large-scale, “no-notice” exercise that tested the plans, policies, procedures, systems, and facilities of federal, state, and local organizations, including the American Red Cross, to assess the nation’s crisis and consequence management capability. In Denver, Colorado, the exercise involved a biological weapons incident, and in Portsmouth, New Hampshire, the exercise involved a chemical incident. In addition, NCR 2000 (National Capital Region), a separate but concurrent exercise, was a no-notice exercise of an incident that involved simulated mass casualties and highlighted the use of radiological devices. (Fig. 5 shows a decontamination team during the exercise.) NCR 2000 consisted of previously planned exercises that complemented the TOPOFF 2000 activities but did not involve agencies’ top officials. An assessment of the benefits of these exercises was under way but not available at the time of our review. 
During the last 5 years, FEMA has also conducted a series of functional exercises for community-based public officials and emergency personnel as part of its Integrated Emergency Management Course: Consequences of Terrorism. Through the simulation of a realistic crisis scenario, participants are exposed to an increasingly complex and stressful situation within a structured learning environment. The course culminates in an emergency exercise designed to test leadership, knowledge, awareness, and interpersonal skills. Figure 6 shows dispatchers participating in an exercise during the course at the Mount Weather Emergency Assistance Center. (See app. II for additional information on the course.) We provided a draft of this report to FEMA for its review and comment. FEMA agreed with the report’s characterization of its terrorism-related activities and provided technical comments for our consideration. We incorporated technical comments as appropriate. A copy of FEMA’s letter is included in appendix III. To determine the extent FEMA has incorporated lessons learned from its response to the Oklahoma City bombing incident, we reviewed FEMA’s after-action report and the after-action report prepared by the Oklahoma Department of Civil Emergency Management. To determine the actions taken to address the lessons learned, we interviewed senior FEMA officials and officials in the Preparedness and Response and Recovery Directorates, using a survey instrument keyed to the 3 broad and 22 specific recommendations contained in the FEMA report. FEMA’s Region VI Director, who coordinated federal operations after the bombing, provided a written response to our questions. We also identified and reviewed several actions that FEMA and its partner federal agencies implemented to improve its response to terrorist incidents, for example, the revisions to the Federal Response Plan, the addition of a Terrorism Incident Annex, and improvements to the terrorism training program. 
We also surveyed FEMA’s regions and the states to determine whether the states’ and localities’ emergency operations plans are current, mirror the Federal Response Plan, and incorporate a section on terrorism. To determine the extent to which FEMA has ensured the preparedness of states and federal agencies to respond to terrorist incidents, we reviewed our prior work on combating terrorism, FEMA’s strategic plan, annual performance plans and reports, and the Terrorism Preparedness Strategic Plan. We also reviewed PDDs 39 and 62 and discussed their requirements with top FEMA officials relative to the Federal Response Plan and its Terrorism Incident Annex, FEMA’s budget for consequence management, the State and Local Guide for All-Hazard Emergency Operations Planning and its draft section on unique planning considerations for terrorism incidents, special events’ operational plans, and the Capability Assessment for Readiness report. We also reviewed FEMA’s terrorism grants program, including several state grant proposals and reports. To determine progress in the terrorism preparedness training since the Oklahoma City bombing, we visited and interviewed senior agency officials at the National Emergency Training Center, including the Emergency Management Institute and the National Fire Academy, in Emmitsburg, Maryland. To assess the dispersion and density of FEMA’s training program coverage, we used a geographic information systems program to map students’ city or zip codes for three selected courses. To assess FEMA’s progress in ensuring that states’ response plans are adequate and tested, we reviewed our prior work on terrorism preparedness exercises. We analyzed the numbers, types, and threat scenarios of terrorism exercises conducted in the states since 1995. We also discussed the nature, scope, and extent of the terrorism exercise program with several state program managers for the emergency management of terrorist incidents and exercise directors. 
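The zip-code mapping step described above can be sketched in a few lines. The lookup table, course roster, and helper function below are hypothetical stand-ins, not the geographic information systems program GAO actually used; a real GIS tool would draw on a full gazetteer rather than a hand-coded table.

```python
from collections import Counter

# Hypothetical ZIP-to-coordinate lookup; all data are illustrative,
# not GAO's actual student records.
ZIP_COORDS = {
    "21727": (39.70, -77.33),   # Emmitsburg, MD (National Emergency Training Center)
    "73102": (35.47, -97.52),   # Oklahoma City, OK
    "80202": (39.75, -104.99),  # Denver, CO
}

def attendance_density(student_zips):
    """Tally students per ZIP code and attach coordinates for plotting."""
    counts = Counter(student_zips)
    return {
        z: {"students": n, "coords": ZIP_COORDS.get(z)}
        for z, n in counts.items()
    }

# One record per student; a ZIP missing from the lookup maps to coords=None.
density = attendance_density(["73102", "73102", "80202", "21727", "99999"])
print(density["73102"]["students"])  # prints 2
```

Plotting the resulting counts at their coordinates yields the kind of dispersion and density maps referenced in figures 9-11.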
We interviewed and obtained exercise program data from officials at FEMA headquarters. During our visit to the National Emergency Training Center, we observed a terrorism consequence management exercise conducted as a part of FEMA’s Integrated Emergency Management Course: Consequences of Terrorism. We also discussed the course and exercise with some of its participants. We also examined policies, program plans, guidelines, and handbooks; exercise plans and reports; and training course materials. We attended NCR 2000 controller/observer training and observed TOPOFF 2000 and NCR 2000 exercise operations in the FEMA emergency operations center and the Catastrophic Disaster Response Group. We performed our work from March through December 2000 in accordance with generally accepted government auditing standards. Unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to appropriate congressional committees and the federal agencies discussed in this report. We will also make copies available to other interested parties upon request. If you have any questions about this report, please contact me at (202) 512-6020. GAO contacts and staff acknowledgments to this report are listed in appendix IV. The Chemical Stockpile Emergency Preparedness Program (CSEPP) and the Radiological Emergency Preparedness (REP) Program are Federal Emergency Management Agency (FEMA) programs that conduct consequence management exercises. CSEPP and REP exercises (1) have clearly defined objectives, (2) are resourced with both headquarters and field staff involvement, (3) have consistent schedules and assessment programs, and (4) build on lessons learned through after-action reporting. CSEPP and REP cover 10 and 32 states, respectively, and together conduct about 40 exercises per year. 
In 1985, Congress directed the Department of Defense to dispose of its premixed (i.e., lethal unitary) chemical agents and munitions while providing “maximum protection for the environment, the general public and the personnel involved in the destruction of lethal chemical agents and munitions … .” Ten states (8 with storage facilities) and 40 counties are involved. In response to congressional direction, the Army sought funds to support a site-specific emergency planning program for communities located near the bases within those 10 states that could be affected by the release of chemicals during storage or destruction. Because the Army had little experience dealing with state and local emergency management authorities and possessed no infrastructure to manage the program, it looked for support from other federal agencies, specifically FEMA, to help meet the mandate. Therefore, FEMA joined the Army in implementing CSEPP through a Memorandum of Understanding signed in August 1988. CSEPP’s goal is to improve preparedness to protect the people of these communities in the event of an accident involving U.S. stockpiles of obsolete chemical munitions. The Memorandum of Understanding identified the specific responsibilities of the Army and FEMA, defining areas of expertise and outlining where cooperation would result in a more efficient use of personnel and resources. FEMA is responsible for developing preparedness plans, upgrading response capabilities, and conducting training for communities located near the Army bases. Local and state emergency services, along with public health, environmental, fire and rescue, law enforcement, and medical service agencies, have major roles, as do elected and appointed officials. The Army and FEMA provide funding, training, guidance, technical support, and expertise. Other federal agencies, including the Environmental Protection Agency and the Department of Health and Human Services, also lend their expertise in specific areas. 
CSEPP provides planning, training, equipment, emergency operations centers, command and control systems, personnel, cooperative agreement funds, exercises, and more. FEMA administers the local community portion of the program primarily through its regional offices. Each region has a CSEPP program manager. FEMA serves as CSEPP exercise co-director in each region and takes the lead in planning, conducting, evaluating, reporting, and tracking identified findings. CSEPP funds pay for over 200 staff at the state and county levels, including planners, trainers, health and automation experts, and logistical personnel. Comprehensive planning guidance is contained in FEMA’s Planning Guidance for the Chemical Stockpile Emergency Preparedness Program. The CSEPP exercise program was established to test local, installation, and state emergency operations plans and the jurisdictions’ capabilities to implement those plans. The program is governed by the Exercise Policy and Guidance for the Chemical Stockpile Emergency Preparedness Program. Exercises are generally conducted on an annual basis at each location. Through 1999, 62 CSEPP exercises had been conducted. For many of the state and local jurisdictions, CSEPP’s comprehensive, multijurisdictional exercise program was a new concept. Before CSEPP, communities exercised their emergency preparedness capabilities; however, exercises were generally focused on first responder fire or hazardous materials communities. Thus, multijurisdictional exercises were the exception, rather than the norm. CSEPP included two types of exercises, the Federally Managed Exercise and the Alternate Year Exercise. Localities may conduct additional exercises. The Federally Managed Exercise is a mandatory, federally evaluated readiness assessment of a community’s full capabilities to respond to a chemical stockpile accident. This exercise tests the entire emergency response effort and evaluates interaction of all components. 
It involves mobilization of emergency service and response agencies, activation of communications centers and emergency facilities, such as emergency operations centers and command posts, and field play. An Alternate Year Exercise is used by a community to train participants, evaluate emergency operations plans, evaluate procedures for new equipment or resources, validate corrections to outstanding findings, and address other issues. A community may request varying levels of federal support or management. Many lessons have been learned from the exercises. For instance, FEMA has learned that communication between installations and nearby communities has improved considerably over the years and that assessing threats and meeting notification times for nearby communities have been difficult. The information gained from post-exercise reports allows planners to focus exercises on areas requiring greater attention. Every exercise evaluation ends with a meeting in which exercise evaluators provide immediate feedback to the community. Further, a 45-day review and comment period is provided prior to finalization and distribution of the exercise report, which consists of a plan negotiated by regional, state, and local officials to correct deficiencies and identify responsibility for corrective actions. Problems noted during exercises are addressed in future planning and training activities. FEMA is the lead federal agency for planning and preparedness for all types of peacetime radiological emergencies, including accidents at commercial nuclear power plants. Dating back to the incident at Three Mile Island in 1979, FEMA has worked with state and local governments to ensure that emergency preparedness plans are in place for U.S. commercial nuclear power plants. FEMA issues policy and guidance to assist state and local governments in developing and implementing their radiological emergency response plans and procedures. 
Much of this FEMA guidance is developed with the assistance of the Federal Radiological Preparedness Coordinating Committee and its member agencies. REP has a goal of ensuring that the public health and safety of residents living around commercial nuclear power plants are adequately protected in the event of an accident. The program’s responsibilities encompass only “off-site” activities—that is, state and local government emergency preparedness activities that take place beyond the nuclear power plant’s boundaries. On-site activities continue to be the responsibility of the Nuclear Regulatory Commission. FEMA’s responsibilities under the REP Program include reviewing and evaluating off-site radiological emergency response plans developed by state and local governments; evaluating exercises conducted by state and local governments to determine whether plans are adequate and can be implemented; preparing findings and making determinations on the adequacy of off-site emergency planning and preparedness and submitting them to the Nuclear Regulatory Commission; responding to requests by the Nuclear Regulatory Commission under the Memorandum of Understanding between the Commission and FEMA dated June 17, 1993; coordinating the activities of more than a dozen federal agencies with responsibilities in the radiological emergency planning process; and chairing the Federal Radiological Preparedness Coordinating Committee and the Regional Assistance Committee. REP evaluates the adequacy of state and local emergency preparedness plans during regular exercises. REP exercises are designed to test the capability of off-site response organizations to protect the public health and safety through the implementation of their emergency response plans and procedures under simulated accident conditions. FEMA’s Radiological Emergency Preparedness Exercise Manual and the Radiological Emergency Preparedness Exercise Evaluation Methodology serve as the principal documents that FEMA uses in all aspects of REP exercises. 
According to FEMA officials, these documents have been valuable tools for assessing the adequacy and implementation of state and local governments’ radiological emergency preparedness plans and procedures. The exercise manual provides guidance for planning and conducting REP exercises. It provides basic guidance for the interpretation and application of planning standards and evaluation criteria. These standards and criteria are included in 33 REP objectives that are to be demonstrated by the off-site response organizations at the biennial REP exercises. The exercise objectives address the off-site response organization’s capability to carry out specific radiological emergency functions such as communications, mobilization of emergency response personnel, accident assessment, protective action decision-making and implementation, public alerting and notification, and evacuee monitoring and decontamination. Similarly, the exercise evaluation methodology assists FEMA and other federal agencies in the uniform and consistent documentation of the performance of the off-site response organizations during REP exercises. The REP methodology document contains a set of 33 multipage evaluation forms, 1 for each of the 33 REP objectives delineated in the exercise manual. Each evaluation form consists of a series of short questions or prompts (points of review) for each REP objective to facilitate the exercise evaluator’s systematic collection and documentation of essential data and information required by FEMA. This information provides the basis for FEMA findings and determinations on the adequacy of plans and preparedness that are submitted to the Nuclear Regulatory Commission for consideration in licensing decisions. Figures 7 and 8 show the level of program funding for the FEMA exercise program and provide indicators for the level of effort required for an exercise program. 
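As a rough illustration of how such a points-of-review form might be modeled in software: each of the 33 REP objectives carries a set of short prompts an evaluator checks off during an exercise. The class, objective name, and prompts below are invented for the sketch and are not FEMA's actual forms.

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationForm:
    """One evaluator's form for a single exercise objective (illustrative)."""
    objective: str
    # prompt -> True/False once observed, None while still pending
    points_of_review: dict = field(default_factory=dict)

    def record(self, prompt, observed):
        """Document the outcome of one point of review."""
        self.points_of_review[prompt] = observed

    def is_complete(self):
        """A form is complete once every point of review has been observed."""
        return all(v is not None for v in self.points_of_review.values())

form = EvaluationForm("Public alerting and notification")
form.record("Sirens activated within required time?", True)
form.record("EAS message broadcast?", None)  # not yet observed during play
```

The `is_complete` check mirrors the manual's aim of uniform, systematic documentation: findings are only forwarded once every prompt on the form has been addressed.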
FEMA has developed and expanded a terrorism preparedness curriculum involving several of its organizations. FEMA’s Emergency Management Institute, which delivers numerous all-hazards emergency response and related courses, also delivers several courses that focus on the implications of terrorism incidents for emergency management. Similarly, FEMA’s National Fire Academy, part of the United States Fire Administration, has developed a series of courses addressing emergency response for terrorism incidents. These courses are delivered primarily to fire and rescue first responders and to incident commanders. As part of its all-hazards emergency response and recovery curriculum, the Emergency Management Institute has developed and delivered numerous emergency response, incident command, and related courses. These courses are offered to federal, state, and local organizations and personnel. The Institute also offers a number of courses that incorporate terrorism preparedness elements. Some of these courses are focused on the Community Emergency Response Team, Radiological Emergency Response Operations, Incident Command System, exercise design, and Mass Fatalities Incident Response. FEMA has also developed a course, Terrorism and Emergency Management, as part of its Higher Education Project. Through the National Fire Academy, FEMA provides several courses in the Emergency Response to Terrorism curriculum. The Institute delivered its first terrorism preparedness course, the Integrated Emergency Management Course: Consequences of Terrorism, in 1996. Since then, the Institute has incorporated terrorism preparedness in its courses as part of the all-hazards approach. Following are other terrorism preparedness courses developed and offered by the Institute: Emergency Response to Criminal and Terrorist Incidents. A 1-day course that focuses on the interface between law enforcement authorities and emergency management system personnel. 
It addresses topics such as lifesaving and evidence preservation. This course can be taught by local officials using Institute materials. Senior Officials Workshop on Terrorism. A 1-day course that addresses special planning and policy considerations related to terrorism preparedness. The workshop is conducted on location, with a 3-hour instructional module followed by a 3-hour exercise. The target audience is the mayor and other senior management officials. (Fig. 9 highlights the locations where officials have received this training.) Weapons of Mass Destruction Course. A series of facilitator-led courses intended to improve the ability of senior local government officials to manage and respond to mass casualty terrorism incidents involving the use of weapons of mass destruction. Each course in the series incorporates the same five objectives, with a different weapons of mass destruction scenario introduced during each course. The scenarios include incidents involving nuclear, radiological, chemical, and biological agents or devices. (Fig. 10 shows where this course was given during 1996-June 2000.) This exercise-based course focuses on preparing local community officials who must respond to the consequences of a terrorist act. The Integrated Emergency Management Course: Consequences of Terrorism is presented at the Institute and on location. Two versions are offered based on the audience. A general iteration is presented to local officials from different venues, while a more tailored program is presented to officials from the same city or community. Table 3 provides a nominal list of the participants for the tailored course. Prior to presenting the tailored version on site, the Institute sends an advance team to the receiving location to review its Emergency Operations Plan and design the exercise phase based on the actual environment. 
Classroom instruction, planning sessions, and exercises are intended to allow for structured decision-making in a realistic environment. Special emphasis is placed on the fact that the disaster area is also a crime scene. In addition to the actual exercise of plans and procedures, participants’ skills and abilities are tested. As shown in figure 11, the course has reached a wide audience throughout the nation. To facilitate its training program, FEMA has increased the use of independent study courses and the Internet. FEMA has also implemented a satellite-based distance learning system, the Emergency Education NETwork, that can provide interactive training programs to communities nationwide. The United States Fire Administration is responsible for numerous emergency management activities, including disaster planning, community preparedness, hazard mitigation, and training. In addition to its more traditional role, the Fire Administration is also an active participant in the preparation for and fight against terrorism. The Fire Administration participates as an active member of the FEMA federal response team and its staff members support many of the Federal Response Plan activities. The National Fire Academy, part of the Fire Administration, works to enhance the ability of fire and emergency services and allied professionals to deal more effectively with fire and related emergencies. Along with its federal partners and response shareholders, the Academy has developed a series of courses for delivery to first fire and rescue responders. The Academy has a number of course delivery systems. On the Emmitsburg, Maryland, campus, the Academy conducts specialized training courses and national-level advanced management programs. The Academy also delivers courses throughout the nation in cooperation with state and local fire training organizations and local colleges and universities. 
Students can attend courses within their geographical regions through the Academy’s off-campus, Regional Delivery Program. Through a cooperative working relationship with state and local fire training systems, the Academy’s Train-the-Trainer Program provides expanded opportunities for fire service personnel to participate in Academy courses. Personnel of the four branches of the armed services also participate in this program at the state and local level. The Academy began developing its initial Emergency Response to Terrorism courses for firefighters in fiscal year 1996 and delivered its initial course in fiscal year 1997. The numbers of courses have steadily increased. Currently, seven different Emergency Response to Terrorism courses are offered (see table 4). According to FEMA officials, other courses are under development. In addition to those named above, Nadine Furr, Jay Willer, and Judy Clausen made key contributions to this report. 
GAO reviewed the Federal Emergency Management Agency's (FEMA) actions to improve its capabilities to respond to terrorist incidents based on its response to lessons learned from the Oklahoma City bombing, requirements in Presidential Decision Directives 39 and 62, and its own guidance. Specifically, GAO determined the extent to which FEMA has (1) incorporated the lessons learned from the aftermath of the Oklahoma City bombing, (2) ensured the preparedness of states and federal agencies to respond to terrorist incidents, and (3) ensured that states' plans are tested through exercises. GAO found that FEMA has (1) made across-the-board improvements in those areas identified as needing action after the Oklahoma City bombing, (2) updated the Federal Response Plan to address how federal agencies, states, and localities would work together to respond to an act of terrorism, and (3) assessed states' capabilities for consequence management in 1995 and set up a system to continue monitoring those capabilities.
GPO’s mission includes both printing government documents and disseminating them to the public. Under 44 U.S.C. 501, it is the principal agent for printing for the federal government. All printing for the Congress, the executive branch, and the judiciary—except for the Supreme Court—is to be done or contracted by GPO except for authorized exemptions. The Superintendent of Documents, who heads GPO’s Information Dissemination organization, disseminates these government products to the public through a system of 1,200 depository libraries nationwide (the Federal Depository Library Program), GPO’s Web site (GPO Access), telephone and fax ordering, an online ordering site, and its bookstore in Washington, D.C. The Superintendent of Documents is also responsible for classification and bibliographic control of tangible and electronic government publications. Printing and related services. In providing printing and binding services to the government, GPO generally dedicates its in-house printing equipment to congressional printing, contracting out most printing for the executive branch. Table 1 shows the costs of these services in fiscal year 2003, as well as the source of these printing services. Printing and binding for the Congress are funded by appropriations; in fiscal year 2004, this appropriation was $90.6 million, and the amount requested for fiscal year 2005 is $88.8 million. Documents printed for the Congress include the Congressional Record, hearing transcripts, bills, resolutions, amendments, and committee reports, among other things. GPO also provides publishing support staff to the Congress; these support staff mainly perform print preparation activities, such as typing, scanning, proofreading, and preparation of electronic data for transmission to GPO. 
In addition, GPO provides electronic copies of the Congressional Record and other documents to the Congress, the public, and the depository libraries in accordance with the Government Printing Office Electronic Information Access Enhancement Act of 1993. GPO generally provides printing services to federal agencies through contracting. GPO procures about 83 percent of printing for federal agencies from private contractors and does the remaining 17 percent at its own plant facilities. Most of the procured printing jobs (85 percent for the period from June 2002 to May 2003) were for under $2,500 each. There is no appropriation to cover federal agency printing services. Instead, GPO levies a service charge to federal agency customers of its procurement services. The service charge is GPO’s only authorized source of funds to pay for the services it provides to agencies. The service charge is intended to cover the cost of specialized printing procurement services that GPO provides to agencies. These services include developing printing specifications and providing quality assurance functions, both of which require printing expertise that agencies often do not have. Procuring printing is more specialized than general procurement, because all printing jobs are custom: that is, printing cannot be bought “off the shelf,” like furniture or office supplies. Developing printing specifications requires specialized knowledge of paper and ink qualities, printing presses, and printing processes, for example. Besides printing, GPO provides a range of related services to agencies, including, for example, CD-ROM development and production, archiving/storage, converting products to electronic format, Web hosting, and Web page design and development. Dissemination of government information. The Superintendent of Documents is responsible for the acquisition, classification, dissemination, and bibliographic control of tangible and electronic government publications. 
Regardless of the printing source, Title 44 requires that federal agencies make all their publications available to the Superintendent of Documents for cataloging and distribution. The Superintendent of Documents manages a number of programs related to distribution, including the Federal Depository Library Program (FDLP), which designates libraries across the country to receive copies of government publications for public use. Generally, documents distributed to the libraries are those that contain information regarding U.S. government activities or are important reference publications. GPO evaluates documents to determine whether they should be disseminated to the depository libraries. When documents are printed through GPO, it evaluates them at the time of printing; if documents are not printed through GPO, Title 44 requires agencies to notify it of these documents, so that it can evaluate them and arrange to receive any copies needed for distribution. A relatively small percentage of the items printed through GPO for the executive branch are designated as depository items. Another distribution program under the Superintendent of Documents is the Document Sales Service, which purchases, warehouses, sells, and distributes government documents. Publications are sold by mail, telephone, and fax; through GPO’s online bookstore; and at its bookstore in Washington, D.C. The Superintendent of Documents is also responsible for GPO’s Web site, GPO Access, which is one mechanism for electronic dissemination of government documents to the public through links to over 240,000 individual titles on GPO’s servers and other federal Web sites. More than 1.6 billion documents have been retrieved by the public from GPO Access since August 1994; almost 372 million downloads of government information from GPO Access were made in fiscal year 2002 alone. About two-thirds of new FDLP titles are available online. 
Current industry trends show that the total volume of printed material has been declining for the past few years and that this trend is expected to continue. A major factor in this declining volume is the use of electronic media options. More organizations are creating electronic documents for dissemination or publishing their information directly to the Web. The reason for the switch to electronic publishing and dissemination is that once a document is created electronically, the costs associated with reproducing and distributing paper copies of it are greater than the costs of providing online access to it. Therefore, many organizations are making information available electronically and printing fewer documents, moving away from print-centric processes. The move to electronic dissemination is the latest phase in the electronic publishing revolution that has transformed the printing industry in recent decades. This revolution was driven by the development of increasingly sophisticated electronic publishing (or "desktop publishing") software, run on personal computers, that allows users to design documents including both images and text, and the parallel development of electronic laser printer/copier technology with capabilities that approach those of high-end presses. These tools allow users to produce documents that formerly would have required hand work, professional printing expertise, and large printing systems. These technologies have brought major economic and industrial changes to the printing industry. As electronic publishing software becomes increasingly sophisticated, user-friendly, and reliable, it approaches the ideal of the print customer being able to produce files that can be reproduced on the press with little or no intervention by printing professionals. As the printing process is simplified, the customer can take responsibility for more of the work. 
Thus, the technologies diminish the value that printing organizations like GPO add to the printing process, particularly for simpler printing jobs. Nonetheless, professional expertise remains critical for many aspects of printing, and for many print jobs it is still not possible to bypass the printing professional altogether. The advent of the Web and the Internet, however, permits the instantaneous distribution of the electronic documents produced by the new publishing processes, breaking the link between printing and dissemination. As the Web has become virtually ubiquitous, the electronic dissemination of information becomes not only practical, but more economical than dissemination on paper. As a result, many organizations are changing from a print to an electronic focus. In the early stages of the electronic publishing revolution, organizations tended to prepare a document for printing and then convert the print layout to electronic form—in other words, focusing on printing rather than dissemination. Increasingly, however, organizations are changing their focus to providing information—not necessarily on paper. Today an organization may employ computers to generate plates used for printing as well as electronic files for dissemination. Tomorrow, the organization may create only an electronic representation of the information, which can be disseminated through various media, such as Web sites. A printed version would be produced only upon request. GPO's Public Printer—confirmed by the Senate in November 2002—has initiated efforts to modernize and prepare GPO for the 21st century. The Public Printer has initiated a reorganization with a chief executive officer (Public Printer), chief operating officer, and managing directors in addition to the Superintendent of Documents. 
The Public Printer and his management team also reorganized the agency into three customer-focused functional areas (Customer Services, Information Dissemination, and Plant Operations) and three support areas (Information Technology and Systems, Finance and Administration, and Human Resources). According to GPO, this interim restructuring will be used during a 2-year transitional phase. During this time, further decisions will be made about its future and organizational alignment. According to GPO officials, the Public Printer has also initiated efforts to develop a strategic plan to guide its transformation efforts. These efforts include
● conducting fact-finding activities to support plan development,
● convening meetings of top management to discuss and document the "as-is" state of the organization, and
● finalizing the plan by December 2004.
In keeping with overall industry trends, the volume of material provided to GPO to print has diminished in recent years and is creating financial challenges for the agency. According to GPO, its federal agency print jobs at one time generated close to $1 billion a year. In fiscal year 2003, the amount was just over half of that—$570 million. Federal agencies are publishing more items directly to the Web—without creating paper documents at all—and are doing more of their printing and dissemination of information without using GPO services. This reduction in demand has resulted in GPO's procured printing business, which was once financially self-sustaining, experiencing losses in 3 of the past 5 years, with a net loss of $15.8 million over that period. Similar changes have affected its sales program. The introduction of GPO Access, which allows downloading and printing of documents at no cost, has contributed to major losses to the sales program in recent years. The availability of free government documents for downloading is a boon to the public, but it clearly affects GPO's ability to generate sales revenue. 
According to the Superintendent of Documents, GPO sold 35,000 subscriptions to the Federal Register 10 years ago and now sells 2,500; at the same time, over 4 million Federal Register documents are downloaded each month from GPO Access. The Superintendent also reported that the overall volume of sales has dropped from 24.3 million copies sold in fiscal year 1993 to 4.4 million copies sold in fiscal year 2002. The sales program has operated at a loss for the past 5 years, with a net loss of $77.1 million over that period, $20 million in fiscal year 2003 alone. According to GPO, these losses are due to a downward trend in customer demand for printed publications that has significantly reduced program revenues. Ongoing technological changes are also creating challenges for GPO's longstanding structure for centralized printing and dissemination. As mentioned earlier, the requirement in Title 44 that agencies notify GPO of their published documents (if they used other printing sources) allows it to review agency documents to determine whether the documents should be disseminated to the depository libraries. If they should be, GPO can then add a rider to the agency's print contract to obtain the number of copies that it needs for dissemination. However, if agencies do not notify it of their intent to print, these documents remain unknown, becoming "fugitive documents" which may not be available to the public through the depository library program. In responding to our surveys, executive branch agencies reported that while printing requirements are declining, they are producing a significant portion of their total volume internally, generally on desktop publishing and reproduction equipment instead of large-scale printing equipment. 
In addition, while most agencies (16 of 21) reported that they have established procedures to ensure that documents that should be disseminated through the libraries are forwarded to GPO, 5 of 21 did not have such procedures, thus potentially adding to the fugitive document problem. Responding agencies also reported that although currently more government documents are still being printed than are being published electronically, publishing documents directly to the Web is increasing and expected to grow further in the future. Most agencies reported that documents currently published directly to the Web were not of the type that is required to be sent to GPO for dissemination. However, of the 5 agencies that did publish eligible documents electronically, only 1 said that it had submitted these documents to GPO. As electronic publishing continues to grow, such conditions may contribute further to the fugitive document problem. Finally, the ongoing agency shift toward electronic publishing is also creating challenges for GPO's existing relationships with its executive branch customers. In responding to our surveys, executive branch agencies expressed overall satisfaction with GPO's products and services and a desire to continue to use these services for at least part of their publishing needs. However, these agencies reported a few areas in which GPO could improve—for example, in the presentation of new products and services. Further, some agencies indicated that they were less familiar with and less likely to use GPO's electronic products and services. Specifically, these agencies were hardly or not at all familiar with services such as Web page design and development (8 of 28), Web hosting services (8 of 29), and electronic publishing services (5 of 28). As a consequence, these agencies were also less likely to use these services. 
With the expected growth in electronic publishing and other services, making customer agencies fully aware of its capabilities in these areas is important. The Public Printer and his leadership team recognize the challenges that they face in this changing environment and have embarked upon an ambitious effort to transform the agency. First and foremost, the Public Printer agrees with the need to reexamine the mission and focus of the agency within the context of technological change that is occurring. To assist in that process, our panel of printing and dissemination experts developed a series of options for GPO to consider in its planning. In summary, these options were as follows:
● Focus its mission on information dissemination as its primary goal, rather than printing. The panel suggested that GPO first needs to create a new vision of itself as a disseminator of information, not only a printer of documents. As one panel member put it, GPO should end up resembling a bank of information rather than a mint that stamps paper. Further, the panel suggested that GPO develop a business plan that emphasizes direct electronic dissemination methods over distribution of paper documents. The panel suggested that the plan also address (1) improving its Web site, GPO Access, (2) investigating methods to "push" information and documents into the hands of those that need them, (3) modernizing its production processes to publish electronically and print only when necessary, (4) promoting the use of metadata—descriptive information about the data provided—as a requirement for electronic publishing, and (5) providing increased support for the federal depository libraries' role in providing access to electronically disseminated government information.
● Demonstrate value to customers and the public. 
The panel agreed that while GPO appears to provide value to agencies because of its expertise in printing and dissemination, it is not clear that agencies and the general public realize this. Therefore, GPO needs to collect data to show that, in fact, it can provide value in printing documents, providing expert assistance in electronic dissemination, and disseminating information to the public.
● Establish partnerships with collaborating and customer agencies. According to the panel, GPO should establish partnerships with other information dissemination agencies to coordinate standards and best practices for digitizing documents and to archive documents in order to keep them permanently available to the public. In addition, the panel suggested that GPO improve and expand its partnerships with customer agencies. While most agencies recognize GPO as a resource for printing documents, it now has the capability to assist in the collection and dissemination of electronic information.
● Improve internal operations. The panel suggested that GPO would need to improve its internal operations to be successful in the very competitive printing and dissemination marketplace. For example, panel members suggested that GPO hire a chief technology officer (in addition to its chief information officer), who would focus on bringing in new printing and dissemination technologies while maintaining older technologies.
GPO officials responded positively to these results, commenting that the panel's suggestions dovetail well with their own assessments. In addition, these officials stated that they are using the results of the panel as a key part of the agency's ongoing strategic planning process. GPO also has taken a number of steps to address the issues raised by the expert panel. Specifically:
● GPO has established an Office of New Business Development that is to develop new products and service ideas that will result in increased revenues. 
GPO officials stated that they are using the results of the panel discussion to categorize and prioritize their initial compilation of ideas and, in this context, plan to assess how these ideas would improve operations and revenue.
● Regarding GPO's mission to disseminate information, GPO officials stated that its Office of Innovation and New Technology, established in early 2003, is leading an effort to transform GPO into an agency "at the cutting edge of multichannel information dissemination." A major goal in this effort is to disseminate information while still addressing the need "to electronically preserve, authenticate, and version the documents of our democracy." In addition, the Public Printer has been added to the oversight committee of the National Digital Information Infrastructure and Preservation Program, a national cooperative effort to archive and preserve digital information, led by the Library of Congress.
● Further, to address the adequacy of its internal functions, GPO's Deputy Chief of Staff stated that the agency is in the process of searching for a chief technology officer, with the intention that the current chief information officer will focus primarily on internal business processes, and the chief technology officer will focus on identifying the specific technology solutions needed to support its printing and dissemination mission.
These efforts are valuable first steps that, if properly followed through and implemented, should contribute to the success of GPO's transformation. The Public Printer recognizes that to successfully transform, GPO will have to ensure that it strategically manages its people. At the center of any serious change management initiative are the people. Thus the key to a successful transformation is to recognize the people element and implement strategies to help individuals maximize their full potential in the organization. 
In our October 2003 report, we stated that under the Public Printer's direction, GPO also had taken several steps that recognize the important role strategic human capital management plays in its transformation. For example, GPO established and filled the position of Chief Human Capital Officer (CHCO), shifted the focus of existing training, expanded opportunities for more staff to attend needed training, and enhanced recruitment strategies. We also made numerous recommendations to GPO on the steps it should take to strengthen its human capital management in support of its transformation. These recommendations focused on the following four interrelated areas:
● communicating the role of managers in GPO's transformation,
● strengthening the role of the human resources office,
● developing a strategic workforce plan to ensure GPO has the skills and knowledge it needs for the future, and
● using a strategic performance management system to drive change.
GPO has taken or plans to take steps that address these recommendations. According to the CHCO, a performance element and standard is being added to all managers' performance plans to address their role as communicators within GPO. Managers are now required to meet with their employees at a minimum of once a month, with key information from these meetings communicated to the CHCO. In addition, according to the CHCO, the human resources office has been reorganized into teams responsible for a particular GPO division, serving as a "one-stop shop" for all of the divisions' human resource needs. The intention is to fully integrate human capital management throughout the agency's operational divisions. 
All human resources office employees will be trained as human resource generalists in the full range of human resources activities including change management, strategic human resource planning, position classification, recruitment and placement, benefits, performance management, career development, and labor/employee relations. Training will be provided by a combination of in-house talent and outside vendors to upgrade the skills of current human resources staff. Additionally, GPO has hired a Director of Workforce Development, Education, and Training to manage the expanding training program at GPO. The human resources office plans to survey GPO’s operational divisions regarding their level of satisfaction with the new human resources office. As a first step in GPO’s strategic workforce plan, GPO’s CHCO plans to conduct a skills assessment of its workforce within the next 6 months. GPO’s newly hired Director of Workforce Development, Education, and Training has met with GPO’s senior managers, union leaders, employees, and skills assessment consultants to determine the methodology that will be used for the skills assessment. The skills assessment will include a number of measurement tools and methods. Employees will be asked to participate in taking assessment inventories, skills tests, and electronic and paper-based surveys. While the skills assessments are being completed, GPO’s leadership plans to identify the critical skills and competencies that GPO will need for its transformation. As an interim effort, GPO is in the process of surveying its managers to identify skills that are lacking for large groups of employees. For example, GPO’s Chief Information Officer identified the need for staff to have enhanced project management skills, and the human resources office has worked to provide training to GPO staff to address this gap. Finally, GPO’s CHCO is initiating a pay for performance pilot program. 
The plan is to pilot the new system with Senior Level Service employees; the system will offer three levels of bonuses for employees who meet at least 80 percent of their goals. GPO officials have contacted other federal agencies, including us, to benchmark pay for performance systems, and have obtained examples of performance plans and goals from at least five federal agencies and from six business and educational institutions. While GPO has made progress on human capital initiatives, significant challenges remain. For example, the restructuring and creation of many new positions within GPO produces a great deal of work for the human resources office. Developing position descriptions, posting new job opportunities, and vetting applications—all the while being reorganized and trained to do new tasks—will stretch the human resources office. Although the human resources office's culture is becoming more collaborative, program officials and human resources officials acknowledged that the cultural change is difficult and will take time. Given these challenges, continued top leadership commitment will be needed to reinforce and sustain the progress the human resources office is making to change its culture. Effective integration and alignment of GPO's human capital approaches with its strategies for achieving mission and programmatic goals and results will be a key factor in successfully transforming GPO and sustaining high performance. As GPO moves forward to draft its strategic plan, it will have the opportunity to revisit its progress in human capital management and focus the human resources office's priorities on areas that contribute most to accomplishing the goals and objectives in the strategic plan. Developing a strategic workforce plan that is linked to the strategic plan will undoubtedly be a key activity for GPO as it moves forward in the second year of its transformation. GPO is also taking steps to put greater emphasis on customer needs. 
Based on executive agencies’ responses to our surveys, we provided observations and suggestions for action to GPO. Specifically, we suggested that the agency consider ● working with executive branch agencies to examine the nature of their in-house printing and determine whether it could provide these services more economically; ● addressing the few areas in which executive branch agencies rated its products, services, and performance as below average, ● re-examining its marketing of electronic services to ensure that agencies are aware of them; and ● using the results of the surveys to work with agencies to establish processes that will ensure that eligible documents (whether printed or electronic) are forwarded to GPO for dissemination to the public, as required by law. GPO officials agreed with the issues identified by executive branch agencies and said they are already taking action to address them. According to these officials, GPO is ● taking a new direction with its Office of Sales and Marketing, including hiring an outside expert and establishing nine National Account Managers, who spend most of their time in the field building relationships with key customers, analyzing their business processes, identifying current and future needs, and offering solutions; ● working with its largest agency customer, the Department of Defense, to determine how to work more closely with large in-house printing operations; ● evaluating recommendations received from the Depository Library ● continuing to implement a Demonstration Print Procurement Project, jointly announced with the Office of Management and Budget on June 6, 2003. This project is to provide a Web-based system that will be a one-stop, integrated print ordering and invoicing system. The system is to allow agencies to order their own printing at reduced rates, with the option of buying additional printing procurement services from GPO. 
According to GPO, this project is also designed to address many of the issues identified through our executive branch surveys, particularly the depository library fugitive document problem. Such actions, although still in their early stages, should assist GPO in determining how to better serve its customers and address issues such as those involving fugitive documents. In summary, the new printing and dissemination environment at the beginning of the 21st century has created significant challenges for GPO. Agency leadership recognizes these challenges and has made a commitment to transform the agency to function effectively within this changed environment. As part of this effort, the Public Printer has taken an important step by establishing a strategic planning process, which, in part, will consider changes to the agency’s future mission and focus. Further, in realizing the importance of effective human capital management, he is establishing the foundation needed to successfully transform GPO. In addition, by placing new emphasis on its customers, the agency is focusing on a key characteristic of high-performing organizations. Fulfilling this commitment, however, will require sustained attention from GPO leadership as well as clear-sighted analysis of the challenges and the actions required in response. In the coming months, we plan to continue to work with these leaders cooperatively as they make further progress in their transformation. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the committee may have at this time. For further information, please contact Linda D. Koontz at (202) 512-6240 or by e-mail at [email protected]. Other key contributors to this testimony were Barbara Collier, Ben Crawford, Tonia Johnson, Steven Lozano, William Reinsberg, and Warren Smith. 
Senate Report 107-209 mandated that we perform a comprehensive review of the current state of printing and dissemination of government information and report on strategic options for GPO to enhance the efficiency, economy, and effectiveness of its printing and dissemination operations. In addition, the Chairman of the Legislative Branch Subcommittee, Senate Committee on Appropriations, requested us to carry out a general management review of GPO’s operations. As a result of our efforts on the mandate and request to date, we prepared interim briefings for the Legislative Branch Subcommittee, Senate Committee on Appropriations, which we presented to staff of this subcommittee on August 27, 2003, and April 1, 2004. To help explore GPO’s options for the future, we contracted with the National Academy of Sciences to convene a panel of experts to discuss (1) trends in printing, publishing, and dissemination and (2) the future role of GPO. In working with the National Academy to develop an agenda for the panel sessions, we consulted with key officials at GPO, representatives of library associations including the Association of Research Libraries and the American Library Association, and other subject matter experts. The National Academy assembled a panel of experts on printing and publishing technologies, information dissemination technologies, the printing industry, and trends in printing and dissemination. This panel met on December 8 and 9, 2003. To obtain information on GPO’s printing and dissemination activities—including revenues and costs—we collected and analyzed key documents and data, including laws and regulations, studies of GPO operations, prior audits, historical trends for printing volumes and prices, financial reports and data, and budget and appropriations data. We also interviewed appropriate officials from GPO, the Library of Congress, and the Office of Management and Budget. 
To determine how GPO collects and disseminates government information, we collected and analyzed documents and data on the depository libraries, the cataloging and indexing program, and the International Exchange Service program. We also interviewed appropriate officials from GPO.

To determine executive branch agencies’ current reported printing expenditures, equipment inventories, and preferences; familiarity and level of satisfaction with services provided by GPO; and current methods for disseminating information to the public, we developed two surveys of GPO’s customers in the executive branch.

We sent our first survey to executive agencies that are major users of GPO’s printing programs and services. It contained questions relating to the department’s or agency’s (1) familiarity with these programs and services and (2) level of satisfaction with the customer service function. These major users, according to GPO, account for the majority of printing done through GPO. This survey was sent to 11 departments that manage printing centrally, 15 component agencies within 3 departments that manage printing in a decentralized manner, and 7 independent agencies. A total of 33 departments and agencies were surveyed. The response rate for the user survey was 91 percent (30 of 33 departments and agencies).

We sent our second survey to print officers who manage printing services for departments and agencies. These print officers act as liaisons to GPO and manage in-house printing operations. This survey contained questions concerning the department’s or agency’s (1) level of satisfaction with GPO’s procured printing and information dissemination functions; (2) printing preferences, equipment inventories, and expenditures; and (3) information dissemination processes. These agencies include those that were sent the user survey plus two others that do not use GPO services. 
We sent this survey to 11 departments that manage printing centrally, 15 component agencies within 3 departments that manage printing in a decentralized manner, and 9 independent agencies. A total of 35 departments and agencies were surveyed. The response rate for the print officer survey was 83 percent (29 of 35 departments and agencies). To develop these survey instruments, we researched executive agencies’ printing and dissemination issues with the assistance of GPO Customer Services and Organizational Assistance Offices. We used this research to develop a series of questions designed to obtain and aggregate the information that we needed to answer our objectives. After we developed the questions and created the two survey instruments, we shared them with GPO officials. We received feedback on the survey questions from a number of internal GPO organizations including Printing Procurement, Customer Services, Information Dissemination, and Organizational Assistance. We pretested the executive branch surveys with the Department of Transportation and the Environmental Protection Agency. We chose these agencies because each had a long-term relationship with GPO, experience with agency printing, and familiarity with governmentwide printing and dissemination issues. Finally, we reviewed customer lists to determine the appropriate sample size for the executive branch surveys. We did not independently verify agencies’ responses to the surveys. Our work on strategic human capital management is based on our October 2003 report on that topic. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Advances in technology have led to more organizations making information available over the Internet and the World Wide Web rather than through print, significantly changing the nature of printing and information dissemination. Government Printing Office (GPO) management recognizes that the new environment in which it operates requires that the agency modernize and transform itself and the way it does business. To assist in this transformation, GAO has been performing a comprehensive review of government printing and information dissemination and of GPO's operations. In this testimony, GAO summarizes the results of its work to date, for which GAO convened a panel of experts on printing and dissemination (assembled with the help of the National Academy of Sciences) to develop options for GPO to consider in its transformation, and surveyed executive branch customers regarding their practices and preferences for printing and dissemination, as well as on their interactions with GPO. The testimony reports on how changes in the technological environment are presenting challenges to GPO and on its progress in addressing actions that GAO's work indicates could advance its transformation effort.

The changing technological environment is creating challenges for GPO. Specifically, the agency has seen declines in its printing volumes, printing revenues, and document sales. At the same time, more and more government documents are being created and downloaded electronically, many from its Web site (GPO Access). The agency's procured printing business, once self-sustaining, has experienced losses in 3 of the past 5 years, showing a net loss of $15.8 million. The sales program lost $77.1 million over the same period. In addition, these changes are creating challenges for GPO's longstanding structure for centralized printing and dissemination and its interactions with customer agencies. 
The Public Printer recognizes these challenges and in response has embarked upon an ambitious transformation effort. To assist in this effort, the panel of printing and dissemination experts GAO convened suggested that in its planning, GPO should focus on dissemination, rather than printing. The panel also provided specific options for it to consider as it transforms itself. GPO officials welcomed the options presented, commenting that the panel's suggestions dovetail well with their own assessments. In addition, these officials stated that they are using the results of the panel as a key part of the agency's ongoing strategic planning process. In addition, in October 2003, we reported that under the Public Printer's direction, GPO had taken several steps that recognize the important role that strategic human capital management plays in its transformation, including establishing and filling the position of Chief Human Capital Officer. At that time, we made numerous recommendations on the further actions it could take to strengthen its human capital management. In response, GPO is beginning to address these recommendations. For example, it has reorganized its human resources office into teams responsible for each of its divisions, serving as a "one-stop shop" for all of a division's human resource needs. It also plans to conduct a skills assessment of its workforce and is initiating a pay for performance pilot.
The number of women in the military has grown significantly in recent decades. Women now make up about 14 percent of active duty forces, up from about 2 percent in the early 1970s. Their role has also evolved from the traditional concentration in medical and administrative occupations; women are now eligible to serve in over 80 percent of all military jobs, including many air, sea, and other combat-related positions. The growing role of women has also resulted in debate within and outside of the Department of Defense (DOD) over fundamental and sometimes contentious issues, including whether physical fitness standards are fair and appropriate to both men and women. The Defense Advisory Committee on Women in the Services reported that men and women at military installations across the country are confused about the need for differing standards among the services, particularly those regulating body fat, and lack confidence in the fairness of the standards. In addition, the Rand Corporation recently reported that some military men believe that fitness standards have been adjusted to the point of being too easy for women. Physical fitness is a fundamentally important part of military life for all military personnel. DOD guidance requires that servicemembers pass physical fitness tests at least annually regardless of age and gender. Personnel who fail to meet fitness standards can be denied promotions, schooling, and other activities and may be forced to leave the military. In recent years, the downsizing of active duty forces and the increased rate of deployments and redeployments for peace operations and other activities have increased the physical demands on soldiers. DOD’s guidance, issued in 1981 and updated in 1995, requires that the services establish physical fitness and body fat programs, which include fitness requirements for all servicemembers. 
The program guidance states that individual servicemembers need to possess the cardiorespiratory endurance, muscular strength and endurance, and whole body flexibility to successfully perform in accordance with a service-specific mission and military specialty. However, the guidance does not identify requirements for specific activities or levels of difficulty. In addition, the guidance states that maintaining desirable body composition is an integral part of physical fitness, general health, and military appearance. The Assistant Secretary of Defense for Force Management Policy is responsible for oversight of the program and coordinating with the Assistant Secretary of Defense for Health Affairs, who is responsible for establishing a health promotion program to be implemented in conjunction with the fitness and body fat program.

DOD guidance states that each service must develop its own program according to its particular needs, placing primary emphasis on maintaining general health and physical fitness. Evaluation of individual fitness is an integral component of the program. DOD Instruction 1308.3 sets out a number of key requirements for this evaluation, including the following:

 The services must use physical fitness tests of cardiovascular endurance, such as running a certain distance within a specified time limit, and muscular strength and endurance, such as sit-ups and push-ups.

 All servicemembers are to be tested regardless of age. Testing standards may be adjusted for age and must be adjusted for physiological differences between men and women.

 All servicemembers are to be formally tested for the record at least annually.

 Efficiency or fitness reports must include comments if the servicemember fails to meet physical fitness standards.

DOD’s instruction also sets out body fat control policies and procedures. The instruction requires the services to use a two-tier screening process. 
If a servicemember exceeds the weight parameters for his or her height in a screening table or the member’s immediate commander determines that his or her appearance suggests an excess of body fat, then the servicemember’s percent of body fat is to be estimated. To standardize as much as possible, DOD requires the services to use similar validated circumferential equations for the prediction of body composition. The men’s equation involves measurements of the neck and waist or abdomen. The women’s equation requires measurement of the hips, waist, and neck, but allows for optional measurements of the abdomen and wrist, and/or forearm. For both the fitness and body fat components of the program, servicemembers who fail to perform successfully against the established standards are to be given at least 3 months to improve. Servicemembers who have not progressed during that time are to be referred to medical authorities for further evaluation. If servicemembers continue to fail over time, they are to be considered for administrative separation under service regulations. Two kinds of physical performance requirements are placed on members of the military: job-specific physical performance standards that are applicable to particular occupations and general physical fitness standards that are applicable to all members regardless of their occupation. The purpose of job-specific physical performance standards is to ensure that those personnel assigned to physically demanding jobs are capable of performing the requirements of those jobs. On the other hand, the primary purpose of general fitness standards is to maintain the overall health and conditioning of personnel. As such, these standards are not intended to specifically enhance the performance of a particular service mission or job. 
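The circumferential body fat estimation described above can be illustrated with a minimal sketch using the widely published Navy-style (Hodgdon-Beckett) log10 equations. The coefficients and the sample measurements below are assumptions for illustration only; each service's actual equations in use at the time may have differed.

```python
import math

def body_fat_male(neck_in, abdomen_in, height_in):
    """Estimate percent body fat for men from neck and abdomen
    circumferences and height (all in inches), using the published
    Navy-style log10 equation (assumed coefficients)."""
    return (86.010 * math.log10(abdomen_in - neck_in)
            - 70.041 * math.log10(height_in) + 36.76)

def body_fat_female(neck_in, waist_in, hip_in, height_in):
    """Estimate percent body fat for women from neck, waist, and hip
    circumferences and height (inches), per the same family of equations."""
    return (163.205 * math.log10(waist_in + hip_in - neck_in)
            - 97.684 * math.log10(height_in) - 78.387)

# Hypothetical measurements, for illustration only.
print(round(body_fat_male(neck_in=15, abdomen_in=34, height_in=70), 1))    # ~17.5
print(round(body_fat_female(neck_in=13, waist_in=30, hip_in=38, height_in=65), 1))  # ~28.6
```

Note how the male equation needs only two circumference sites while the female equation needs three, mirroring the two-site versus three-site measurement requirements described in the text.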
Section 543 of the Fiscal Year 1994 National Defense Authorization Act required the Secretary of Defense to prescribe physical performance standards for any occupation in which the Secretary determined that strength, endurance, and cardiovascular capacity were essential to the performance of duties. The act required that any such standards developed were to pertain to job activities that were commonly performed in that occupation, relevant to successful performance, and not based on gender. In other words, job-specific physical performance standards would identify the absolute minimum level needed for successful performance in those occupations. Anyone in that occupation, regardless of gender, would be required to meet the same standard.

In 1996, we reported on the development and use of gender-neutral occupationally specific performance standards in the military. Neither the Navy nor the Marine Corps had adopted occupational strength standards. Although the Army categorized each enlisted occupational specialty into one of five categories based on physical demand, it discontinued testing recruits’ physical capabilities to perform such activities in 1990 and had previously used the results of that testing only for counseling recruits about serving in certain occupations. The Air Force had categorized each of its enlisted occupations into one of eight physical demand categories. It used a strength aptitude test administered to recruits to screen out those who would be likely to have difficulty performing physically demanding jobs, but it did not incorporate the strength test into the required annual fitness evaluation for personnel in those jobs.

The DOD physical fitness program involves more than just periodic testing against standards. Passing an annual fitness test is not synonymous with maintaining a high level of health and physical fitness. 
The research literature provides a large body of information linking physical activity to health and a variety of recommendations for the amount and intensity of exercise needed to achieve fitness. For example, organizations such as the American College of Sports Medicine and the Department of Health and Human Services recommend 20 to 60 minutes of cardiovascular exercises most days of the week at a moderate level of intensity—for example over 50 percent of the maximum heart rate—as well as resistance exercises to condition the major muscle groups for strength and endurance. Some groups also recommend exercises to maintain flexibility. Although these recommendations were directed at the general U.S. population, a 1998 National Academy of Sciences report recommended that DOD personnel follow a similar regimen. DOD guidance recommends that servicemembers engage in regular physical fitness training of about 1-1/2 hours, three times a week. Duty time can be authorized for such training. Research literature also supports linking body fat percentages, cardiovascular endurance, and muscular endurance to the overall health objective. For example, the 1998 report by the National Academy of Sciences indicates that increases in the percentage of body fat are associated with health problems and a decrease in some aspects of fitness. Individuals with excess accumulation of abdominal fat appear to be at increased risk for a number of diseases. Research has identified little correlation between performance on timed runs, push-ups, sit-ups, and other fitness tests and specific military task performance. According to the 1998 National Academy of Sciences report, the majority of the military’s physically demanding occupations involve occasional to frequent lifting and load carrying. However, the report found little association between performance on push-up, sit-up, and unloaded distance running tests, and lifting and load carrying ability. 
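The "moderate intensity" threshold cited above (over 50 percent of maximum heart rate) can be sketched as follows. The 220-minus-age estimator of maximum heart rate is a common rule of thumb assumed here for illustration; the recommendations cited in the text do not prescribe a particular estimator.

```python
def moderate_hr_threshold(age, fraction=0.5):
    """Lower bound of a 'moderate intensity' heart-rate zone, in beats
    per minute, using the common 220-minus-age estimate of maximum
    heart rate (an assumption; the cited guidance names no estimator)."""
    max_hr = 220 - age
    return fraction * max_hr

# A 45-year-old exercising at over 50 percent of estimated maximum
# heart rate would stay above this threshold.
print(moderate_hr_threshold(45))  # 87.5
```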
Researchers concluded that tasks, such as unloaded distance running, were rarely a part of a soldier’s military duties and that the larger body type required to excel at lifting, for example, was different from the leaner body type required to excel at distance running.

The relationship between the percentage of body fat and task performance is more complex. Some research has found that the higher the percentage of body fat, the lower the performance in running tests. However, research also shows that women recruits who failed body fat standards were stronger than their counterparts who passed. This situation presents a dilemma for the military: setting a high body fat limit favors selection of women who are strong but may lack optimum endurance, and vice versa. The Academy’s report pointed out that, to some degree, current body fat standards may discriminate against women who would be the most capable of performing jobs requiring strength, which might be the most critical for survival in a combat situation.

In addition, the 1998 report by the National Academy of Sciences, as well as an earlier report in 1992, concluded that the “appearance” objective does not seem to be linked to performance, fitness, nutrition, or health. Research conducted in 1990 explored this relationship by having a panel of military officers and enlisted personnel rate the military appearance of 1,075 male and 251 female Army personnel in uniform, and then comparing these judgments to measures of the percent of body fat for each participant. The results showed only a “modest” correlation (0.53 for males and 0.46 for females), and the report concluded that factors other than body composition, notably subjective judgment, influence appearance ratings. The National Academy of Sciences reports recommended that the military should develop objective criteria with which to judge appearance if it deems such a standard necessary. 
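The "modest" correlations reported above (0.53 for males, 0.46 for females) are Pearson product-moment correlations between panel appearance ratings and measured body fat. The calculation behind such a figure can be sketched as follows; the rating and body fat arrays are small hypothetical values for illustration, not data from the 1990 study.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: panel appearance ratings (higher = fatter-looking)
# and measured percent body fat for six servicemembers.
ratings  = [1.5, 2.0, 2.5, 3.0, 4.0, 4.5]
body_fat = [14.0, 20.0, 17.0, 24.0, 23.0, 29.0]
print(round(pearson_r(ratings, body_fat), 2))
```

A correlation of 0.53 means body fat explains only about 28 percent (0.53 squared) of the variation in appearance ratings, which is why the report attributed most of the variation to other factors such as subjective judgment.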
The Ranking Minority Member, Subcommittee on Readiness, Senate Committee on Armed Services, requested that we review a series of issues regarding the treatment of men and women in the military. This report discusses (1) the rationale for differences in difficulty among the military services’ physical fitness standards, (2) how the services adjust the standards for gender and age, and (3) DOD’s oversight of the fitness program.

To assess the differences in the difficulty of fitness standards, adjustments to the standards for differences based on gender and age, and DOD oversight, we reviewed DOD directives and instructions, service regulations, manuals, and supporting documents; analyzed pertinent research and policy reports undertaken by DOD and a variety of independent civilian agencies; and discussed the results with officials and researchers from DOD, the military services, and the civilian agencies. We did not visit individual units to test the implementation of the guidance.

To address the issues of DOD policies, service differences, and level of oversight, we interviewed officials from the Office of the Under Secretary of Defense for Personnel and Readiness, the Office of the Assistant Secretary of Defense for Force Management Policy, and the Defense Advisory Committee on Women in the Services. In the Army, we interviewed officials and researchers from the Assistant Secretary of the Army for Manpower and Reserve Affairs; the Office of the Deputy Chief of Staff for Personnel; the Army Physical Fitness School at Fort Benning, Georgia; and the Army Medical Research and Development Command at Fort Detrick, Maryland. We interviewed Navy personnel from the Bureau of Naval Personnel and the Naval Health Research Center in San Diego, California. We met with Marine Corps personnel from the Combat Development Command in Quantico, Virginia. To complete our work in the Air Force, we interviewed officials from the Office of the Surgeon General. 
To gain additional perspectives on physical fitness programs, we reviewed various research and evaluation reports and interviewed officials from a variety of government and civilian organizations. These organizations included the National Academy of Sciences; the National Institutes of Health; the Centers for Disease Control and Prevention; the President’s Council on Physical Fitness and Sports; the American Heart Association; and the Cooper Institute for Aerobics Research in Dallas, Texas. We conducted our review between January and September 1998 in accordance with generally accepted government auditing standards.

Physical fitness programs enacted by the services are a mixture of different requirements, lacking a clear rationale for marked differences in difficulty. In some cases, differences were due simply to failures to follow stated DOD policy, while in others, differences were due largely to confusion over program objectives. The services differ significantly in the types of physical fitness tests they use and the minimum levels of performance required on those tests. These differences occur in all three testing areas—cardiovascular endurance, muscular strength and endurance, and body composition. However, all services adjust program standards for physiological differences between the sexes in all three testing areas, and for age in the case of cardiovascular and muscular strength and endurance standards.

To simplify comparisons of the cardiovascular and muscular strength and endurance standards in the tables that follow, we used a baseline age of 45 for both men and women. As shown in table 2.1, fitness standards for cardiovascular endurance differ significantly by service in the type of test used and the minimum level of performance required. Standards for running activities varied in both the distance of the test run and the required maximum time for the run. 
For a 45-year-old servicemember, the Navy requires its men and women to run 1-1/2 miles within about 17 and 18 minutes, respectively; the Army requires a 2-mile run within about 19 and 24 minutes; and the Marine Corps requires a 3-mile run within 30 minutes for men and 33 minutes for women. The Air Force tests its personnel for cardiovascular endurance by measuring the body’s oxygen consumption while riding a stationary bicycle.

The services differ in the types of tests used to assess muscle strength and endurance. The Air Force is the only service with no requirement for push-ups, sit-ups, or other tests of muscular strength and endurance. Air Force personnel acknowledged that the service is not in compliance with DOD’s policy requiring such testing but could provide no explanation. According to a 1997 study of the Air Force fitness program and DOD’s 1981 report on physical fitness in the military, muscular endurance exercises were included in the Air Force program as late as the early 1960s, but had been dropped by the early 1980s. The Air Force study, as well as a panel of experts, concluded that muscle strength and endurance training, such as sit-ups and bench and leg presses, should be added to the Air Force fitness program. According to Air Force officials, a plan to begin muscular strength and endurance testing in two phases during 1999-2000 has been endorsed by the Surgeon General’s Office and is being reviewed by the Air Staff.

While the Army, Navy, and Marine Corps all use sit-ups to test muscular strength and endurance, the minimum number required to pass these tests varied significantly across the services. Once currently pending changes take effect, the minimum number of sit-ups required for a 45-year-old man to complete within 2 minutes will be 32 in the Army, 45 in the Marine Corps, and 29 in the Navy. Both the Army and the Navy have a push-up requirement, but their standards also differed significantly. 
The Marine Corps uses pull-ups for men and flexed arm hang for women as its test of upper body strength and endurance. Table 2.2 shows the services’ minimum standards for muscular strength and endurance.

The services each use a two-tier body composition test, as required by DOD guidance. The first tier involves an initial screening in which servicemembers are required to pass a visual inspection for appearance and/or be measured against weight-for-height tables adjusted for gender. Table 2.3 shows that each of the services uses different weight-for-height values. If this initial screen is failed, then the servicemember’s percentage of body fat must be estimated by plugging circumference measurements of various body sites into service equations. The purpose of the body fat calculation is to ensure that personnel with extra weight due to muscle (not fat) are not unfairly required to leave the military. As shown in table 2.4, maximum allowable percentages of body fat vary considerably by service. The body fat percentage standards appear to bear little logical relationship to the weight-for-height values that are used as a body composition screening tool. For example, the maximum allowable Air Force weights are often higher than Army weights for a given height, although the Air Force has more stringent body fat percentage standards than those of the Army.

DOD guidance states that the services should place primary emphasis on fitness programs that develop general health and physical fitness. However, it also states that the services should establish fitness requirements in accordance with their particular mission, incorporate job-specific standards into the programs, and implement body fat programs that enhance military appearance. Officials in all the services cited health and fitness as program objectives, but indicated the degree of emphasis on other objectives varied by service. 
Service officials told us that the inclusion of multiple objectives in the guidance created confusion regarding the main purpose of the program and that the emphasis given to one or the other of these objectives differed by service, with the difficulty of the standards raised or lowered accordingly. The Navy and the Air Force focused mostly on health as the program objective. Consequently, they tended to have relatively less rigorous standards than the Army and the Marine Corps, which placed additional emphasis on fitness and appearance.

For example, Navy officials told us that they saw health as the appropriate objective of fitness programs, and their standards are set with that in mind. According to these officials, their maximum body fat standard of 22 percent for men is set at the clinical definition of obesity established by a National Institutes of Health panel in 1985, since obesity is clearly related to health problems such as diabetes, hypertension, heart disease, and cancer. However, Navy officials stressed that the standard is an upper limit, and they encourage Navy personnel to remain well below this level. In comparison, according to Marine Corps regulations and officials, the Marine Corps relies on maximum physical fitness more than any other service. Accordingly, the Marine Corps established its male body fat standard at 18 percent, the lowest level of all the services.

Despite the apparent confusion, none of the services based its general fitness standards on specific combat mission or job requirements. However, at one time the Marine Corps administered a physical readiness test of combat skills, such as simulated marches uphill at a rapid pace, rope climbing to resemble entering and leaving a hovering helicopter, and evacuation of a wounded comrade by sprinting 50 yards, lifting another Marine onto the shoulders, and returning to the starting point. 
That test has been discontinued as an evaluated test for individuals, but units such as the Marine Corps Officer Candidates School continue to conduct the test as a training tool. Marine Corps officials were unsure as to when and why the individual test was discontinued. Officials at the Army Physical Fitness School also told us that they have been studying development of a combat fitness test for infantry soldiers. The test could include exercises such as a 3-mile march carrying a 40-pound pack, a weapons qualification test, and an obstacle course. The specific tasks would be linked to a unit’s mission-essential task list. If these kinds of job-specific physical standards are developed, DOD guidance calls for them to be incorporated into the service’s physical fitness program. Such job-specific standards would then augment the general fitness standards for personnel in those specific occupations but would not supplant the requirement for periodic testing against the general fitness standards.

Officials from only two services, the Army and the Marine Corps, cited “appearance” as one of their physical fitness program objectives. They indicated that image is an important aspect of effectiveness, and because the image of a soldier is one of leanness, an excessively fat appearance could weaken the military image and undermine effectiveness. Navy officials told us that appearance is not an appropriate objective of body fat programs. However, Navy body fat results are used to determine an individual’s rating in the “military bearing” category on officer fitness reports and enlisted personnel evaluations. 
Although the references to additional objectives in the guidance have apparently led to some confusion, the Office of the Secretary of Defense official responsible for overseeing the fitness program stated that physical fitness standards are intended only to set a minimum level of general fitness and health for military personnel and are not directly related to job performance. This distinction between general fitness standards and job-specific physical performance standards was also set forth in a 1995 DOD report to the Congress on gender-neutral performance standards.

In order to clarify the purpose of the physical fitness program, we recommend that the Secretary of Defense revise DOD’s regulations to (1) clearly state that the objective of the physical fitness program is to enhance general fitness and health and (2) make clear that the program is not intended to address the capability to perform specific jobs or missions. We also recommend that the Secretary of Defense take steps to ensure that all services implement testing in all three areas cited in the regulation—cardiovascular endurance, muscular strength and endurance, and body composition.

DOD agreed with our recommendations and said that its joint service working group had reviewed DOD policy and the findings of the National Academy of Sciences’ 1998 report and determined that DOD’s policy should focus on general health and fitness. According to DOD’s response, preliminary actions are underway to revise policy documents to clarify that the objective of the program is to enhance general fitness and health, and to explain that the policy is not designed to address specific job or mission performance. DOD also agreed to require that all services test their personnel in cardiovascular endurance, muscular strength and endurance, and body composition. DOD further noted that these actions do not preclude it from establishing policies related to occupational or mission fitness needs, if such policies are needed. 
Service rationales for adjustments to the fitness standards were often different for men and women, leading to questions about the fairness of the standards applied to each gender. Some adjustments were not based on scientific data, and many were poorly documented. Efforts are underway to correct some of these problems and ensure that a consistent, science-based approach is used in setting standards for both genders. The approaches used to calculate the percentage of body fat are also inconsistent and outdated, further undermining the usefulness of the standards. Researchers found that service equations predict different body fat values when applied to the same woman, the subject population used to develop the equations is becoming increasingly less representative, and existing calculation approaches do not account for racial differences in bone density. The National Academy of Sciences has called for major changes to the program.

In addition, DOD guidance states that all servicemembers, regardless of age, will be tested for cardiovascular and muscular endurance. However, the Navy and, until recently, the Marine Corps have exempted senior personnel—ages 50 and older and 46 and older, respectively—from such testing for years. The Air Force and the Army adhere to DOD’s policy to test servicemembers throughout their careers.

The 1992 President’s Commission on the Assignment of Women in the Armed Forces looked closely at the issue of physical strength and endurance requirements. The Commission concluded that, since physical fitness standards are established to promote the highest level of general wellness in the armed forces and are not aimed at assessing capability to perform specific jobs or missions, it is appropriate to adjust the standards for physiological differences among service members. 
Although DOD policy allows adjustments to the fitness standards based on age and requires adjustments based on the physiological differences between genders, the approach to adjusting the actual standards is generally left up to each service. DOD’s current policy allows the services to set different minimums according to age to account for the physiological changes and diminished physical capabilities experienced as people age. However, DOD requires that all personnel, regardless of age, be tested against cardiovascular and muscular endurance standards at least annually. This policy dates back at least to the 1981 DOD report assessing military fitness programs. The report stated that exempting personnel from fitness testing at a certain age implied that fitness was not important after that point and diluted the involvement and support of senior leaders. Mandatory testing was viewed as a potential catalyst for change and more leader involvement and support of physical fitness. In contrast, some DOD personnel believe that requiring older personnel to meet fitness standards will result in the loss of senior leaders over time. Reports by the National Academy of Sciences and others indicate that, in addition to generally being smaller, female soldiers demonstrate only 50 to 70 percent of men’s strength, with the greatest disparity in the area of upper body strength. Women have smaller lung capacities and hearts than men. Women also carry about 10 percentage points more body fat than men and accumulate the fat in different places. As a result of these and other differences, women exerting the same effort as men in running, push-ups, and other cardiovascular and muscular strength and endurance tests are generally at a disadvantage. To reflect these and other gender-based physiological differences, DOD guidance directs that testing standards be adjusted. 
The guidance does not specify the degree of adjustment required in the case of cardiovascular and muscular strength and endurance standards. DOD guidance cites an acceptable body fat range of 18 to 26 percent for men and 26 to 36 percent for women. However, the guidance authorizes the services to establish more stringent standards based on service needs or mission but requires an 8 to 10 percentage point difference (as is reflected in the DOD minimum and maximum allowable body fat percentages) between male and female body fat standards. The guidance also states that the services may not derive, extrapolate, or adjust female body fat standards using data from male subjects, and vice versa. DOD officials said that these body fat policies are intended to ensure that service standards are based on the results of objective, gender-specific scientific research. The officials also told us that the prohibition against inferring one gender’s standard from the other’s, while contradictory to the requirement for an 8 to 10 percentage point difference, is in place because simply inferring differences is not an adequate approach to setting standards. Some officials believe that the prohibition against inferring standards should apply to all physical fitness standards and not just the body fat standards. DOD officials could provide no explanation for why there is no comparable restriction on how the other female fitness standards are set. Each service established different standards for cardiovascular endurance by gender, allowing female servicemembers more time to complete the same distance. The degree of gender difference varied by service. For example, in the case used in table 2.1, a 45-year-old woman is allowed 9 percent more time than a man in the Air Force, 10 percent more time in the Marine Corps, 11 percent more time in the Navy, and 27 percent more time in the Army. 
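The percent-more-time comparison above is a simple ratio of the two time standards. The sketch below illustrates the calculation; the run times used are hypothetical, not the actual service standards in table 2.1.

```python
# Illustrative: how much more run time a female standard allows relative
# to the male standard for the same distance. Times are hypothetical.
def extra_time_allowed(male_seconds: float, female_seconds: float) -> float:
    """Return the female allowance as a percentage above the male standard."""
    return (female_seconds / male_seconds - 1.0) * 100.0

male = 16 * 60 + 30        # hypothetical male standard: 16:30 (990 seconds)
female = 20 * 60 + 57      # hypothetical female standard: 20:57 (1,257 seconds)
print(f"{extra_time_allowed(male, female):.0f}% more time")  # about 27 percent
```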
The three services that test muscular strength and endurance make gender-based adjustments to some standards in that area, but not others (see table 2.2). Only the Navy currently relaxes its sit-up requirements for women, allowing 45-year-old women to complete about 17 percent fewer sit-ups than their male counterparts. In 1997, the Marine Corps changed its sit-up standards to require identical performances from men and women. The Army is also expected to change to identical sit-up standards in January 1999. These changes are consistent with research indicating that women may equal or exceed male performance in sit-up tests. The Navy is currently conducting a study of fitness scores across the entire service, and officials expect the sit-up standards to also change once the results are analyzed. With regard to push-ups, both the Army and the Navy adjust the standards for gender differences—the female standard in the Army is 60 percent lower than the male standard, and the female standard in the Navy is 75 percent lower than the male standard. The degree of gender adjustment in the Marine Corps cannot be assessed, since it uses different tests for men (pull-ups) and women (flexed arm hang). One prevalent approach to determining appropriate differences in fitness standards is through the use of statistics on the distribution of actual performance scores. In this approach, the services analyze data on the actual performance of males and females within their own service in push-ups, sit-ups, running, and other fitness tests. Minimum and maximum standards may then be set at a particular percentile of performance. According to service researchers, this approach is modeled after the use of bell curves, indicating the performance of students relative to one another, to assign grades in the education sector. The rationale for current or pending female fitness standards, however, has been different from males’ in at least two of the military services. 
Male standards were usually based on actual male performance data in the run, push-ups, or other such tests. However, female standards were often estimated, inferred from male data, or based on command judgment rather than actual performance in fitness tests. Also, the rationale for the standards was poorly documented in most services. Navy standards for the 1-1/2 mile run/walk, push-ups, and sit-up exercises for men and women 30 years old and above are based on the distribution of actual scores for Navy men and women identified in Navy research reports. According to Navy officials, minimum requirements are set at the 10th percentile and maximums at the 90th to 95th percentiles. However, 1-1/2 mile run standards for women under 30 years old were set by adding time to the men’s standards and not by using actual women’s run times. Effective September 1998, the maximum time allowed for women under 30 to complete the 1-1/2 mile run was lowered by as much as 1 minute 15 seconds. The new female standards were derived by multiplying the men’s standards by a factor to reflect the mean 18-percent difference between male and female aerobic capabilities, as calculated by Navy researchers, rather than by using actual performance data. According to Navy documents and discussions with officials, this change was made because officials believed that the existing 4-minute difference between male and female standards in certain categories was not appropriate and that female standards needed to be more stringent. According to Navy officials, this change is temporary pending completion of an ongoing study of fitness scores throughout the Navy. The standards for males and for females ages 30 and older were not changed. Marine Corps officials believed that their male standards dated back to studies conducted in 1967 showing actual male times for the 3-mile run, with minimums set at the 10th percentile and maximums at the 90th. 
In January 1997, the Marine Corps raised the female run distance from 1-1/2 to 3 miles to match the male requirement. According to Marine Corps officials, studies conducted in 1993 and 1996 revealed an approximate 3-minute difference, on average, between the male and female run times. The resultant female standards were then established by adding the 3-minute average difference to the existing male standards. Marine Corps officials stated that although the data needed to base the new female standards directly on actual female performance times had been developed, the process described above was used instead. A 1995 study by the Army concluded that its current physical fitness program contained gender disparities, with some women’s standards being less demanding than they should be and not based on scientific research. For example, according to the report, research indicates that women’s world record times for events similar to the 2-mile run are 8 to 12 percent slower than men’s, but Army standards allow women to run 19 percent slower than men and still get the same score. Similarly, research found that women performed sit-ups at 95 to 110 percent of the male rate, but Army standards required women to perform at only 93 percent of the men’s standards. Officials at the Army Physical Fitness School could not fully document the rationale behind the standards. They believed that the minimum requirements were based on actual data collected in the early 1980s, but the incremental steps up to the maximum scores were based on simple numerical progressions, not actual performance data. For example, according to Army officials, the difference between the minimum and maximum requirement in the 2-mile run was set at exactly 4 minutes, regardless of gender or age group. Additional points above the minimum were awarded for every 6 seconds shaved off the minimum requirement. In the two youngest age groups, women’s requirements were exactly 3 minutes slower than men’s. 
Beginning in October 1998, the Army was scheduled to implement new standards based on a more scientifically based approach, with a gender-neutral “equal points for equal effort” policy. The new minimum requirements are generally based on the 8th percentile of a sample of actual scores collected by the Army’s 1995 study, the maximums on performances at the 90th percentile, and both requirements are gradually reduced in 5-year increments as age increases. The new standards generally toughen the requirements for both sexes, requiring women to perform the same number of sit-ups as men, female run times to be set about 14 to 16 percent slower than male times, and female push-up requirements to increase from 44 to about 50 percent of the male standards. According to the Army study, these changes are consistent with a narrowing physical performance gap between the genders in recent years. The Army now plans to implement these new standards in January 1999. Air Force officials could provide no studies or other records to document the rationale for their cardiovascular endurance or body fat standards. However, according to Air Force officials, an oral history of the standards was developed through discussions with officers previously responsible for the program. According to the oral history, the cardiovascular standard was based on performance statistics from a population of Air Force men and women in the early 1990s. Researchers recommended that the minimum standard be set at the 20th percentile of performance because that was the point with the largest incremental gain in health benefits between percentile groups. However, Air Force officials wanted a higher standard for readiness reasons; as a result, the next percentile grouping up, the 40th percentile, was selected as the minimum standard. Female standards were set the same way and at the same level. 
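The percentile-based approach described above, setting minimums at a low percentile of actual scores and maximums at a high percentile, can be sketched as follows. The scores below are randomly generated stand-ins, not service data, and the 8th/90th percentile cut points mirror the Army example.

```python
# Sketch of percentile-based standard setting: minimums at a low
# percentile of a sample of actual scores, maximums at a high percentile.
# The push-up counts are simulated stand-ins, not service data.
import random
import statistics

random.seed(0)
scores = [max(0, int(random.gauss(mu=40, sigma=12))) for _ in range(1000)]

# statistics.quantiles with n=100 returns the 1st..99th percentile cut points
pct = statistics.quantiles(scores, n=100)
minimum_standard = pct[7]    # 8th percentile: the minimum passing score
maximum_standard = pct[89]   # 90th percentile: the score earning maximum points

print(f"minimum standard: {minimum_standard:.0f} push-ups")
print(f"maximum standard: {maximum_standard:.0f} push-ups")
```

Because the cut points come from the score distribution itself, roughly 8 percent of the sampled population falls below the minimum and roughly 10 percent meets or exceeds the maximum, which matches the bell-curve grading analogy the service researchers cited.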
Experts indicate that it is appropriate to base gender-specific body fat standards on studies of the level of fat found in populations of physically fit men and women or on life insurance actuarial studies of the weights for heights associated with long life and good health. However, at least two services, the Army and the Navy, based their female body fat standards on different rationales than the male standards. Officials from DOD and the other services could not clearly document the basis for the standards. DOD’s original body fat standards were established in 1981 based on the recommendations of the study panel chartered to report on physical fitness in the military. According to the National Academy of Sciences’ 1998 report, the study panel recommended that both the male and female body fat standards be based on scientific texts indicating that the average body fat of physically fit young men was 20 percent and about 30 percent for fit young women, including a 5-percent margin for statistical error. DOD’s guidance incorporated the 20-percent goal for men but lowered the female goal to 26 percent. According to the Academy’s report, DOD decreased the female goal “in the belief that it was desirable to recruit women whose body fat was closer to that of the average man, as such women, possessing a higher than average proportion of fat free mass, might also be more similar to men in strength and endurance.” DOD’s original body fat standards were in effect until 1995, when they were changed to the current level of 18 to 26 percent for men and 26 to 36 percent for women. DOD officials had no documentation of the rationale for the change. However, service officials told us that the change was based simply on the desire to cover the full range of standards in effect in the services at the time and that no scientific research was conducted. 
Similarly, the weights listed in DOD’s screening tables for body fat (see table 2.3) are based on the National Institutes of Health 1985 definition of obesity, or 120 percent of certain weights-for-height identified in actuarial tables produced by the Metropolitan Life Insurance Company in 1983. However, we could find little agreement between DOD’s tables and the Metropolitan Life tables they are supposed to match. Until September 1998, Navy regulations based male and female body fat standards on different rationales. The male standard was based on the 1985 National Institutes of Health definition of obesity. Navy scientists converted the 1983 Metropolitan Life weight-for-height values into mean body fat percentages of about 22 percent for males and 33 percent for females, and recommended these percentages be adopted as maximum Navy body fat standards. The recommendation for males was adopted without change. However, according to discussions with Navy officials, command concerns about appearance resulted in lowering the female standard to 30 percent. The Navy revised its regulations in September 1998 to raise the female standard back to the 33 percent originally recommended. Marine Corps officials could not document a clear, scientific basis for either its male or female standards. However, based on our discussions with Marine Corps officials and review of regulations, the Marine Corps body fat standards appear to be based on command judgments regarding fitness and appearance, rather than health-based actuarial studies or other scientific bases, although some limited research appears to have been considered. For example, Marine Corps regulations state that, more than any other service, the Marine Corps relies on the maximum fitness of its personnel. As a result, according to the regulation, the maximum allowable percentage of body fat for male Marines was set at 18 percent. 
This equates to just below the midpoint of the interval between the 10-percent body fat level said by the regulation to be exhibited by marathon runners and the 30-percent level said by the regulation to represent gross obesity. Similarly, the regulation sets the female standard at 26 percent, or about 80 percent of the way up the interval between the 11-percent body fat level said by the regulation to be exhibited by average gymnasts and the 30-percent level said by the regulation to represent gross obesity in women. The Army’s current body fat standards of 20 to 26 percent for men and 30 to 36 percent for women, according to research cited in the 1998 National Academy of Sciences report and our discussions with Army officials, are based on different rationales. The 20-percent male minimum is based on Army data on young male soldiers dating back to the 1980s. The 26-percent male maximum was a result of increasing the 20-percent minimum figure by 2 percentage points roughly for every 10 years of age to accommodate increases associated with aging. The Army’s current female standards were established in 1991. Prior to that year, the female standards were 28 to 34 percent, which Army officials told us were determined simply by adding 8 percentage points to the male minimum for each age category. The female standard was also viewed as unfairly restrictive compared with the men’s standard. For example, an Army study found that the standard provided young women only a 1-to-3 percentage point margin over the mean body fat for young female recruits, while the men’s standard provided a 4-to-6 percentage point margin over the mean for young male recruits. In 1991, the women’s standard was increased by 2 percentage points for each age grouping, raising it to the current level of 30 to 36 percent. Air Force officials could not determine the basis for their body fat standards. Consequently, they were also unable to tell us the basis for adjustments to the standards for gender. 
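The screening-weight derivation described earlier, 120 percent of an actuarial weight-for-height, is a simple multiplication. The sketch below illustrates it; the sample actuarial weight is a made-up value, not an entry from the 1983 Metropolitan Life tables.

```python
# Illustrative derivation of a body fat screening weight as 120 percent of
# an actuarial weight-for-height, per the 1985 NIH definition of obesity.
# The 150-lb actuarial weight is a hypothetical example, not a value from
# the Metropolitan Life tables.
def screening_weight(actuarial_weight_lbs: float) -> float:
    """Return the screening weight: 120 percent of the actuarial weight."""
    return actuarial_weight_lbs * 1.20

print(f"{screening_weight(150):.0f} lbs")  # a 150-lb actuarial weight screens at 180 lbs
```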
The basic approach used by each service to determine the percentage of body fat has been to first develop a set of measures of the circumference of various body sites, such as the waist and neck for men, and the neck, waist, and hips for women. Next, these measures are entered into gender-specific equations developed by each service to estimate the percentage of body fat. These equations were developed through analysis of population samples for relationships between measures of various body sites and the percentage of body fat, as validated against underwater weighing techniques. Researchers found, however, that this approach yields consistent results across the services for men, but not women. According to service researchers, men have basically one body type, whereas women have a variety of body types. The female body fat equations do not adjust well for the variety of female body types and thus do not consistently provide accurate predictions of the percentage of body fat. The three different body fat equations used by the services can result in different percentages of body fat when applied to the same woman. For example, a test we conducted found that the estimated percentage of body fat for the same woman was 42 percent using the Army equation, 29 percent using the Navy and Air Force equations, and 27 percent using the Marine Corps equation. The use of different equations producing such wide variation in estimates can result not only in inequities, but also in outcomes that are inconsistent with the intended objective. For example, even though the Marine Corps set its body fat standards at the most stringent level of any service, the equation it uses resulted in the lowest estimate of body fat of all the services. Researchers also report that the populations of active-duty soldiers used to validate the equations have, over time, become less representative of the ethnic and age diversity of the current military population. 
The Army’s female equation, for example, was validated largely on a Caucasian population because of problems in underwater weighing of African American and Hispanic subjects, many of whom withdrew from the testing because they could not swim. According to the National Academy of Sciences’ 1998 report, because the percentage of female and non-Caucasian soldiers is increasing, and the average age of female soldiers is also increasing, the subject population used to develop and validate the equations is becoming increasingly less representative. Table 3.1 shows the ethnicity of U.S. servicemembers as of the end of fiscal year 1997. The National Academy of Sciences’ 1998 report also concluded that the service equations are outdated because they fail to adjust for heavier bone densities in minorities. In the past, all services compared the results of their body fat equations with underwater weighing methods as a reference to check for accuracy and standardization. These techniques were based on so-called two-compartment models, which partition body weight into two basic components: fat and fat free mass (defined as the difference between body weight and fat mass). However, two-compartment models do not account for racial differences in bone density, thus potentially overstating the weight of minorities. In contrast, newer four-compartment models measure bone mass, total body water, body weight, and body volume, in part based on underwater weighing techniques. The Academy’s report concluded that there is now agreement that the four-compartment models developed over the past decade are superior to the earlier two-compartment models. The Marine Corps was the first to base its equations on the newer four-compartment models, beginning in October 1997. Navy researchers are currently developing equations based on four-compartment models for the remaining services. 
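A circumference-based estimate of the kind described above can be sketched as follows. The logarithmic form and coefficients shown are from one widely published Navy-style equation (measurements in inches); they are illustrative and may not match the exact equations each service used during the period covered by this report, and the sample measurements are hypothetical.

```python
import math

# Sketch of circumference-based body fat estimation. The log10 form and
# coefficients follow one widely published Navy-style equation
# (measurements in inches); they are illustrative, not necessarily the
# exact equations any service used.
def body_fat_male(abdomen_in: float, neck_in: float, height_in: float) -> float:
    """Estimated male body fat percentage from abdomen, neck, and height."""
    return (86.010 * math.log10(abdomen_in - neck_in)
            - 70.041 * math.log10(height_in) + 36.76)

def body_fat_female(waist_in: float, hip_in: float, neck_in: float,
                    height_in: float) -> float:
    """Estimated female body fat percentage from waist, hip, neck, and height."""
    return (163.205 * math.log10(waist_in + hip_in - neck_in)
            - 97.684 * math.log10(height_in) - 78.387)

# Hypothetical measurements for illustration
print(f"male estimate:   {body_fat_male(34, 15, 70):.1f}%")
print(f"female estimate: {body_fat_female(30, 38, 13, 65):.1f}%")
```

Because each service fit its own coefficients to its own validation sample, applying different services' equations to the same set of measurements can yield the divergent estimates described in the text.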
DOD guidance states that all servicemembers, regardless of age, will be tested for cardiovascular and muscular endurance. However, the Navy has exempted personnel age 50 and older, and the Marine Corps personnel age 46 and older, from such testing for years due to concerns about retaining senior leaders. In contrast, members of the Army and the Air Force are tested throughout their careers, in accordance with DOD policy. These inconsistencies can create significant inequities. For example, a 50-year-old, 70-inch tall Army male needed to weigh no more than 192 pounds and complete sit-ups, push-ups, and a 2-mile run within specified timeframes to stay in that service. However, until recently, a Marine Corps male of the same age and height would have had to maintain a similar weight, but would not have had to pass any cardiovascular endurance or strength tests to remain in the Marine Corps. The Marine Corps recognized that its fitness testing policy did not comply with DOD guidance and therefore changed the policy in July 1998, requiring that Marines of all ages pass tests in distance running, sit-ups, pull-ups, and the flexed arm hang. Navy officials believed that their fitness testing policy would also be changed, pending the results of an ongoing review. There is also disagreement over whether to relax body fat standards as servicemembers age. All services relax their cardiovascular and muscular endurance standards as personnel age. However, the Navy and the Marine Corps did not carry this policy over to body fat standards. Older members of those services must meet the same body fat standards as the youngest members of their respective services. In contrast, the Army allows a 6-percentage point increase, and the Air Force a 4-percentage point increase, as their men and women age. This difference can be significant. 
For example, a 20-year-old female weighing 130 pounds would be allowed to gain about 8 pounds of fat by the age of 40 in the Army, while in the Navy and the Marine Corps no increase would be allowed. For a 20-year-old male weighing 200 pounds, the difference would amount to about 12 pounds. DOD guidance allows relaxation of the cardiovascular and muscular endurance standards with age but does not address this issue in the case of body fat standards. Army officials argued that it is realistic to relax body fat standards as personnel age, but Navy officials argued that relaxing the standards implies that health is less important as men and women age. Researchers acknowledge that weight becomes progressively more difficult to maintain with age. There is a gradual loss of muscle mass as one ages, which may be replaced with fat over time. Nonetheless, consistent with a focus on good health, neither the 1998 National Academy of Sciences report nor the 1995 federal Dietary Guidelines for Americans found justification for allowing an increase in body weight with age. While some flexibility and discretion should be available to the services in setting their physical fitness policies, all of the services should follow clear and consistent policies, and adjustments for age and gender should be scientifically based. Therefore, we recommend that the Secretary of Defense revise the physical fitness guidance to (1) establish clear DOD-wide policy for age- and gender-based adjustments to general fitness and body fat standards, requiring all services to derive them scientifically, clearly document the basis used, and submit exceptions for approval and (2) establish a DOD-wide approach, based on current scientific research, to estimating body fat percentages. We also recommend that the Secretary of Defense take steps to ensure that the services adhere to the policy requiring physical fitness testing of all servicemembers, regardless of age. DOD agreed with our recommendations. 
It said that it was already analyzing revisions to the standards; that considerations of age and gender will be required to be scientifically derived, with any exceptions to the policy submitted to the Secretary for approval; and that the services will be required to provide a statement in the annual fitness report that they are testing all military members, regardless of age. DOD also said that it has been working toward establishing a single approach to body fat measurement and that a change to establish a DOD-wide approach to estimating body fat will be included in the revised fitness and body fat policy to be completed by the end of 1999. Physical fitness oversight problems have persisted in DOD without resolution for a considerable period of time. Moreover, DOD has not enforced the annual reporting requirement or identified a common set of statistics needed to assess fitness. Consequently, it is unable to assess the effectiveness of the program. Comparisons of limited data we were able to obtain raised questions about program effectiveness. Failure rates among the services appear to be markedly different, with women failing at significantly higher rates than men. In addition, concern about the fitness of recruits and younger servicemembers is increasing. Problems, such as confusion over multiple fitness program objectives and failure to enforce key policy requirements, have persisted since at least the early 1980s. For example, Army research traces confusion between health and military performance objectives to the 1981 DOD Study of the Military Services Physical Fitness. This study acknowledged the benefits of designing programs with a health objective, but concluded that the goal of military physical fitness programs should be to make military personnel as fit for combat as possible. 
DOD’s 1981 Physical Fitness and Weight Control directive stated that physical fitness is essential to the general health of military personnel and that primary emphasis should be placed on programs that maintain physical fitness. However, the guidance also stated that ideally, physical training should be designed to develop physical skills needed in combat. Similarly, DOD’s requirement that all personnel, regardless of age, be tested for physical fitness is clearly spelled out in DOD’s 1981 directive, and DOD’s 1981 report on fitness in the military notes that the Navy and the Marine Corps were already exempting older personnel from fitness testing at that time. The requirements for each service to evaluate both cardiovascular and muscular endurance and provide annual reports that assess the program can be traced back at least to the 1995 version of the fitness guidance. However, the Air Force had stopped testing for muscular endurance by the early 1980s, and at the time of our fieldwork, none of the services had ever provided the required annual program reports. Officials from the Office of the Assistant Secretary of Defense for Force Management Policy said that they were aware of the problems with the physical fitness and body fat instructions and directives as well as noncompliance with DOD policies. According to these officials, a joint service working group has been examining these problems since the summer of 1996. However, the officials cited two factors delaying corrective action. First, there was little consensus among the working group on the usefulness of existing research for resolving DOD fitness policy issues. As a result, it was deemed prudent to wait until the National Academy of Sciences completed its study on body fat policies before revising DOD policy. 
Second, the office that monitors the services’ fitness programs has multiple responsibilities and frequent personnel turnover, and has no resident technical expert in exercise physiology, all of which limit the office’s capability to quickly resolve such complex issues. Similar problems, however, were identified in DOD’s 1981 report on fitness programs. For example, the report found that, compared with other programs, physical fitness received little emphasis or resource commitment in DOD, and there was a lack of fitness-related research and qualified professional leadership and personnel with professional degrees in physical fitness. The report provided a number of recommendations to improve DOD management of physical fitness, including one for the Office of the Secretary of Defense to establish a DOD Committee for Physical Fitness to provide coordinated and continuing review and evaluation of the services’ physical fitness programs and research. In 1985, DOD established a Joint Committee on Fitness to establish internal operating objectives for service fitness programs and function as a focal point for the exchange of policy, program, and research information. However, according to DOD officials, this committee stopped meeting and has been inactive for some time. These officials were unsure of the specific time or reasons the Committee stopped meeting. DOD officials told us that action to correct some of these problems has begun. For example, according to DOD, initial agreement has been reached to continue to study implementing one body fat equation for men and one for women across all services. Additional recommendations contained in the National Academy of Sciences’ report are still being reviewed, but drafting of policy revisions is planned for the fall of 1998. DOD officials also acknowledged that enforcement of the annual reporting requirement could have provided a useful monitoring mechanism. 
After our discussions, the Office of the Assistant Secretary of Defense informed the services by memorandum dated March 24, 1998, that they would now be required to provide the annual reports. By September 1998, all of the services had provided the initial reports. DOD and service officials also noted that the DOD fitness program could benefit from the reestablishment of a joint fitness committee at the Secretary of Defense level to help steer and accept policy recommendations. DOD has not defined the basic information needed to monitor the fitness of military service personnel. For example, information such as the number of annual failures and the characteristics of those who fail, the results of remedial programs, and the number and characteristics of those who are separated each year for failure to meet fitness standards is key to understanding the program. However, the services could not consistently provide this information. Similarly, the 1981 DOD report on fitness in the military also reported that the services could not accurately assess the fitness of their personnel and called for systems to be established to monitor and measure program effectiveness. DOD and the services maintain a variety of statistics to describe various aspects of the physical fitness programs. However, this information is difficult to compare across services and time periods to provide meaningful conclusions about the level of fitness in the military. Differences in comprehensiveness, in the way in which data is aggregated, or other problems create comparison problems. For example, according to officials, the Army does not maintain a servicewide database on physical fitness test results. The responsibility for maintaining this information is decentralized to the unit level. Further, Navy officials told us that they do not separate their data by gender, so comparisons of male and female performance against the standards are not available. 
Other problems included unreliable information due to unit underreporting, results not separated to identify other key characteristics such as rank, or data on recent years not available due to system changes. As a result of these problems, we were unable to determine and compare fitness and body fat failure rates over time, separation rates due to repeated failures of the fitness standards, and other such key information. According to service officials, most fitness-related separations result from failure to achieve the body fat standards. For example, as shown in table 4.1, an average of about 4,600 enlisted personnel were separated during 1996 and 1997 for failing body fat standards. Data on officers was not consistently available. The number of personnel separated due to failures of the cardiovascular and muscular endurance standards was generally not available, but service officials believed that the number was relatively small. Although available data cannot be directly compared across services, our comparisons of limited available data raised questions about the effectiveness of the fitness programs. For example, data provided to us by the services indicates that failure rates in cardiovascular and muscular endurance tests are markedly different. A 1995 study by the Army Physical Fitness School found overall officer and enlisted failure rates at 12.5 percent. In comparison, failure rates in the Air Force totaled about 4.6 percent during 1997 and failure rates in the Marine Corps totaled about 1 percent, based on 1997 data. The reason for the large differences is unclear. For example, the Marine Corps appears to have the most difficult standards, but its failure rate appears to be the lowest. Available data on body fat failures showed somewhat less pronounced differences. For example, during 1997 nearly 5 percent of Army officers and enlisted personnel had their personnel records flagged for being overweight. 
In contrast, as of March 1998, about 2 percent of Air Force personnel were in weight management programs. Service data also indicated that women consistently fail the fitness standards at slightly higher rates than men. For example, the data cited above indicates that Army women failed the cardiovascular and muscular endurance standards at a 13-percent rate in 1995, while men failed at an 11-percent rate. Air Force data indicates that in 1997, women in that service failed in 9 percent of the cases, while men failed in 4 percent. Based on 1997 data, Marine women failed at a rate of 1.1 percent, while male Marines failed at a rate of 0.8 percent. Available data on the results of the body fat test was consistent with this trend. For example, Army data for 1997 showed that female Army personnel failed in about 6 percent of the cases, while Army men failed in about 5 percent of the cases. As of March 1998, about 4 percent of Air Force women were in weight management programs versus 2 percent of men. Officials also raised concerns about the lack of fitness of recruits and younger servicemembers in recent years. For example, the fitness of career soldiers was viewed as satisfactory, but the 1995 Army Physical Fitness School study found that 32 percent of women and 27 percent of men aged 17 to 21 failed the fitness test. By 1997, according to Army Physical Fitness School officials, a similar study found the failure rate was 55 percent of women and 38 percent of the men. Similarly, data provided by the Marine Corps showed that physical fitness test scores for incoming male and female recruits at one location were 10 and 7 percentage points lower, respectively, in 1996 than in 1992. Officials in both services believed the trends were due to the increasing lack of fitness in our society. In the early 1960s, national health surveys found that about 24 percent of Americans ages 20 to 74 were overweight. 
However, according to a recent report by the National Institutes of Health, about 55 percent of the U.S. population is now considered overweight or obese. The reasons for the increase are unclear. Some have pointed to an increasingly sedentary lifestyle, with more focus on computers and electronic games, and less time spent exercising or playing sports. Others have pointed to social or cultural changes. Officials in both the Army and the Marine Corps, however, believe that training was able to improve the fitness of these personnel as they progressed through military life. Officials in the Navy and the Air Force were unsure whether the same problem was occurring in their services. We recommend that the Secretary of Defense revise the physical fitness guidance to (1) establish a mechanism for providing policy and research coordination of the military services’ physical fitness and body fat programs, (2) define the statistical information needed to monitor fitness trends and ensure program effectiveness, and (3) require that this information be maintained by all services and provided in the currently required annual reports. DOD agreed with each recommendation. It said that the joint services working group provides the nucleus of a body of experts that can advise DOD policymakers on research and policy issues and that it is currently studying the best way to formalize the mechanism we called for. This mechanism, as well as the statistical information needed to monitor program trends and effectiveness, is to be included in the upcoming revision to DOD fitness policy.
Pursuant to a congressional request, GAO reviewed the military services' physical fitness and body fat standards to determine if: (1) differences exist among the military services in physical fitness standards and tests and the basis for any difference; (2) the services have a sound basis for adjusting the standards for gender and age; and (3) the Department of Defense (DOD) exercises adequate oversight of the fitness program. GAO noted that: (1) significant differences exist in the tests and standards that the military services use to measure physical fitness; (2) these differences reflect varying levels of difficulty in required performance in all testing areas--cardiovascular endurance, muscular strength and endurance, and percentage of allowable body fat--and occurred for different reasons; (3) specifically, services did not always adhere to DOD guidance for fitness testing or, in some cases, interpreted the guidance differently; (4) service officials stated that confusion over the program's objectives, stemming from conflicting statements in DOD's guidance, contributed to differences among the services; (5) adjustments to account for physiological differences by age and gender are, according to experts, appropriate for general fitness and health standards, and DOD guidance requires that gender-based adjustments be made; (6) although each of the services adjusts for gender, the degree of adjustment varies considerably; (7) inconsistent and sometimes arbitrary approaches to adjusting the standards have contributed to questions concerning the fairness of the standards applied to military men and women; (8) body fat standards are also questionable due to: (a) differences in each service's equations for estimating body fat, resulting in estimates ranging between 27 and 42 percent for the same woman; (b) outdated measurement approaches that did not account for racial differences in bone density; and (c) changes in ethnicity and other population characteristics of 
the current military that raise questions about whether the populations used to develop the equations represent the populations in today's military; (9) despite a clear requirement for all services to test all personnel regardless of age, the Navy and, until recently, the Marine Corps have exempted older personnel from fitness testing for years because of concerns about being able to retain senior leaders; (10) DOD's guidance and oversight of the service physical fitness programs are not adequate; (11) multiple program objectives and lack of DOD monitoring of service compliance with key policies have persisted since at least the early 1980s without resolution; (12) DOD has not enforced annual reporting requirements or identified a common set of statistics to use in monitoring the services' fitness programs; (13) the statistics currently maintained by the services lack standardization; and (14) the limited data available raise questions about program effectiveness because failure rates appear to be markedly different among the services and women appear to fail at significantly higher rates than men.
DOE is responsible for a nationwide complex of facilities created during World War II and the Cold War to research, produce, and test nuclear weapons. Much of the complex is no longer in productive use, but contains vast quantities of nuclear and hazardous waste and other materials related to the production of nuclear material. Since the 1980s, DOE has been planning and carrying out activities around the complex to clean up, contain, safely store, and dispose of these materials. It is a daunting challenge, involving the development of complicated technologies, costing about $220 billion, and expected to take 70 years or longer. DOE has reported completing its cleanup work at 74 of the 114 sites in the complex, but those sites were small and the least difficult to deal with. The sites remaining to be cleaned up present enormous challenges to DOE. DOE’s cleanup program is carried out primarily under two environmental laws: the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA), as amended, and the Resource Conservation and Recovery Act of 1976 (RCRA), as amended. Under section 120 of CERCLA, the Environmental Protection Agency (EPA) must, where appropriate, evaluate hazardous waste sites at DOE’s facilities to determine whether the waste sites qualify for inclusion on the National Priorities List, EPA’s list of the nation’s most serious hazardous waste sites. For each facility listed on the National Priorities List, section 120(e)(2) of CERCLA requires DOE to enter into an interagency agreement with EPA for the completion of all necessary remedial actions at the facility. The interagency agreement must include, among other things, the selection of and schedule for the completion of the remedial action. Interagency agreements are revised, as necessary, to incorporate new information, adjust schedules, and address changing conditions. These agreements often include the affected states as parties to the agreements. 
These agreements may be known as Federal Facility Agreements or Tri-Party Agreements. Under amendments to RCRA contained in section 105 of the Federal Facility Compliance Act of 1992, DOE generally must develop site treatment plans for its mixed-waste sites. These plans are submitted for approval to states authorized by EPA to perform regulatory responsibilities for RCRA within their borders or to EPA if the state does not have the required authority. Upon approval of the treatment plans, the state or EPA must issue an order requiring compliance with the approved plan. The agreements are generally known as Federal Facility Compliance orders. DOE carries out its cleanup program through the Assistant Secretary for Environmental Management and in consultation with a variety of stakeholders. The Assistant Secretary directs DOE’s cleanup program at those sites under her direct control, including Hanford, Washington; Idaho Falls, Idaho; Savannah River, South Carolina; Rocky Flats, Colorado; and Fernald, Ohio; and is also responsible for the cleanup programs at other DOE sites, including Oak Ridge, Tennessee; the Nevada Test Site, Nevada; and Los Alamos National Laboratory, New Mexico. Many other stakeholders are involved in the cleanup. These include the federal EPA and state environmental agencies, county and local governmental agencies, citizen groups, advisory groups, Native American tribes, and other organizations. In most cases, DOE’s regulators are parties to the compliance agreements. Other stakeholders advocate their views through various public involvement processes including site-specific advisory boards. 
The 70 compliance agreements at DOE sites vary greatly but can be divided into three main types: (1) 29 are agreements specifically required by CERCLA to address cleanup of federal sites on EPA’s national priorities list of the nation’s worst hazardous waste sites or by RCRA to address the management of hazardous waste or mixed radioactive and hazardous waste at DOE facilities, (2) 6 are court-ordered agreements resulting from lawsuits initiated primarily by states, and (3) 35 are other agreements, including state administrative orders enforcing state hazardous waste management laws. All of DOE’s major sites have at least one compliance agreement in place, and many of these sites have all three types of agreements. Regardless of type, the agreements all contain enforceable milestones that DOE has agreed to meet. Collectively, as of December 2001, the 70 agreements had almost 7,200 schedule milestones. The milestones range from completing a report or obtaining a permit to finishing small cleanup actions or major cleanup projects. Table 1 shows, for each type of agreement, the number of sites and the number of schedule milestones they contain. See appendix I for a complete list of the 70 compliance agreements and information on the schedule milestones they contain. Agreements of the first type—those specifically required by CERCLA or by RCRA—are in effect at all of DOE’s major sites. They tend to cover a relatively large number of cleanup activities and have the majority of schedule milestones that DOE must meet. Even within this category of agreements, however, the number of milestones in a particular agreement varies widely. For example, the Tri-Party Agreement at the Hanford site, which implements both CERCLA and RCRA requirements, contains 951 milestones, and more milestones will be added in the future. The agreement addresses nearly all of the cleanup work and many administrative processes to be completed at the site over the next 70 years. 
At another site, the agreement implementing CERCLA requirements at DOE’s Brookhaven National Laboratory, New York, contains 63 milestones and more milestones will be added in the future. This agreement also addresses most of the cleanup activities that will occur at that site. Several factors can influence the number of milestones in an agreement, including the extent of environmental contamination and the preferences of the regulators. Agreements that implement court-ordered settlements exist at only a few DOE sites, tend to be focused on a specific issue or concern, and have fewer associated schedule milestones. These agreements are typically between DOE and states. The issues addressed by the agreements ranged from treating high-level waste so it could be disposed of outside the state to submitting permit applications for treating, storing, and disposing of hazardous wastes in specific locations. For example, at the Idaho National Engineering and Environmental Laboratory, a settlement agreement containing 33 milestones was signed that, among other things, established a schedule for removing used (“spent”) nuclear fuel from Idaho. The agreement was between DOE, the state of Idaho, and the U.S. Navy, because spent nuclear fuel from Navy ships is stored at the Idaho Falls site. The settlement agreement resolved a long-standing dispute between Idaho and DOE about shipping waste in and out of the state. The remaining agreements are based on either federal or state environmental laws and address a variety of purposes, such as cleaning up spills of hazardous waste or remediating groundwater contamination, and have a wide-ranging number of milestones. For example, an agreement at DOE’s Fernald, Ohio, site contains only four milestones and addresses neutralizing and removing hazardous waste at the site. 
In contrast, an agreement at the Nevada Test Site contains 464 milestones and addresses identifying locations of potential contamination, implementing corrective actions, and implementing specific sampling and monitoring requirements. DOE reported that, as of December 2001, it had completed 4,558 of the 7,186 milestones in these agreements and had finished about 80 percent of those completed milestones by the time originally scheduled in the agreements. Many of the milestones completed either have been administrative, such as issuing a report, or have involved completing some kind of step in the cleanup process, such as conducting certain tests. Although such process steps may be important in arriving at eventual cleanup, they are an unreliable measure of how much actual cleanup has been accomplished at the sites. When DOE misses a milestone, regulators have several options, including negotiating a new date or assessing a penalty. Thus far, regulators have generally been willing to negotiate extensions when DOE found itself unable to complete a milestone on time, approving about 93 percent of DOE’s requests for milestone changes. In 13 cases, regulators (generally EPA) took enforcement actions for not meeting a milestone date. The 13 enforcement actions resulted in DOE making $1.8 million in monetary payments and about $4 million in other penalties (such as added work requirements), and were for problems such as delaying the construction of a mixed-waste laboratory or the selection of a method to remove and treat contamination from soil. At the sites we visited, regulators said that, so far, they had been willing to take a collaborative and flexible approach to extending milestones. However, regulators said that they generally were unwilling to extend milestones just to accommodate lower funding levels by DOE. 
At one site, we found instances in which this concern had grown to the point that DOE decided to adhere to a sensitive milestone rather than to propose using a less expensive approach that would have taken longer. DOE reported completing about two-thirds of the 7,186 milestones contained in its compliance agreements as of December 2001. Of the 4,558 milestones completed, 3,639, or about 80 percent, were finished by the original due date for the milestone. The remainder of the completed milestones were finished either after the original due date had passed or on a renegotiated due date, but DOE reported that the regulators considered the milestones to be met. Currently, DOE has agreed to complete at least 2,400 additional milestones in the future. However, the actual number of milestones DOE will need to complete will likely be higher, as milestones will be added with changes in cleanup strategies or as new work is identified. Most of the milestones DOE must meet are contained in the compliance agreements at its six largest sites—Hanford, Savannah River, Idaho Falls, Rocky Flats, Oak Ridge, and Fernald. These six DOE sites are important because about two-thirds of DOE’s cleanup funding goes to them. These sites reported completing a total of 2,901 of their 4,262 milestones and met the original completion date for the milestones an average of 79 percent of the time. As table 2 shows, this percentage varied from a high of 95 percent at Rocky Flats to a low of 47 percent at Savannah River. Besides the 1,334 milestones yet to be completed, additional milestones will be added in the future. For several reasons, DOE’s success in completing milestones on time is not a good measure of progress in cleaning up the weapons complex. Specifically: Many of the milestones do not indicate what cleanup work has been accomplished. 
For example, many milestones require completing an administrative requirement that may not indicate what, if any, actual cleanup work was performed. At DOE’s six largest sites, DOE officials reported that about 73 percent of the 2,901 schedule milestones completed were tied to administrative requirements, such as obtaining a permit or submitting a report. Some agreements do not have a fixed number of milestones, and additional milestones are added over time as work scope is more fully defined. For example, one of Idaho’s compliance agreements establishes milestones for remedial activities after a record of decision has been signed for a given work area. Four records of decision associated with the agreement have not yet been approved. Their approval will increase the number of enforceable milestones required under that agreement. Because the total number of milestones associated with those types of agreements is not yet known, DOE’s overall progress in accomplishing the cleanup is difficult to determine. Many of the remaining milestones are tied to DOE’s most expensive and challenging cleanup work, much of which still lies ahead. Approximately two-thirds of the estimated $220 billion cost of cleaning up DOE sites will be incurred after 2006. DOE has reported that the cleanup activities remaining to be done present enormous technical and management challenges, and considerable uncertainties exist over the final cost and time frame for completing the cleanup. Although schedule milestones are of questionable value as a measure of cleanup progress, the milestones do help regulators track DOE’s activities. Regulators at the four sites we visited said that the compliance agreements they oversee and the milestones associated with those agreements provide a way to bring DOE into compliance with existing environmental laws and regulations. 
They said the agreements also help to integrate the requirements that exist under various federal laws and allow regulators to track annual progress against DOE’s milestone commitments. Regulators have generally been flexible in agreeing with DOE to change milestone dates when the original milestone cannot be met. DOE received approval to change milestone deadlines in over 93 percent of the 1,413 requests made to regulators. Only 3 percent of DOE’s requests were denied. Regulators at the four sites we visited told us they prefer to be flexible with DOE on accomplishing an agreement’s cleanup goals. For example, they generally expressed willingness to work with DOE to extend milestone deadlines when a problem arises due to technology limitations or engineering problems. Because regulators have been so willing to adjust milestones, DOE officials reported missing a total of only 48 milestones, or about 1 percent of milestones that have been completed. Even in those few instances where DOE missed milestone deadlines and regulators were unwilling to negotiate revised dates, regulators have infrequently applied penalties available under the compliance agreements. DOE reported that regulators have taken enforcement actions only 13 times since 1988 when DOE failed to meet milestone deadlines. These enforcement actions resulted in DOE paying about $1.8 million in monetary penalties, as shown in table 3. In addition to or instead of regulators assessing monetary penalties, several DOE sites agreed to other arrangements valued at about $4 million. For example, for missing a milestone to open a transuranic waste storage facility at the Rocky Flats site, the site agreed to provide a $40,000 grant to a local emergency planning committee to support a chemical safety in schools program. At the Oak Ridge site, because of delays in operating a mixed waste incinerator, site officials agreed to move up the completion date for $1.4 million worth of cleanup work already scheduled. 
Also, at three sites—Paducah, Kentucky; Lawrence Livermore Main Site, California; and Nevada Test Site, Nevada—the regulators either did not impose penalties for missed milestones or the issue was still under discussion with DOE. While the consequences so far of not meeting schedule milestones have been few, regulators at individual sites may be less tolerant if DOE’s level of effort declines at their site. Regulators at the four sites we visited told us that while they were willing to renegotiate milestone deadlines for technical uncertainties, they were far less inclined to be flexible if delays occurred because DOE did not provide the funding needed to accomplish work by the dates agreed to in the compliance agreements. The federal and state regulators told us that they prefer to be flexible and work with DOE to renegotiate milestone deadlines that will allow DOE to develop appropriate strategies to accomplish the work. However, these regulators also noted that a lack of funding was not a valid reason for DOE to avoid meeting its compliance requirements. At DOE’s Idaho Falls site, DOE has chosen not to pursue potentially less expensive ways to accomplish cleanup work in order to comply with a court-ordered milestone for shipment of wastes off-site. For example, DOE agreed with its regulators to characterize and prepare for shipment off-site about 15,000 barrels of untreated transuranic waste by December 31, 2002. This milestone is part of an agreement that also allows separate shipments of spent nuclear fuel from Navy ships to be received at DOE’s Idaho Falls site. According to a February 1999 report by its inspector general, DOE could save about $66 million by deferring the processing and shipment off-site of the transuranic waste until a planned on-site treatment facility was completed, thus reducing the waste volume and the cost to prepare it for shipment. 
But doing so would have caused DOE to miss the deadline set in the site’s compliance agreement to ship the waste from the state. Although missing the deadline carries no monetary penalties under this agreement, missing it would allow the state to suspend shipments of DOE spent nuclear fuel into the Idaho site for storage. To avoid this possibility, DOE decided not to wait until March 2003 or later, when it estimated a new treatment facility would be operational that could prepare the waste for shipment at a substantially reduced cost. Instead, DOE chose to comply with the milestone and to characterize, repackage, and ship the waste without treatment, even though this was the more expensive option. The president’s budget proposal for DOE, which is the version of the DOE budget submitted to the Congress, does not specifically identify the cost of complying with compliance agreements. DOE is not required to provide this information. As part of formulating their budget requests for DOE headquarters, individual DOE sites go through a process that includes developing compliance estimates. However, in the process that DOE headquarters uses to finalize the DOE-wide budget request, the site-level estimates become absorbed without specific identification into broader budget considerations that revolve around DOE-wide funding availability and other needs. As DOE headquarters officials adjust the budget amounts in the process of reconciling various competing funding needs, the final budget submitted to the Congress has, with few exceptions, no clear relationship to the amounts sites estimated were needed to fund compliance requirements. Even if it were possible to trace this relationship in the final budget, the figure would have limited significance, because sites’ compliance cost estimates are based primarily on the expected size of the budget. 
If the funding sites receive is insufficient to accomplish all of the compliance activities planned for that year, sites must decide which activities to defer to future years. If sites receive more funding than anticipated in a particular year, they have an opportunity to increase the amount of money spent on compliance requirements. The president’s budget submitted to the Congress does not provide information on the amount of funding requested for DOE’s compliance requirements. DOE sites prepare budget estimates that include compliance cost estimates and submit them for consideration by DOE headquarters. DOE headquarters officials evaluate individual site estimates and combine them into an overall DOE-wide budget, taking into account broader considerations and other priorities that DOE must address as part of the give and take of the budget process. The budget sent to the Congress has summary information on DOE’s programs and activities, but it provides no information on the portion of the budget needed to fund compliance requirements. DOE is not required to develop or present this information to the Congress. The president’s budget typically states that the DOE funding requested is sufficient to substantially comply with compliance agreements, but the total amount of funding needed for compliance is not developed or disclosed. Officials at DOE headquarters told us that they did not think information on funding to meet compliance requirements was needed in the president’s budget. They noted that budget guidance from the Office of Management and Budget does not require DOE to develop or present information on the cost of meeting compliance requirements, and they said doing so for the thousands of milestones DOE must meet would be unnecessarily burdensome. 
They said their approach has been to allocate funds appropriated by the Congress and make it the sites’ responsibility to use the funds in a way that meets the compliance agreement milestones established at the site level. Although DOE is not required to identify its compliance costs in the budget request that goes to the Congress, DOE does develop this information at the site level. This occurs because many of the compliance agreements require DOE to request sufficient funding each year to meet all of the requirements in the agreements. Also, DOE must respond to Executive Order 12088, which directs executive agencies to ensure that they request sufficient funds to comply with pollution control standards. Accordingly, each year DOE’s sites develop budget estimates that also identify the amount needed to meet compliance requirements. The sites’ process in developing these compliance estimates shows that a compliance estimate is a flexible number. DOE sites develop at least two budget estimates each year, and each estimate includes an amount identified as compliance requirements. Two budget estimates typically completed by the sites each year are the “full requirements” estimate and the “target” estimate. The full requirements estimate identifies how much money a site would need to accomplish its work in what site officials consider to be the most desirable fashion. The target estimate reflects a budget strategy based primarily on the amount of funding the site received the previous year and is considered a more realistic estimate of the funding a site can expect to receive. For each of these budget estimates, DOE sites also include an estimate of their compliance costs. As a result of this process, DOE sites usually have different estimates of their compliance costs for the same budget year. Table 4 shows how the compliance cost estimates related to compliance agreements changed under different budget scenarios. 
The multiple estimates of compliance costs developed by DOE sites indicate that DOE sites have alternative ways of achieving compliance in any given year. When we asked DOE officials to explain how the sites can have different estimates of the cost of meeting compliance requirements in the same year, they said that how much DOE plans to spend on compliance activities each year varies depending on the total amount of money available. Because many of the compliance milestones are due in the future, sites estimate how much compliance activity is needed each year to meet the future milestones. If sites anticipate that less money will be available, they must decide what compliance activities are critical for that year and defer work on some longer-term milestones to future years. On the other hand, if more money is available, sites have an opportunity to increase spending on compliance activities earlier than absolutely necessary. DOE is concerned that deferring activities that support milestones in future years may cause future milestones to be missed or renegotiated. In general, the sites’ target estimates and actual funding received have been below the sites’ full requirements estimates. DOE officials in headquarters and the sites we visited are concerned that recurring years of funding below the “full requirements” level could result in a growth of future funding needs that eventually may cause DOE to fail to meet milestone dates and/or require it to renegotiate the milestones. As an alternative to receiving more funding, DOE occasionally is able to identify operational efficiencies that accomplish the work for less money. DOE officials also acknowledged that DOE’s current initiative to reassess its overall cleanup approach may result in identifying alternative cleanup approaches that could eliminate the need to perform some of the future cleanup work that has been deferred. 
Compliance agreements are site-specific and are not intended as a way to manage environmental risks across DOE’s many sites. The agreements generally reflect cleanup priorities established by local stakeholders and set out a sequence for accomplishing the work. Risk is one factor considered in sequencing the cleanup work at the sites, but other factors such as demonstrating cleanup progress and reducing the overall cost of maintaining facilities are also considered. DOE has not developed a comprehensive, relative ranking of the risks that it faces across its sites; as a result, it has no systematic way to make decisions among sites based on risk. DOE has tried to develop such a methodology in the past but has been unsuccessful in doing so. Instead, DOE has provided relatively stable funding to its sites each year and generally allowed local stakeholders to determine their priorities for sequencing work at the sites. This approach may change: the department’s recently announced initiative to improve the performance of the environmental management program includes, as a key step, developing a risk-based cleanup strategy. DOE is currently evaluating how best to proceed in developing the risk-based strategy. DOE’s compliance agreements focus on environmental issues at specific sites. Because they are site-specific and do not include information on the risks being addressed, the agreements do not provide a means of prioritizing among sites and, therefore, do not provide a basis for decision-making across all DOE sites. For example, a compliance agreement at Savannah River focuses on achieving compliance with applicable CERCLA and RCRA requirements but does not specify the level of risks being addressed by specific cleanup activities. In developing the compliance agreements, risk is only one of several factors considered in setting agreement milestones. 
Other factors include the preferences and concerns of local stakeholders, business and technical risk, the cost associated with maintaining old facilities, and the desire to achieve demonstrable progress on cleanup. The schedules for when and in what sequence to perform the cleanup work reflect local DOE and stakeholder views on these and other factors. For example, Savannah River regulators told us that they were primarily concerned that DOE maintain a certain level of effort linked to the compliance agreement and they expected DOE to schedule this work to most efficiently clean up the site. DOE developed a decision model to determine how to allocate its cleanup dollars at Savannah River to achieve this efficiency. A group of outside reviewers assessing the system at the request of site management concluded that the model was so strongly weighted to efficiency that it was unlikely that serious risks to human health or the environment could alter the sequencing of work. DOE officials said they revised the model so that serious risks receive greater emphasis. In response to concerns expressed by the Congress and others about the effectiveness of the cleanup program, DOE has made several attempts to develop a national, risk-based approach to cleanup. As early as 1993, the Congress was urging DOE to develop a mechanism for establishing priorities among competing cleanup requirements. In 1995, we reported that unrealistic cleanup plans had impeded DOE’s progress and that DOE needed to adopt a national risk-based cleanup strategy. DOE’s efforts to do so occurred over several years. For example:
• In 1995, DOE developed risk data sheets as part of the budget development process. First used to develop the budget estimate for fiscal year 1998, the risk data sheets were used to assign scores based on such elements as public and worker health and environmental protection. The approach suffered from data limitations, poor definitions of the activities, inconsistent scoring of risk, and inadequate involvement with stakeholders. Finally, in 1997 DOE abandoned this effort.
• In 1997, DOE established risk classifications as part of its project baseline summaries. The project baseline summaries contained a component that addressed each project’s environmental risk. However, DOE did not have a clear basis for classifying risks, and the effort was not implemented consistently or generally accepted by DOE field staff. After 1998, this information was no longer developed.
• In 1999, DOE pilot tested the use of site risk profiles at 10 DOE offices. The profiles were intended to provide risk information about the sites, make effective use of existing data at the sites, and incorporate stakeholder input. However, reviewers found that the site profiles failed to adequately address environmental or worker risks because the risks were not consistently or adequately documented. In 2001, DOE eliminated a support group responsible for assisting the sites with this effort, and the risk profiles are generally no longer being developed or used.
A 1999 DOE-funded study to evaluate its efforts to establish greater use of risk-based decision making concluded that none of the attempts had been successful. Common problems identified by the study included poor documentation of risks and inconsistent scoring of risks between sites. The study reported that factors contributing to the failure of these efforts included a lack of consistent vision about how to use risk to establish work priorities, the lack of confidence in the results by DOE personnel, the unacceptability of the approaches to stakeholders at the sites, and DOE’s overall failure to integrate any of the approaches into the decision-making process. 
However, the study concluded that the use of risk as a criterion for cleanup decision-making across DOE’s sites was not only essential but also feasible and practical, given an appropriate level of commitment and effort by DOE. Without a national, risk-based approach to cleanup in place, DOE’s budget strategy has been to provide stable funding for individual sites and let the sites determine what they needed most to accomplish. For example, over the last 5 years, funding for Savannah River has ranged from $1.1 billion to $1.2 billion, and Rocky Flats received from $621 million to $665 million. DOE’s Associate Deputy Assistant Secretary for Policy, Planning, and Budget told us that this approach allowed sites to allocate their funding based on their site-specific risk, compliance, and closure objectives. DOE plans to shift its cleanup program to place greater focus on rapid reduction of environmental risk. In February 2002, DOE released a report describing numerous problems with the environmental management program and recommending a number of corrective actions. The report concluded that, among other things, the cleanup program was not based on a comprehensive, coherent, technically supported risk prioritization; it was not focused on accelerating risk reduction; and it was not addressing the challenges of uncontrolled cost and schedule growth. The report recommended that DOE, in consultation with its regulators, move to a national strategy for cleanup. In addition, the report noted that the compliance agreements have failed to achieve the expected risk reduction and have sometimes not focused on the highest risk. The report recommended that DOE develop specific proposals and present them to the states and EPA with accelerated risk reduction as the goal. DOE’s new initiative provides additional funds for cleanup reform and is designed to serve as an incentive to sites and regulators to identify accelerated risk reduction and cleanup approaches. 
DOE’s fiscal year 2003 budget request includes $800 million for this purpose. Moreover, the Administration has agreed to support up to an additional $300 million if needed for cleanup reforms. The set-aside would come from a reduction in individual site funding levels and an increase in the overall funding level for the cleanup program. The money would be made available to sites that reach agreements with federal and state regulators on accelerated cleanup approaches. Sites that do not develop accelerated programs would not be eligible for the funds. As a result, sites that do not participate could receive less funding than in past years. One initial response has been at Hanford, where DOE and the regulators signed a letter of intent in March 2002 to accelerate cleanup at the site by 35 years or more. DOE and the regulators agreed to consider the greatest risks first as a principle in setting cleanup priorities. They also agreed to consider, as targets of opportunity for accelerated risk reduction, 42 potential areas identified in a recent study at the site. While accelerating the cleanup may hold promise, Hanford officials acknowledged that much technical, regulatory, and operational work is required to actually implement the proposals in the new approach. DOE is proceeding with the selection and approval of accelerated programs at the sites, as well as identifying the funding for those accelerated programs. At the same time, DOE is considering how to best develop a risk-based cleanup strategy. DOE’s Assistant Secretary for Environmental Management said that in developing the risk-based approach, DOE should use available technical information, existing reports, DOE’s own knowledge, and common sense to make risk-based decisions. 
Because DOE’s approach to risk assessment is under development, it is unclear how effective it will be or whether in implementing it, DOE will be able to overcome the barriers encountered during past efforts to formalize a risk-assessment process. In the interim, DOE headquarters review teams were evaluating the activities at each site and were qualitatively incorporating risk into those evaluations. Compliance agreements have not been a barrier to previous DOE management improvements, but it is not clear if the agreements will be used to oppose proposed changes stemming from the February 2002 initiative. In the past, DOE has tried other management initiatives, within the framework of the compliance agreements. These initiatives generally have not involved significant changes in cleanup approach or the potential for significant reductions in funding at individual sites. We found no evidence that the compliance agreements were a barrier to implementing such initiatives or were a factor in their success or failure. Instead, the agreements have been used primarily to hold DOE accountable, through enforceable milestones, for cleaning up environmental hazards using whatever management strategy DOE employed to do so. The outcome could be different if regulators at individual sites perceive DOE’s latest initiative as an attempt to reduce the level of cleanup activity at the sites. Although DOE generally did not involve regulators in developing its February 2002 initiative to implement faster, risk-based cleanup of its sites, based on our discussions with regulators at several sites, it is unlikely that the compliance agreements would be a barrier to the initiative, as long as DOE’s approach is consistent with environmental laws and results in no reduction in funding at individual sites. 
However, the discussions indicated that DOE could encounter opposition if its realignment of cleanup priorities results in a site’s receiving significantly less funding and therefore accomplishing considerably less work than called for in the agreement. Parties to the compliance agreements indicated that if this occurs, they may not be willing to negotiate with DOE to extend schedule milestones further. In addition, it is unclear if regulators will use the compliance agreements to resist other aspects of DOE’s initiative, such as reclassifying waste to different risk categories in order to increase disposal options. DOE has implemented or tried to implement a number of management initiatives in recent years to improve its performance and address uncontrolled cost and schedule growth. For example, in 1994 it launched its contract reform initiative, in 1995 it established its privatization initiative, and in 1998 it implemented its accelerated path-to-closure initiative. These initiatives affected how DOE approached the cleanup work, the relationship DOE had with its contractors, and in some cases the schedule for completing the work. Based on reviewing past evaluations of these initiatives and discussions with DOE officials, it appears that DOE proceeded with these initiatives without significant resistance or constraints as a result of the compliance agreements. For example:
• DOE’s contract reform initiative involved a number of separate efforts, including greater use of fixed-price contracts and performance-based contracts, and a shift to greater use of management and integrating contracts that encourage using a greater number of specialized contractors and an integrating contractor to coordinate the various activities. DOE has implemented these reforms at many of its sites, including all of its large cleanup sites. Although the overall result of DOE’s contract reform initiative is difficult to measure, the various contracting reforms occurred within the framework of the existing cleanup approaches reflected in the compliance agreements in effect at those sites.
• DOE’s privatization initiative was intended to reduce the cost of cleanup by attracting “best in class” contractors with fixed-price contracts that required contractors to design, finance, build, own, and operate treatment facilities and to receive payments only for successfully treating DOE’s wastes. Although this approach required substantially different contracting and financing arrangements and there was considerable uncertainty about its eventual success, DOE implemented privatization projects at a number of its major sites, even though doing so sometimes required delaying or renegotiating near-term milestones in the compliance agreements. For example, to implement a privatization contract for the Hanford tank waste project, DOE renegotiated several milestones with its regulators. The state of Washington and EPA eventually agreed to the changes, even though they had concerns about DOE’s approach. This privatization project failed a few years later, primarily because of significant cost growth, poor contractor performance, and inadequate DOE management.
• DOE’s path-to-closure initiative was aimed at developing more efficient ways to conduct cleanup and, as a result, accelerate cleanup and closure of DOE sites. DOE’s goal was to clean up 41 of its 53 remaining contaminated sites by 2006. It proceeded to establish new cleanup and closure goals at many of its sites within the framework of the existing compliance agreements. For example, the planned closure of the Rocky Flats site was changed from 2010 to 2006 through a revision of the project baseline and award of a new closure contract. 
State of Colorado and EPA regulators supported those changes, even though they were not consistent with milestone dates in the site agreement. Regulators at the DOE sites we visited acknowledged that compliance agreements have not been a barrier to DOE’s management improvement initiatives. They said that although the agreements hold DOE accountable for its cleanup responsibilities, the agreements do not prescribe how DOE should manage its program. Several milestones in the compliance agreements have been renegotiated because DOE wanted to incorporate changes in its management approach with a resulting effect on specific projects. For example, DOE’s spent nuclear fuel project at Hanford is an effort to stabilize about 2,100 metric tons of highly radioactive spent fuel stored in aging basins and move the stabilized fuel farther from the Columbia River. Regulators agreed to revised interim milestones for the work after DOE proposed changes that would save money and reduce the risk of radiation exposure to workers. DOE’s management reform initiative is in the early stages, and site-specific strategies are only beginning to emerge. DOE has begun discussions with officials in several states to implement this accelerated initiative. However, it is unclear how the site compliance agreements will affect implementation of DOE’s latest cleanup reforms. For example, it is not yet known how many sites will participate in DOE’s initiative and how many other sites will encounter cleanup delays because of reduced funding. Parties to the agreements at the sites we visited were supportive of DOE efforts to improve management of the cleanup program, but expressed some concerns about proposals stemming from the February 2002 review of the program. 
They said that DOE’s efforts to accelerate cleanup and focus attention on the more serious environmental risks are welcomed and encouraged because such initiatives are consistent with the regulators’ overall goals of reducing risks to human health and the environment. Most regulators added, however, that DOE generally had not consulted with them in developing its reform initiative and the regulators were concerned about being excluded from the process. They also said that DOE’s initiative lacked specifics and that they had numerous questions about the criteria DOE will use to select sites and the process DOE will follow at those sites to develop an implementation plan to accelerate cleanup and modify cleanup approaches. Most regulators said they would not view favorably any attempt by DOE to avoid appropriate waste treatment activities or to significantly delay treatment by reducing funding available to sites. In such a case, these regulators are likely to oppose DOE’s initiative. They told us that they most likely would not be willing to renegotiate milestones in the compliance agreements if doing so would lead to delays in the cleanup program at their sites. In addition, these regulators said that if DOE misses the milestones after reducing the funding at individual sites, they would enforce the milestones in the compliance agreements. The effect of compliance agreements on other aspects of DOE’s initiative, especially its proposal to reclassify waste into different risk categories to increase disposal options, is also unclear. Some of the proposed changes in waste treatment, such as eliminating the need to vitrify at least 75 percent of the high-level waste, which could result in disposing of more of the waste at DOE sites, would signal major changes in DOE assumptions about acceptable waste treatment and disposal options. 
For example, DOE is considering the possibility of reclassifying much of its high-level waste as low-level mixed waste or transuranic waste based on the risk attributable to its actual composition. Most of the high-level waste is located at DOE’s Hanford site. In addition, DOE officials at Hanford are considering relaxing the requirement to transport a portion of its transuranic waste to New Mexico, allowing instead for disposal on-site. While these options could reduce treatment and disposal costs and time frames, DOE would need to obtain regulatory and stakeholder agreement to alter key commitments. These types of changes in treatment approach would require modifications to current compliance agreements. It is unclear whether DOE’s regulators will be supportive of these changes. At Hanford, the regulators have agreed to discuss these types of changes in cleanup strategy. However, at all four sites we visited, regulators said that, although they supported DOE efforts to improve its operations, they also wanted DOE to meet its compliance commitments. The regulators commented that it is unclear how DOE’s proposed initiatives will be implemented, what technologies will be considered, and whether the changes will result in reduced cost and accelerated cleanup while adequately protecting human health and the environment. DOE generally did not seek input from site regulators or other stakeholders when developing its latest initiative. DOE’s review team leader said that at the time the review team visited individual sites, the team had not formulated its conclusions or recommendations and so did not seek regulator input. Furthermore, the team leader said that, during the review, internal discussions were being held within DOE about improving ineffective cleanup processes, such as contracting procedures. 
To include regulators on the review team during these discussions, according to the team leader, could have created the impression that the criticism of DOE processes was regulator driven rather than reflecting the views of DOE and contractor staff. According to the Associate Deputy Assistant Secretary for Planning and Budget, since the proposals coming from the review team were made public in February, DOE has held discussions with regulators at all sites and headquarters about implementing the proposals. DOE carries out its cleanup program in a complex legal and regulatory environment. Compliance agreements are one mechanism used to organize these legal and regulatory requirements and set priorities for cleanup at specific sites. As such, the agreements are not a useful tool, nor were they intended to be, for managing DOE’s cleanup program from a national, system-wide perspective. It is unclear if compliance agreements will be a potential barrier to DOE’s current national cleanup reform initiative. This initiative involves placing a greater focus on rapidly reducing environmental risks and, as a result, restructuring how DOE allocates its funding for cleanup across its sites. In some cases DOE is also considering dramatically different cleanup approaches than regulators and other stakeholders have come to expect. DOE’s compliance agreements could be a potential barrier to these changes, particularly at those sites where funding may be reduced as a result of implementing the new initiatives or where a significantly different approach is being proposed. DOE faces two main challenges in going forward with its initiative. The first is following through on its plan to develop and implement a risk- based method to prioritize its various cleanup activities. Given past failed attempts to implement a risk-based approach to cleanup, management leadership and resolve will be needed to overcome the barriers encountered in past attempts. 
The second challenge for DOE is following through on its plan to involve regulators in site implementation plans. DOE generally did not involve states and regulatory agencies in the development of its management improvement initiative. Regulators have expressed concerns about the lack of specifics in the initiative, about how implementation plans will be developed at individual sites, and about proposals that may delay or significantly alter cleanup strategies. Addressing both of these challenges will be important to better ensure that DOE’s latest management improvement initiative will achieve the desired results of accelerating risk reduction and reducing cleanup costs. We provided a copy of our draft report to the Department of Energy for review and comment. DOE’s Assistant Secretary for Environmental Management responded that our draft report accurately presented information on the current status of compliance agreements, and generally agreed with the findings of the report. In addition, DOE provided technical clarifications and corrections to our report, which we incorporated as appropriate. We performed our review from July 2001 through May 2002 in accordance with generally accepted government auditing standards. This appendix presents information provided by DOE and from a questionnaire we administered to each of the operations offices that have sites with compliance agreements. The agreements are categorized into three types: “1” indicates agreements specifically required by section 120(e)(2) of CERCLA or by RCRA (as amended by section 105 of the Federal Facility Compliance Act of 1992); “2” indicates court-ordered agreements resulting from lawsuits; and “3” indicates all other agreements. We defined a “compliance agreement” as a legally enforceable agreement between DOE and another party or parties that contained enforceable milestones defining cleanup activities that DOE must achieve by specified or ascertainable dates and that are funded by DOE’s EM program. 
[Appendix table entry (fragment): parties to the agreement are DOE, the New Mexico Environment Department, and the University of California; the agreement covers disposal of covered mixed wastes at the laboratory (incorporates the Site Treatment Plan); number of milestones completed on original date: 5. Table notes: if an agreement was signed on multiple dates by the various parties, the latest date was used in this appendix; this agreement has been completed; the site was transferred to DOE’s Office of Science and receives no further environmental management funding.] To determine the types of compliance agreements and what progress DOE is making in meeting milestone commitments, we administered a questionnaire to all DOE sites with compliance agreements funded by DOE’s Environmental Management program. We defined a “compliance agreement” as a legally enforceable agreement between DOE and another party or parties that contained enforceable milestones defining cleanup activities that DOE must achieve by specified or ascertainable dates and that are funded by DOE’s EM program. To determine the universe of compliance agreements, we obtained a list of all EM-funded compliance agreements from DOE. We also compared this list to EM sites listed on the EPA’s National Priorities List, which are required to have compliance agreements implementing CERCLA requirements. We discussed these agreements with staff from the DOE Chief Counsel’s office and the EM program to validate both the number of sites with agreements and the number of agreements. We removed from our study any agreement that did not contain enforceable milestones that DOE was required to meet. In addition, we did not include RCRA permits in our universe because (1) the great majority of DOE’s cleanup work is covered under compliance agreements and very little of that work is required under RCRA permits and (2) cleanup activities required as a condition of RCRA permits are generally also included in compliance agreements at DOE sites. 
Some of the compliance agreements we identified had been subsequently amended or replaced by other agreements. We included in the universe milestones from the original agreements if they were unique to those agreements and not repeated in the subsequent agreements. In addition, five of the agreements are no longer active because all the milestones associated with the agreements had been completed. We included those agreements and their milestones in our study. For each DOE site having one or more compliance agreements, we requested, for each agreement, information on the type of agreement, the scope of cleanup activities covered by the agreement, and information on the schedule milestones in the agreement. We also asked officials at each site to verify that the list of compliance agreements for their site was complete. We did not independently verify the accuracy of the information provided by each DOE site, but at the four sites we visited, we selectively tested the reasonableness of the information by reviewing site records and discussing compliance agreements with DOE officials. At some sites, DOE officials were unable to provide exact numbers, especially concerning the number of milestone dates that had been changed and the number of milestones that would be completed in the future. In these cases, DOE officials said the information provided represented their best estimates. To determine the extent to which compliance with DOE agreements is reflected in the DOE budget submitted to the Congress, we reviewed numerous budget formulation documents at DOE sites and at DOE headquarters, budgeting guidance and standards, and we analyzed information from DOE’s integrated planning and budgeting system. We visited four of DOE’s largest environmental management program offices—the Richland, Idaho, Oak Ridge, and Savannah River operations offices—to document how these offices include compliance agreement requirements in their budget submittals to DOE headquarters. 
Although sites develop estimates of the compliance costs associated with compliance agreements as well as federal, state, or local environmental laws and regulations, in this report, compliance costs are limited to those costs associated with DOE’s compliance agreements. To determine how DOE headquarters uses site budget submittals, including compliance requirements, in its final budget submittal to the Congress, we reviewed budget documentation and interviewed officials at DOE’s headquarters office of the environmental management program and the Office of Management and Budget. To identify whether compliance agreements could be used to prioritize cleanup work across DOE sites, we reviewed the compliance agreements, interviewed DOE headquarters and site staff involved in the EM program to determine how environmental risks are considered in carrying out the cleanup program, and discussed the agreements with federal and state regulators at the four sites we visited. We also reviewed various studies and reports prepared by DOE and other organizations that discussed risk-based decision-making in the EM program. To assess the implications of compliance agreements on DOE’s initiatives to improve its EM program, we discussed the initiatives with DOE managers and staff in headquarters, including the leader of the team that produced DOE’s February 2002 report, A Review of the Environmental Management Program. We also reviewed the proposal coming out of that report and discussed it with staff at the four field offices we visited as well as the regulators we interviewed about those sites. In addition, we reviewed other related documents and reports as well as reports issued by us and others on past EM management reform initiatives attempted or implemented by DOE. In addition to the person named above, Chris Abraham, Doreen Feldman, Rachel Hesselink, Rich Johnson, Nancy Kintner-Meyer, Tom Perry, Ilene Pollack, Laura Shumway, and Stan Stenersen made key contributions to this report.
The Department of Energy (DOE) spends between $6 billion and $7 billion annually to store, clean up, and monitor nuclear and hazardous waste at its sites. Various federal and state agencies with jurisdiction over environmental and health issues related to the cleanup are therefore involved in regulating and overseeing DOE's activities. Much of the cleanup activity has been implemented under compliance agreements between the DOE and these agencies. There are three types of compliance agreements governing DOE's sites: (1) legal requirements that address the cleanup of federal sites on the National Priorities List of the nation's most serious hazardous waste sites or that address treatment and storage of mixed hazardous and radioactive waste at DOE facilities; (2) court-ordered agreements resulting from lawsuits initiated primarily by states; and (3) other agreements, such as state administrative orders enforcing state hazardous waste management laws, that do not fall into the first two categories. Through the end of fiscal year 2001, DOE had completed 4,500 milestones, although for several reasons, the number of milestones is not a good indication of cleanup progress. Many of the milestones are administrative in nature, such as issuing a report. Also, some agreements allow for adding more milestones as time goes on, and because the total number of milestones associated with those agreements is not yet known, progress is difficult to determine. Finally, many of the milestones not yet due involve some of the most complex and costly cleanup work to be undertaken. The cost of complying with these agreements is not specifically identified in the DOE budget. 
Individual DOE sites include the cost of the compliance when preparing their initial budget requests, but as DOE headquarters officials adjust individual site estimates to reflect national priorities and to reconcile various competing demands, the final budget does not identify what portion of the request reflects compliance requirements. However, compliance agreements are site-specific and are not intended to provide a mechanism for DOE to use in prioritizing risks among various sites.
The JASSM program began in 1995 and was to be an affordable, joint program between the Air Force and the Navy to meet an urgent need with a streamlined acquisition strategy. JASSM's predecessor, TSSAM, was also planned to be a low-cost cruise missile able to deliver several different munitions. However, after several unsuccessful flight tests, the lead contractor for TSSAM initiated a reliability improvement program to address higher reliability requirements, but demonstrating whether the problems had been resolved would have taken several years and cost more than $300 million. As costs for TSSAM increased, the Army ended its participation in the program, and after a period of declining budgets and changes to threat scenarios, a cost and operational effectiveness analysis was completed, which showed that other options might be adequate to meet national security requirements. In 2004, the Navy left the JASSM program, citing it as redundant with other systems in its inventory. JASSM was expected to require minimal maintenance while in storage, and its life-cycle cost was to be controlled through improved reliability and supportability achieved during development. To execute the acquisition strategy and meet cost and schedule goals, the Air Force used Total System Performance Responsibility (TSPR). TSPR generally gives the contractor total responsibility for the entire weapon system and for meeting DOD requirements, with minimum government oversight. The Air Force made initial JASSM requirements flexible to allow Lockheed Martin clear control of the design and product baseline. Program officials stated this strategy was based on other successful programs, such as the Joint Direct Attack Munition program, and would allow the contractor flexibility to make changes to meet cost and schedule deadlines without having to consult with the government. An example of this flexibility was the missile mission effectiveness requirement. 
The effectiveness requirement is the minimum number of missiles required to kill specified targets and was named as a key performance parameter, allowing trades between reliability, survivability, and lethality. In other words, if the program was successful at achieving high levels of survivability and lethality, reliability could remain low, even fluctuate, and still meet the stated parameters. Quantities for JASSM were established by reviewing the threshold targets and determining the number of missiles necessary to meet operational damage criteria, based on missile performance using the effectiveness requirement. Therefore, changes to reliability would affect the quantities necessary to meet requirements. As part of the program’s 1995 acquisition strategy, the Air Force received five proposals for JASSM and in 1996 selected Lockheed Martin and McDonnell Douglas to begin a 24-month risk-reduction phase. Following the risk-reduction phase, the Air Force planned 32 months for development and a total of 56 months from program start to full-rate production in 2001. The program planned for concurrent developmental and operational testing and evaluation with four flight tests planned before initial production. The Air Force planned to have nine fixed-price production lots from 2001 through 2009 totaling 2,400 baseline missiles with an initial program cost estimate of $2.2 billion (fiscal year 2010 dollars). A former Air Force official who was an early JASSM program manager stated the Air Force accepted Lockheed Martin’s proposal, which included favorable fixed-price contract prices for production lots 1 through 5 with the understanding that the prices would increase after Lot 5. JASSM’s acquisition strategy planned for a 74 percent unit cost increase between Lots 5 and 6. The cost increase between Lots 5 and 6 was to occur at a time when quantities were increasing. 
Despite this planned cost increase, the production unit costs would have remained within the Air Force’s acceptable range established before the competition and at much less cost than TSSAM. Further, the prices offered by Lockheed Martin for the first five production lots were below the Air Force’s desired cost range for the system. Air Force officials said the low costs contributed to Lockheed Martin’s selection. However, to maintain the benefits of this pricing, the quantities purchased by the Air Force had to remain within a certain range for each of the first 5 years. While the Air Force planned for some cost growth in the original acquisition strategy, the program’s cost grew much more than expected. For the first four production lots, the Air Force benefited from the favorable prices in the original contract. However, because of funding limitations, it was not able to procure the minimum missile purchase in Lot 5 and had to renegotiate this lot with Lockheed Martin. In doing so, Lockheed Martin was able to renegotiate Lot 5 prices based on its actual production costs—at over $1 million per missile. Air Force documentation indicates that previously negotiated unit prices for Lot 1 through Lot 5 were as much as 45 percent less than Lockheed Martin’s actual costs. Subsequent lots that had not been negotiated under the original contract similarly reflected an increase in price. Most of this cost growth took place prior to 2006, culminating in a critical Nunn-McCurdy unit cost breach late in 2006. According to program documents, several causes have been cited for the critical Nunn-McCurdy unit cost breach: an unrealistic cost estimate resulting from a flawed acquisition strategy; the addition of 2,500 more expensive JASSM-ER variants; the costly efforts to overcome reliability problems; and reduced annual production rates for a longer period. 
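The report's observation that early-lot negotiated prices were as much as 45 percent below Lockheed Martin's actual costs can be expressed as simple percentage arithmetic. The sketch below is illustrative only: the function name and the example negotiated price are assumptions, not report figures; the report states only the 45 percent gap and the renegotiated Lot 5 cost of over $1 million per missile.

```python
def implied_actual_cost(negotiated_price: float, pct_below_cost: float) -> float:
    """If a negotiated price is pct_below_cost less than the contractor's
    actual cost, the implied actual cost is price / (1 - pct_below_cost)."""
    return negotiated_price / (1.0 - pct_below_cost)

# Report figure: early-lot prices were as much as 45 percent below actual costs.
# For an assumed (hypothetical) negotiated price of $0.55 million per missile:
cost = implied_actual_cost(0.55, 0.45)
print(f"${cost:.2f} million")  # $1.00 million
```

Under that assumed price, the implied actual cost is consistent with the over-$1 million per missile figure cited for the renegotiated Lot 5.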
Following the critical Nunn-McCurdy unit cost breach, the Air Force halted production of the missiles until DOD certified that the program should continue. The Under Secretary of Defense for Acquisition, Technology, and Logistics (USD/ATL) found no lower cost alternatives and certified the program in 2008, despite the missile's higher than projected production costs. Since operational testing began in 2001, the reliability of the JASSM missile has been inconsistent. The Air Force flight-tested 62 baseline missiles from January 2001 through May 2007, resulting in 25 failures and 3 "no tests," which was a 58 percent reliability success rate. However, because the program's strategy allowed the contractor to manage to mission effectiveness by combining reliability with other factors, the 58 percent reliability rate was sufficient to meet mission effectiveness criteria. The Air Force tracked reasons for flight test failures, but was not part of the failure review boards until production Lot 5, 5 years after the start of operational testing. Air Force officials stated that until 2006, Lockheed Martin handled all flight test failure review determinations and took corrective actions internally, and the government was not heavily involved. During the Nunn-McCurdy certification process, USD/ATL directed the JASSM program to develop a reliability growth plan that would achieve 90 percent reliability for the baseline missile. The program set a goal of achieving this reliability rate by Lot 11, or fiscal year 2013. In addition, the JASSM-ER program set a reliability goal of 85 percent by Lot 4, or fiscal year 2014. In our 2000 report on JASSM, we recommended that the Secretary of Defense revise the acquisition strategy for the JASSM program to be more closely linked to demonstrating that the missile design is stable and can meet performance requirements before making the production decision. 
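The reliability rates quoted in this report exclude "no tests" from the denominator; a minimal sketch of that computation (the function name is illustrative, not from the report):

```python
def scored_reliability(flights: int, failures: int, no_tests: int) -> float:
    """Reliability over scored flights only: 'no tests' are excluded from
    the denominator, and successes are scored flights minus failures."""
    scored = flights - no_tests
    return (scored - failures) / scored

# Report figures: 62 baseline flight tests (January 2001 - May 2007),
# 25 failures, and 3 "no tests."
print(f"{scored_reliability(62, 25, 3):.0%}")  # 58%
```

With 3 of the 62 flights unscored, 34 successes out of 59 scored flights reproduce the 58 percent rate the report cites.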
DOD partially concurred with our recommendation, stating that its acquisition strategy is directly linked to knowledge points, that it is linked to specific criteria established for making the low-rate initial production decision, and that the contractor is required to meet these criteria. We concluded that, while the Air Force had taken steps to link production decisions for JASSM to knowledge, we did not believe that the specific criteria established to support a production decision were sufficient to minimize cost and schedule risk. In June 2010, the Secretary of Defense announced an initiative to restore affordability and productivity in defense spending. He stated that there is a need to abandon inefficient practices accumulated in a period of budget growth and learn to manage defense dollars in a manner that is "respectful of the American taxpayer at a time of economic and fiscal stress." He set a goal to save $100 billion over the course of the 5-year defense planning period. Subsequently, USD/ATL issued guidance on delivering better value to the taxpayer and improving the way DOD does business. That guidance indicated that budget savings could be found by eliminating unneeded and costly programs and activities as well as by conducting needed programs and activities more efficiently, such as by stabilizing production rates. In a September 14, 2010, memorandum, USD/ATL then provided specific guidance to acquisition professionals to achieve this mandate. That guidance included 23 principal actions to improve efficiency, including "Mandate affordability as a requirement" and "Drive productivity growth through Will Cost/Should Cost management." Since 2007, the Air Force has enhanced its oversight of the JASSM program and made significant investments to improve its reliability as directed by USD/ATL. 
As a result of increased reliability testing and investments in reliability initiatives, the Air Force has identified many of the root causes for flight test failures. Since then, design changes and other corrective actions have improved JASSM baseline’s test results significantly—now demonstrating 85 percent success. The JASSM-ER variant has done well thus far, with no scored failures during the first seven flight tests. However, while JASSM baseline missile reliability has improved, it is not expected to achieve the USD/ATL-required level of 90 percent until 2013, and its operational effectiveness has not yet been demonstrated either through operational testing or use in a combat operation. In 2004, after two back-to-back flight test failures, the Air Force formed a reliability enhancement team to address what it considered the loss of confidence in JASSM’s performance, OSD’s concerns about the program, and budget reductions. The team’s report stated that while JASSM’s development and reliability were within acceptable ranges when compared to other cruise missiles, the JASSM program should increase testing to discover additional weaknesses in design or production as well as increase confidence in the level of reliability achieved and tie those results to contractor incentives. In 2007, after direction from USD/ATL and the Air Force during the Nunn-McCurdy certification process, the program office updated the Joint Reliability and Maintainability Evaluation Team and Test Data Scoring Board charters to significantly expand their role in management of system development, manufacturing, configuration changes, and testing. Since 2007, as a result of the Air Force’s increased attention to reliability testing and investments in reliability initiatives, the program has identified many of the root causes for reliability failures. 
While there is no single cause behind JASSM flight test failures, common failures occurred across JASSM subsystems including navigation, flight control, and propulsion. Most of the corrective actions to address the causes of the flight test failures affect missile hardware, and many have been implemented in the current configuration for new production missiles. However, some flight test failure investigations are still ongoing. Those investigations are often difficult because of the lack of physical evidence after the flight test missile detonates on the White Sands missile range. As a result, identifying the root causes for failures was based on very extensive component testing at supplier facilities. Additionally, the root causes for several test failures were never conclusively determined, as the failures may have resulted from aircraft or user malfunctions. Efforts to address significant reliability problems found during testing have contributed greatly to JASSM's cost growth and schedule delays since the beginning of development. The Air Force has estimated that it may ultimately spend about $400 million through fiscal year 2025 on its reliability improvement initiatives. The Air Force has also increased lot acceptance testing of the fuses and implemented high-speed photography and screening improvements. In addition to forensic evaluation of the missile impact area, the Air Force employs visual inspections and a built-in test. The Air Force has also taken a variety of actions, in addition to flight testing, to improve JASSM's reliability, including the following initiatives. Increased Oversight: The Air Force and Lockheed Martin have begun a process verification program to ensure suppliers follow prime contractor specifications. According to Air Force officials, the process verification program has allowed the Air Force to avoid unforeseen costs as some missile parts have become obsolete. 
Further, officials stated that it allows the JASSM program to catch problems earlier and plan how to replace parts sooner. According to a program official, one process verification program team caught an obsolescence issue with a global positioning satellite receiver and was able to minimize the cost and production effect on the program. Missile Redesign: Program officials state that while wholesale missile redesign is not considered a cost-effective option, they are considering design changes and improvements at the component level. Increased Personnel: The program office has increased the number of government personnel supporting the process verification program and corrective action efforts. During Lot 1, the program had two staff members with production and manufacturing engineering expertise—by Lot 7, 22 staff members had such expertise. Improved Quality Assurance: In August 2006, the Air Force and Lockheed Martin implemented a quality assurance program. Lockheed Martin has implemented tests and improvement programs to increase user confidence in reliability and control costs. For example, Lockheed Martin officials stated that, to improve reliability, they have begun using a test that exposes electrical connections to higher voltages than they usually encounter during flight to make sure the wiring can handle a surge. Additionally, Lockheed Martin has increased the sample sizes of certain components it inspects and tests. Recent tests of JASSM have demonstrated increased reliability. Since the Air Force's reliability initiatives began in fiscal year 2007, the JASSM program has conducted 48 missile flight tests and 39 have been successful (2 were characterized as "no-test") for a reliability rate of 85 percent. The current focus of JASSM baseline testing has been on improving the reliability of the missile. In the most recent tests of the JASSM baseline missiles produced in 2008, 15 of 16 flight tests were considered successful. 
In the one failure, the warhead did not detonate, and the program is awaiting fuse recovery to determine the root cause. The JASSM-ER is in developmental testing. Developmental testing of JASSM-ER is primarily addressing the differences of JASSM-ER from the baseline system (i.e., larger engine and fuel tanks) and will verify integration on the B-1 aircraft. All seven test flights of JASSM-ER have been successful. The program office is planning three additional integrated JASSM-ER tests to be flown before a production decision is made. In 2007, after USD/ATL's decision to enhance JASSM reliability, the Air Force and Lockheed Martin agreed to focus on the inherent reliability of the missile and not take into account user error or platform malfunctions (i.e., carrier aircraft, aircrew instrumentation, range safety, etc.). Whereas operational testing is designed to evaluate the ability of JASSM to execute a mission, reliability testing is more narrowly focused on evaluating the missile's performance during the mission. While mission failures were counted against the program during initial testing, more recent mission test failures have been declared "no tests." For example, in early testing a B-52 software issue resulted in an aborted mission that was scored as a test failure; under current missile reliability definitions, that event would have been declared a no test. While recent flight testing of the baseline missile has shown improved missile reliability, the Air Force has not yet evaluated the operational effectiveness and suitability of the baseline JASSM with all corrective actions implemented. The JASSM program assesses operational effectiveness through operational testing, follow-on testing, and the weapon system evaluation program (routine tests of inventory assets). 
These flight tests assess operational effectiveness in realistic combat scenarios against targets by determining reliability, evaluating capabilities and limitations, identifying deficiencies, and recommending corrective actions. In operational testing, the JASSM baseline program flight tested 38 missiles from June 2002 through May 2007, resulting in 19 failures and 2 no tests. While these tests identified issues with missile reliability, they also identified issues related to the B-52 aircraft, aircraft software, and fuses, which negatively affected the operational effectiveness of the missile. This led to a 9-month suspension of testing in 2004 to address these issues. The improved JASSM baseline missile's suitability was assessed by Air Force testers in 2008 and it was characterized as suitable and likely to meet reliability goals; however, operational testing of the effectiveness of the improved missile has not yet been scheduled. Current projections of JASSM costs have increased by over 7 percent since the Nunn-McCurdy certification in 2008. When taking into consideration the pre-2008 cost growth, which included the cost of adding the JASSM-ER variant, JASSM has grown from a $2.2 billion to a $7.1 billion program. In addition, while it has initiated several cost control measures, the Air Force appears to have limited options to reduce JASSM costs. Moreover, several areas of risk could add to those costs. First, the Air Force has not been able to provide enough annual funding to support the annual procurement levels used as the basis for its 2008 program cost estimate. That has led to a less efficient production process and a longer production period (most recently extended 5 years to 2025). Second, until the Air Force evaluates the effectiveness of the inventory JASSM baseline missiles with corrective actions for previously identified hardware and software issues, their viability and military utility are in question. 
If inventory missiles are found not to have utility, they may need to be replaced. If retrofitted missiles are found to be effective, the Air Force may still have to find additional funding to complete the retrofit process. Third, the Air Force plans to conduct many more flight tests to improve JASSM reliability from 85 to 90 percent. Finally, in comparing the capabilities and cost of JASSM to several domestic and international missile systems in 2008, the Air Force assumed that JASSM would cost about $1 million per unit, which is about 40 percent less than currently expected. Compared to original program estimates, JASSM’s currently projected costs are much higher because of (1) higher than anticipated production costs, (2) longer production period, (3) the addition of the JASSM-ER variant, and (4) reliability improvement efforts. Through fiscal year 2010, about 75 percent of the planned JASSM quantities have yet to be procured and, as a result, most of the program costs have yet to be incurred. Following the critical Nunn-McCurdy unit cost breach, the Air Force halted production of the missiles until USD/ATL certified that the program should continue. USD/ATL found no lower cost alternatives and certified the program in 2008, despite the missile’s higher than projected production costs. As a part of our review, we examined the cost estimates used by OSD to certify the program following the critical Nunn-McCurdy unit cost breach. This estimate used the actual costs of the missile since JASSM was well into the production phase at the time. Overall, the Air Force’s cost estimate substantially met our best practice standards in our Cost Guide. For a more in-depth discussion of our review of this JASSM cost estimate, see appendix III. Since the Nunn-McCurdy certification in 2008, the growth in JASSM’s projected program cost has moderated, rising about $500 million (from $6.6 billion to $7.1 billion) through 2025. 
Reliability enhancements to the JASSM missile instituted in 2007 and additional reliability testing have added the majority of the increase in program costs. These enhancements were implemented to meet USD/ATL's 90 percent reliability goal which was set during the Nunn-McCurdy certification. Also, the Air Force decided to lengthen the program's procurement schedule by another 5 years, buying the same number of missiles over a longer time period. That reduces the efficiency of the production processes and adds inflation to the cost estimate. Currently, on a per unit basis, the average procurement unit cost of a JASSM missile is projected to be about $1.2 million. JASSM-ER is expected to cost about $200,000 more than the average, about $1.4 million per unit. Since 2008, the Air Force has added several measures to control costs in the JASSM program, but the effect of these measures is not yet clear. Examples of these measures include the following. Contract Incentives: The Air Force has begun using fixed-price incentive (firm target) contracts for each lot to produce the missiles for less than projected. Greater Insight into Actual Costs: Air Force officials have increased insight into Lockheed Martin's actual costs, which may make them more informed when negotiating new contracts. Program officials stated that, for example, they now know how many engineers are needed to perform a certain task and the number of hours it takes to assemble a missile. The Air Force can directly verify the costs charged by subcontractors. Increased Authority over Design: In recent contract negotiations, the Air Force gained approval authority over certain design changes that may affect current and future lots, including those that may increase cost, require retrofit, or affect safety. Previously, Lockheed Martin had full authority over most design changes. 
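The cost-growth figures cited in this section follow from straightforward arithmetic; a quick check using the report's dollar figures (the variable names are illustrative):

```python
# Program cost at the 2008 Nunn-McCurdy certification and the current
# projection through 2025, in billions of dollars, per the report.
certified, current = 6.6, 7.1
growth_since_certification = (current - certified) / certified
print(f"{growth_since_certification:.1%}")  # 7.6%, i.e., "over 7 percent"

# Growth from the original $2.2 billion program estimate to the current
# $7.1 billion projection.
original = 2.2
print(f"{current / original:.1f}x")  # 3.2x the original estimate
```

Both results match the report's characterizations: cost growth of over 7 percent since certification, and a program now more than three times its original size.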
While the effectiveness of the cost control measures is not yet known, the Air Force appears to have limited options to actually reduce those costs. For example, annual production rates are expected to remain well below the levels projected at the start of the program. The 2008 Air Force cost estimate was based on an annual production rate of 280 missiles. However, that cost estimate may now be understated because the program has not produced that many missiles in a single year since 2005. For example, the Air Force's procurement quantities for production Lot 7 and Lot 8 were 111 and 80 units, respectively, well below the economic order quantity of 175 missiles per year. Program officials stated that annual quantities below the economic order quantity will result in an increasingly inefficient production process and some key suppliers may shift from continuous to limited production. Further, Lockheed Martin officials stated that low production rates could cause skilled labor to look elsewhere for work and JASSM reliability could be adversely affected. The contractor has been able to maintain some level of production efficiency because of foreign military sales that make up for the reduced Air Force procurements. However, lower than projected annual procurement levels will increase production costs. A further challenge is the fact that JASSM's design is mostly complete and there may be few opportunities to reduce production costs through redesign. As a result, average JASSM unit costs may remain in excess of $1.2 million indefinitely. The Air Force has plans to address the low reliability of missiles in its inventory by retrofitting some of its 942 missiles with hardware and software corrective actions. Program officials state the retrofit costs will be shared between Lockheed Martin and the Air Force, but the total cost to retrofit the missiles in inventory has not been calculated. 
However, previous efforts to retrofit JASSM missiles have proven to be problematic. An example of challenges associated with retrofitting missiles is adding telemetry instrumentation kits after the missiles have been produced and are in the inventory. Those kits are added to all missiles to be flight tested. This requires opening up the missile to insert telemetry after the stealth coating has been applied and increasing the number of electrical connections as compared to a production missile. Air Force officials stated the kit could add some reliability concerns when it is added to test missiles because the missiles were not designed to be opened after they were completed. Air Force officials also stated that workers have to reroute wires and remove the engine so that the self-destruct mechanism can be installed and all of this rework inside the missile has the potential to lead to more errors and cause additional reliability issues. The impact of retrofitting missiles has become evident in the weapon system evaluation program, which is operationally representative flight testing run by users of the system and focuses on the performance of missiles in the inventory. JASSM’s performance in this evaluation program has not been good, with 7 failures in 12 tests from 2006 through 2007, and with at least some of the failures attributable to the retrofit process. The addition of telemetry kits has also contributed to 3 no tests during other JASSM flight testing. The Air Force has not yet flight tested any of the JASSM inventory missiles that have been retrofitted with all of the corrective actions to address reliability issues. This type of test would be important in determining the viability of the current inventory of JASSM missiles and would be a key input in the Air Force deciding whether or not to retrofit the entire inventory of missiles. The Air Force plans more flight tests in the next few years of new production missiles to meet missile reliability goals. 
For the JASSM baseline missile to meet its reliability requirement, the Air Force is planning to conduct up to 48 additional flight tests, at a cost of about $120 million. In addition, most reliability issues with the baseline variant will directly affect the progress of the JASSM-ER variant as the missiles are at least 70 percent common in hardware and 95 percent common in software. Anything learned during these flight tests about the baseline applies to JASSM-ER. According to the Air Force, as many as 20 additional flight tests may be needed to fully demonstrate JASSM-ER’s reliability goal of 85 percent. The $190 million cost to achieve the final percentages of missile reliability reflects the fact that problems or weaknesses become harder to find and correct as the more obvious issues are corrected. Program officials are considering alternative means to meet user needs for a more reliable missile while reducing the cost of JASSM flight testing. As part of the Nunn-McCurdy certification process in fiscal years 2007 through 2008, DOD assessed whether there were readily available alternatives that provided as much or more capability as JASSM at lower cost. DOD assessed programs ranging from direct attack munitions to intercontinental range missiles. For the JASSM baseline missile, all of DOD’s existing programs were found to be less effective in terms of lethality, survivability, or capacity. The Navy’s Tomahawk missile was the closest alternative to meeting JASSM’s capability but it is not as lethal as JASSM. Also, the Tomahawk is launched from ships and not from aircraft, as the Air Force plans to use the capability. DOD also evaluated new or modified programs as possible alternatives to JASSM and JASSM-ER. The Air Force evaluated 12 domestic and international missile systems with projected unit production costs ranging from $600,000 to $2.8 million. 
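The per-test cost implied by the figures above is simple division. The sketch below is a back-of-the-envelope check, and the split of the $190 million figure between baseline and JASSM-ER testing is an assumption on our part, not a breakdown stated in the report:

```python
# Report figures: up to 48 additional baseline flight tests at about $120 million.
baseline_tests, baseline_cost_m = 48, 120.0
print(f"${baseline_cost_m / baseline_tests:.1f} million per baseline test")  # $2.5 million

# If (assumption) the $190 million reliability total also covers the up to
# 20 additional JASSM-ER flight tests, the remainder implies roughly:
er_tests, total_cost_m = 20, 190.0
print(f"${(total_cost_m - baseline_cost_m) / er_tests:.1f} million per JASSM-ER test")
```

At roughly $2.5 million per baseline flight test, the cost of chasing the final few percentage points of reliability is substantial, which is the trade-off the report weighs.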
JASSM-ER, estimated at the time of this evaluation to cost $1 million per unit, was more expensive than 5 alternative systems under consideration. In terms of performance, some alternatives were more capable than JASSM and some were not. All of the alternative systems were expected to require some up-front investment. Based on this analysis, no other alternative was found to provide greater or equal military capability at less cost than JASSM-ER. Later in the Nunn-McCurdy process, however, OSD cost analysts found that the costs of JASSM-ER would likely be at least $1.4 million per missile. That continues to be the projected unit cost of JASSM-ER and, as we discussed earlier in this report, there are cost risks that may drive that unit cost higher. Despite the higher production unit costs for JASSM-ER, the Air Force has not revisited the results of its assessment of alternatives. In light of the current cost projections, which are 40 percent higher than assumed in the previous assessment, JASSM-ER would be equal to the cost of an additional alternative. Further, the unit cost differential between JASSM-ER and the lower-priced alternatives may now be large enough to make those alternatives more competitive in terms of cost or capabilities. DOD’s 25-year history to acquire and field an affordable air-to-ground cruise missile has been a difficult one. After abandoning the expensive TSSAM program, the Air Force conceived the JASSM program using an acquisition strategy that minimized government oversight. After restructuring the program in 2008 and after considerable effort to improve reliability, the JASSM program as it exists today is much different than originally envisioned. A $2.2 billion, 11-year program to produce 2,400 missiles has become a $7.1 billion, 28-year program to produce 4,900 missiles. From a technical and capability standpoint, the program offers more now than the baseline missile did before 2008. 
Yet, the effectiveness of the new missiles remains to be demonstrated in operational testing, and low production rates, retrofit costs, and additional reliability testing could drive program costs higher. At this point, about 70 percent of the projected JASSM costs have not yet been incurred. In November 2010, DOD will decide whether to approve the Air Force's request to start low-rate initial production of the JASSM-ER variant. Low-rate initial production is normally the last major milestone decision for an acquisition program. With the JASSM program now expected to extend through 2025 and about $5 billion yet to be spent, a reevaluation of its cost-effectiveness is warranted before such a commitment is made. This is particularly true given the Secretary of Defense's recent initiative to improve the cost-efficiency of defense acquisition programs. At this juncture, the JASSM program would seem to be an excellent opportunity for DOD and Air Force leadership to take a hard look at the cost-effectiveness and efficiency of this important but costly defense program. We recommend that the Secretary of Defense defer the production decision for JASSM-ER until (1) the program's likely costs and affordability are reassessed to take into account the feasibility and cost of retrofitting JASSM baseline missiles or replacing them, the cost of additional reliability testing against the likely improvement, and the effect of sustained low production rates; and (2) the results of the previous analysis of alternatives are reassessed in light of the likely costs of the JASSM program. In its comments on our draft report, DOD partially concurred with our recommendation. DOD stated that JASSM-ER is on track for a Milestone C low-rate initial production decision in November 2010. DOD also agreed that the rate of JASSM production has not been optimum and that it plans to address efficient production rates as part of the JASSM-ER Milestone C decision. 
DOD also stated that (1) there are no additional plans (nor is there a need) to retrofit fielded JASSMs above what has already been accomplished or is under way; (2) it has revisited various alternatives and reaffirms the continued validity of its 2008 conclusion that none of the alternative concepts provide comparable operational utility at or near a similar cost or schedule to JASSM; and (3) in the absence of viable alternatives, delaying the program further will increase costs and further postpone delivering a vital capability to the warfighter. DOD’s response is reprinted in appendix II. In concluding that retrofits to the inventory missiles may not be necessary, DOD does not address the viability of the current inventory of JASSM baseline missiles or the need to replace some or all of them. Until the Air Force evaluates the effectiveness of the inventory of JASSM baseline missiles with corrective actions for previously identified hardware and software issues, their viability and military utility will still be in question. In addition, DOD states that the Air Force has revisited its earlier assessment of alternatives to JASSM and again found that there are none with comparable utility, cost, or schedule. This is new information and DOD did not provide details for us to assess, including whether the Air Force factored in the higher current projections of JASSM costs. Finally, DOD did not address the part of our recommendation dealing with the cost of additional reliability testing against the likely improvement. To the extent DOD has made decisions on retrofits and reconsideration of alternatives, these are positive signs, as is its agreement to address the efficiency of production rates. At this point, it is not clear whether the reliability of the existing baseline inventory missiles is acceptable or whether additional reliability testing is warranted. These determinations are necessary to establish the full value and cost of the JASSM program. 
Beyond these steps, it is incumbent upon the department to reexamine JASSM before making the production decision to ensure that the program is structured as efficiently as possible and is still a good investment given the other demands DOD faces. DOD’s agreement to address the efficiency of JASSM production rates is a positive step. This is particularly important given the Secretary’s current efficiency and affordability initiative. DOD needs to ensure that it has the information available to fully assess the JASSM investment before making the production decision. If DOD needs more time, then we believe the decision could be delayed. We also received several technical comments from DOD and the Air Force and have made other changes to our report. We are sending copies of this report to the Secretary of Defense and interested congressional committees. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To determine the Joint Air-to-Surface Standoff Missile’s (JASSM) current production unit costs and the extent they have grown, we analyzed JASSM’s contracts and budgets and compared the program’s selected acquisition reports. We analyzed Nunn-McCurdy documentation and certification criteria as well as Air Force and Lockheed Martin data to determine the causes for the cost growth and critical breaches. We interviewed officials with the JASSM joint program office; Lockheed Martin; the Office of Secretary of Defense Cost Assessment and Program Evaluation; and a former program official.
To determine the extent to which the Department of Defense’s (DOD) cost estimating policies and guidance support the development of high-quality cost estimates, we analyzed the cost estimating practices of the Cost Analysis Improvement Group (CAIG), now known as the Cost Assessment and Program Evaluation (CAPE), in the development of life-cycle cost estimates for the Air Force’s Joint Air-to-Surface Standoff Missile Program (baseline and JASSM-ER variants), against the 12 best practices of a high-quality cost estimate as defined in our Cost Estimating and Assessment Guide. We assessed each cost estimate, used in support of the critical Nunn-McCurdy unit cost breach, against these 12 key practices associated with four characteristics of a reliable estimate. As defined in the guide, these four characteristics are comprehensive, well-documented, accurate, and credible, and the practices address, for example, the methodologies, assumptions, and source data used. We also interviewed program officials responsible for the cost estimate about the estimate’s derivation. We then characterized the extent to which each of the four characteristics was met; that is, we rated each characteristic as being either Not Met, Minimally Met, Partially Met, Substantially Met, or Fully Met. To do so, we scored each of the 12 individual key practices associated with the four characteristics on a scale of 1-5 (Not Met = 1, Minimally Met = 2, Partially Met = 3, Substantially Met = 4, and Fully Met = 5), and then averaged the individual practice scores associated with a given characteristic to determine the score for that characteristic. To determine the results of the most recent tests and whether corrective actions have been implemented for previous test failures, we analyzed JASSM flight test results and failure review board findings, including scoring criteria and results, to determine what corrective actions were implemented.
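The scoring method described above—rating each key practice on a 1-5 scale and averaging the practice scores within each characteristic—can be sketched as follows. The practice ratings in the example are illustrative only, not GAO’s actual assessment data.

```python
# Sketch of the GAO scoring approach: each key practice is scored 1-5
# (Not Met = 1 ... Fully Met = 5); a characteristic's score is the
# average of the scores of its associated practices.
SCALE = {"Not Met": 1, "Minimally Met": 2, "Partially Met": 3,
         "Substantially Met": 4, "Fully Met": 5}
LABELS = {v: k for k, v in SCALE.items()}

def characteristic_score(practice_ratings):
    """Average the 1-5 scores of the practices tied to one characteristic."""
    scores = [SCALE[r] for r in practice_ratings]
    return sum(scores) / len(scores)

def characteristic_label(score):
    """Map an averaged score back to the nearest rating label."""
    return LABELS[round(score)]

# Hypothetical ratings for the practices under one characteristic:
ratings = ["Fully Met", "Substantially Met", "Substantially Met"]
score = characteristic_score(ratings)            # (5 + 4 + 4) / 3
print(round(score, 2), characteristic_label(score))
```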
We determined whether recent flight test results are representative of the entire fleet by comparing and evaluating the lot-by-lot missile configuration changes and retrofit activities. We interviewed officials with the JASSM joint program office; Lockheed Martin; Office of Under Secretary of Defense for Acquisition, Technology, and Logistics; Director, Operational Test and Evaluation; Air Combat Command; Secretary of the Air Force for Acquisition; Office of Secretary of Defense Cost Assessment and Program Evaluation; Joint Staff; Air Force Directorate of Test and Evaluation; and a former Air Force official who was an early JASSM program manager to better understand the program’s objectives and original acquisition strategy. We discussed recent DOD reliability initiatives with Office of the Director, Operational Test and Evaluation officials. To determine what the Air Force has done to control and reduce production costs while improving reliability, we examined JASSM’s contracts to see what provisions have been added as well as Air Force and Lockheed Martin data. We interviewed officials with the JASSM joint program office; Lockheed Martin; and Office of Secretary of Defense (OSD) test organizations to determine if testing reflects the current effectiveness of the missile. We reviewed our prior work on best practices for a knowledge-based approach for acquisition programs in determining if the Air Force’s approach to beginning the JASSM-ER program meets best practices. We compared requirements and other documents to see if the JASSM-ER missile reflects lessons learned from the baseline variant as well as increased knowledge and oversight from the government. We compared the baseline design with JASSM-ER to determine commonality.
We interviewed officials with the JASSM joint program office; Lockheed Martin; Office of Under Secretary of Defense for Acquisition, Technology, and Logistics; Office of Secretary of Defense Cost Assessment and Program Evaluation; Director of Operational Test and Evaluation; Air Combat Command; Secretary of the Air Force for Acquisition; Joint Staff; and a former program official to determine the acquisition planning leading up to JASSM-ER’s production decision and what initiatives have been taken to control costs. After reviewing documentation submitted by the JASSM program office, conducting interviews, and reviewing relevant sources, we determined that the CAIG’s life-cycle cost estimates, totaling $7.1 billion for both programs—$3.4 billion for the JASSM baseline and $3.7 billion for the JASSM-ER variant—Fully Met one and Substantially Met the other three characteristics of a reliable cost estimate, as shown in Table 3 below. We assessed 12 measures consistently applied by cost estimating organizations throughout the federal government and industry and considered best practices for the development of reliable cost estimates. We analyzed the cost estimating practices used by CAIG in developing the life-cycle cost estimates for both programs against these 12 best practices and the findings are documented in table 3 below.
The following explains the definitions we used in assessing CAIG’s cost estimating methods used in support of the critical Nunn-McCurdy unit cost breach: Fully Met—JASSM program office provided complete evidence that satisfies the entire criterion; Substantially Met—JASSM program office provided evidence that satisfies a large portion of the criterion; Partially Met—JASSM program office provided evidence that satisfies about half of the criterion; Minimally Met—JASSM program office provided evidence that satisfies a small portion of the criterion; and Not Met—JASSM program office provided no evidence that satisfies any part of the criterion. The sections that follow highlight the key findings of our assessment. Though the cost estimates accounted for all possible costs and were structured in a manner that would ensure that no cost elements were omitted or double-counted, neither the JASSM baseline nor JASSM-ER had a Work Breakdown Structure (WBS) dictionary that defined each element. In addition, the JASSM baseline variant provided no evidence that risks associated with the ground rules and assumptions were traced back to specific cost elements. All applicable costs including government and contractor costs were included in the estimates—The cost estimates included sunk costs such as contractor program management, overhead, system design, and development and testing. In addition, the program office outlined the cost estimating methodology, basis of the costs, as well as development costs for JASSM-ER and other government costs. The cost estimates’ level of detail ensures that no costs were omitted or double-counted—The cost estimates are based on a product-oriented WBS which is in line with best practices. For example, the cost estimate is broken down into various components such as the propulsion, payload, airframe, and guidance and control and also includes supporting cost elements such as systems engineering, program management, and system test and evaluation.
As a result, all of the system products are visible at lower levels of the WBS, providing us with confidence that no costs were omitted or double-counted. The WBS has been updated as the JASSM baseline and JASSM-ER programs have evolved; however, there is not an accompanying dictionary that defines each element and how it relates to others in the hierarchy. Ground rules and assumptions were largely identified and documented—The JASSM baseline cost estimate documentation included a list of risk model inputs based on WBS elements. Although WBS elements such as engineering support, subcontractor, and warranty were identified, there was no discussion of the risks associated with assumptions that drive costs, such as product reliability, sustainability of subcontractors, or schedule variability. Like the JASSM baseline cost estimate documentation, the JASSM-ER cost estimate documentation also included a list of ground rules and assumptions; however, there was evidence that risk associated with the fuel tank assumption was traceable to a specific WBS element. In separate documentation, we were able to identify where the program office considered risks for the JASSM baseline estimate. Both cost estimates were documented in enough detail to allow an analyst unfamiliar with the program to recreate the estimate and get the same result. In addition, the briefing to management was detailed enough to show that the estimates were credible and well documented. The cost estimate is fully documented—For the JASSM baseline and JASSM-ER, the cost estimate documentation included a report documentation page identifying the report date, title, contract number, report authors, and other information. The documentation also included a table of contents, introduction, purpose, and structure of the document as well as the scope of the estimate, a list of team members, the cost methodology, and a system description.
The documentation discussed a risk and sensitivity analysis, costs broken out by WBS elements including data sources and estimating method and rationale, and provided evidence that the estimates were updated using actual costs. In a separate briefing, the program office outlined the cost estimating methodology, basis of the costs, as well as development costs for JASSM-ER and other government costs. The program office also provided a copy of the cost sufficiency review of the estimate, which included the estimate’s purpose and scope, technical description and schedule, ground rules and assumptions, data sources and analysis, and methodology. For both programs, the estimate documentation and the cost analysis requirements document (CARD) addressed best practices and the 12 steps of a high-quality estimate. Contingency reserves and the associated level of confidence for the risk-adjusted cost estimate were also documented. Electronic versions of the cost estimates were also provided. The estimate documentation describes how the estimate was derived—The point estimate was developed primarily using actual costs, with a few cost elements estimated using the learning curve method. Actual sunk costs for prior years were presented and remaining production lot costs were based on a labor staffing assessment and the latest contractor labor rates. Cross-checks were performed and no instances of double-counting were visible. A separate document was provided that showed in detail how the cost estimate was developed, what data were used to create the cost estimate, and how risks were quantified to determine a level of confidence in the cost estimate. The estimates were reviewed and approved by management—The estimates were presented by OSD CAIG to the OSD overarching integrated product team for consideration as the new acquisition program baseline.
In November 2009, the team provided a detailed overview of the JASSM program which addressed the major cost growth factors, such as the addition of the JASSM-ER variant, reliability enhancements, and the reduction in missile purchases. Both cost estimates were unbiased and represented most likely costs. For example, the estimates were adjusted to reflect risks and the program office also included requirements in the new Lot 8 contract that would allow them to update the cost estimates with actual data. The cost estimates were adjusted for inflation—The JASSM program office used the February 2009 version of the OSD inflation rates provided by the Secretary of the Air Force/Financial Management Cost and Economics. The estimates were developed and documented in base year 1995 dollars and inflated using the weighted rates applicable to the appropriations in the estimate. Base year 1995 is the program’s designated base year. The cost estimates included most likely costs—Per the Nunn-McCurdy certification process, the CAIG developed independent cost estimates for the JASSM baseline and JASSM-ER development and procurement costs as well as future-year resource requirements for the baseline and JASSM-ER variants. Operating and support costs as well as software costs were also included in the estimates. The JASSM baseline life-cycle cost estimate of $3.4 billion, which spans a period of time from 2001 through 2015, was estimated at the 77 percent confidence level, while the JASSM-ER life-cycle cost estimate of $3.7 billion spans the period from 2011 through 2025 and was estimated at the 73 percent confidence level. The cost estimates have not been updated to reflect current costs—Though the JASSM baseline estimate dated April 2008 was updated to reflect new program changes, the CARD has not been updated since May 2003.
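The base-year adjustment described above—stating the estimate in constant 1995 dollars and inflating to then-year dollars with weighted rates—can be sketched as follows. The annual rates in the example are invented for illustration; they are not the actual February 2009 OSD inflation rates.

```python
# Sketch of base-year-to-then-year conversion: annual weighted inflation
# rates are compounded into a cumulative index (base year = 1.0), and a
# constant-dollar amount is multiplied by the index for its expenditure
# year. Rates below are hypothetical, not actual OSD figures.
def build_index(base_year, rates_by_year):
    """Compound annual rates into a cumulative index keyed by year."""
    index = {base_year: 1.0}
    level = 1.0
    for year in sorted(rates_by_year):
        level *= 1.0 + rates_by_year[year]
        index[year] = level
    return index

def to_then_year(amount_base_year, year, index):
    """Convert a constant base-year dollar amount to then-year dollars."""
    return amount_base_year * index[year]

rates = {1996: 0.022, 1997: 0.021, 1998: 0.015}   # hypothetical weighted rates
idx = build_index(1995, rates)
print(round(to_then_year(100.0, 1997, idx), 2))   # $100 BY1995 in TY1997 dollars
```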
Examples of JASSM baseline changes include additional reliability enhancement team improvements and additional testing, which are not reflected in the 2003 CARD. However, when comparing the JASSM WBS dated January 1999 and the Lot 8 contract dated January 2010, it is evident that WBS has been updated as changes have occurred. On the other hand, the CARD for the JASSM-ER was updated as of August 2009. Updates to the JASSM-ER CARD include a new, more powerful engine than the baseline variant. As part of the Milestone C process, work is currently under way by CAPE to update the JASSM-ER cost estimate. The program office said that the JASSM- ER cost model will include updated costs based on the Lot 8 proposal data, updated quantity profiles, and January 2010 revised inflation rates. The program office is in the process of updating labor rates and overhead rates and is reexamining all component prices. While the cost estimates addressed risk and uncertainty as well as sensitivity, the estimates failed to address the risks regarding reliability and changes to the production schedule. By not doing so, the program office may not have a full understanding of the future effects to the overall cost position of these two programs. The estimates were assessed for risk and uncertainty—Both programs identified engineering and test support, subcontractors, and warranty as major risk elements. However, the analysis did not identify reliability or an increase in the production schedule as possible risk factors. During the Nunn-McCurdy certification process, the DOD’s analyses found that the cost breach was driven by four primary factors, two of which focused on reliability. As a result, the program office instituted a reliability enhancement program directed to address reliability concerns. An indirect effect of the enhancement program was an increase in the overall missile costs. 
The December 2009 selected acquisition report identified increases to the missile hardware cost due to reduced annual quantities, missile production breaks, and increased test requirements and reliability programs. The combined cost estimate, for the JASSM baseline and JASSM-ER variants, has grown significantly over time. By not including reliability and the extension of the production schedule as possible risk factors, cost growth could continue to occur in future production lots. As a result, the programs’ calculated point estimate confidence levels—77 percent for the JASSM baseline and 73 percent for the JASSM-ER variant—may be overstated. The estimates were assessed for sensitivity—For both the JASSM baseline and JASSM-ER estimates, key cost drivers were identified. The cost estimators examined eight cost factors for the JASSM baseline estimate and 13 cost factors for the JASSM-ER estimate. For the JASSM analysis, engineering support, testing support, other subcontractors, and Teledyne propulsion had the greatest impact on the total variance in the estimate. Engineering support showed a 14 percent impact, followed by test support with a 13 percent impact, other subcontractors with a 12 percent impact, and Teledyne propulsion with a 9 percent impact. These four elements account for 68 percent of the total cost before risk was applied. For the JASSM-ER analysis, the Williams propulsion, other subcontractors, engineering support, and testing support showed the greatest impact on the total cost variance in the estimate. The Williams propulsion had a 15 percent impact, followed by a 15 percent impact for other subcontractors, an 11 percent impact for engineering support, and a 9 percent impact for test support. These four elements account for 76 percent of the total cost before risk was applied. The cost estimates were checked for errors—Cross-checks were performed and no instances of double-counting were visible.
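The "share of total cost before risk" figures cited above can be computed as sketched below: sum the costs of the named driver elements and divide by the total point estimate. The element names and cost values here are hypothetical stand-ins, not the actual JASSM cost data, chosen so the four drivers account for 68 percent of the total.

```python
# Sketch of a cost-driver share calculation: what fraction of the total
# point estimate (before risk is applied) the named WBS elements account
# for. Element costs below are hypothetical, not JASSM data.
def combined_share(element_costs, drivers):
    """Fraction of total cost contributed by the listed driver elements."""
    total = sum(element_costs.values())
    return sum(element_costs[d] for d in drivers) / total

elements = {"engineering support": 20.0, "test support": 18.0,
            "other subcontractors": 16.0, "propulsion": 14.0,
            "all other elements": 32.0}
drivers = ["engineering support", "test support",
           "other subcontractors", "propulsion"]
print(round(combined_share(elements, drivers), 2))   # share of total cost
```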
The Lot 5 and Lot 6 estimates were compared back to Lot 1 through Lot 4 for consistency and reasonableness. Also, multiple row and column summation cross-checks were performed to avoid duplication and omission errors. Upon review of the electronic cost model, GAO found no instances of double-counting, and the spreadsheet calculations are accurate given the input parameters and assumptions. The cost estimates were validated against an independent cost estimate—The CAIG estimate is the independent cost estimate. As part of the Nunn-McCurdy certification process, the CAIG developed an independent cost estimate for the development and procurement costs as well as future-year resource requirements for the baseline and JASSM-ER variants. This new independent estimate was a joint effort by the OSD CAIG, the program office, and the Financial Management Center of Expertise, so there was no other estimate for comparison. Per the Nunn-McCurdy JASSM certification package dated April 30, 2008, the CAIG estimate of the acquisition costs for the restructured JASSM program is $7.1 billion, which is directly comparable to the $6 billion estimate reported in the quarterly selected acquisition report dated December 2007. Michael J. Sullivan, (202) 512-4841 or [email protected]. In addition to the contact name above, the following individuals made key contributions to this report: William Graveline (Assistant Director), John Crawford, Morgan DelaneyRamaker, Tisha Derricotte, Michael J. Hesse, Karen Richey, Hai Tran, and Alyssa Weir.
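The row and column summation cross-checks described above can be sketched as follows: reported row totals and the reported grand total are recomputed from the underlying table, and the grand total derived from column sums is compared against the one derived from row sums; any mismatch flags a duplication or omission error. The cost figures in the example are hypothetical.

```python
# Sketch of a summation cross-check on a cost table (rows = WBS elements,
# columns = production lots). Values are hypothetical, not JASSM data.
def totals_consistent(rows, reported_row_totals, reported_grand_total, tol=1e-9):
    """True if reported totals match recomputed row and column sums."""
    computed_rows = [sum(r) for r in rows]
    computed_grand = sum(computed_rows)
    column_grand = sum(sum(col) for col in zip(*rows))
    rows_ok = all(abs(a - b) <= tol
                  for a, b in zip(computed_rows, reported_row_totals))
    return (rows_ok
            and abs(computed_grand - reported_grand_total) <= tol
            and abs(computed_grand - column_grand) <= tol)

costs = [[1.2, 1.1, 1.0],     # hypothetical per-lot costs, three elements
         [0.8, 0.7, 0.7],
         [0.5, 0.5, 0.4]]
print(totals_consistent(costs, [3.3, 2.2, 1.4], 6.9))   # consistent table
print(totals_consistent(costs, [3.3, 2.2, 1.4], 7.2))   # error flagged
```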
Over the past two and a half decades, the Department of Defense (DOD) has invested heavily to acquire a cruise missile capable of attacking ground targets stealthily, reliably, and affordably. After abandoning an earlier, more expensive missile and a joint service effort, the Air Force began producing the Joint Air-to-Surface Standoff Missile (JASSM) in 2001. After that, the program (1) encountered many flight test failures, (2) decided to develop an extended range version, and (3) recognized significant cost growth. The production decision for the JASSM-ER is planned for November 2010. Also, the Secretary of Defense has recently announced a major initiative to restore affordability and productivity in defense spending. This initiative is expected to, among other things, identify savings by conducting needed programs more efficiently. As DOD faces the initial production decision on JASSM-ER, GAO was asked to assess (1) most recent test results, correction of causes of previous flight test failures, and efforts to improve JASSM's reliability; and (2) JASSM cost changes, efforts to control costs, and additional cost risks for the program. Since 2007, design changes and other corrective actions by the Air Force have improved the baseline JASSM's test results significantly--the missile has now demonstrated 85 percent success versus the 58 percent achieved before the corrections. The JASSM-ER variant has done well thus far, with no failures during the first seven flight tests. These results reflect the Air Force's enhanced oversight of the program and significant investments made to improve reliability. These efforts also identified many of the root causes for flight test failures. While baseline JASSM missile reliability has improved, it is not expected to achieve the Under Secretary of Defense for Acquisition, Technology and Logistics' required level of 90 percent until 2013.
Tests conducted thus far of the improved baseline JASSM and the JASSM-ER variants have been developmental--or controlled--in nature. Neither the improved JASSM baseline missile nor the JASSM-ER has been demonstrated in operationally realistic testing or in a combat operation. JASSM costs have increased by over seven percent since the program was restructured in 2008. As the table shows, since 1998, JASSM quantities have more than doubled and estimated program costs have grown from $2.2 billion to $7.1 billion. The Air Force has taken several steps to control JASSM costs, but options to reduce costs at this point appear limited. In fact, several factors suggest additional cost growth is likely. First, the Air Force has not been able to provide enough funding to produce the missiles at planned rates. That has led to a less efficient production process, a longer production period, and higher costs that have not yet been reflected in the $7.1 billion estimate. Second, the Air Force's potential plans to retrofit existing missiles with the reliability improvements may not be feasible, given the missile's sensitivity to being reopened. If retrofits prove infeasible, new replacements may have to be purchased; if they are feasible, the Air Force may have to provide additional funding to retrofit all existing missiles. Finally, since the Air Force last compared JASSM to possible alternatives, the unit cost was assumed to be about 40 percent less than currently expected and that now could make alternatives more competitive in terms of cost and/or capabilities. A reevaluation of the JASSM program, given that most of its costs have yet to be incurred, is warranted before the decision to produce the JASSM-ER is made. GAO recommends that the Secretary of Defense reevaluate the JASSM program's affordability and cost-effectiveness before making the decision to produce the JASSM-ER.
DOD partially concurred with GAO's assessment, but believes the JASSM-ER should begin production in November 2010. GAO believes that it is incumbent upon the department to reexamine JASSM before making the production decision to ensure that the program is structured as efficiently as possible and is still a good investment given the other demands DOD faces.
In 1995, the District of Columbia established the Highway Trust Fund, as required by the District of Columbia Emergency Highway Relief Act. This dedicated trust fund is required to include amounts equivalent to receipts from motor fuel taxes and to be separate from the District’s General Fund. For fiscal year 1999, motor fuel tax revenues were reported to be almost $31 million. The Fund is used to reimburse the District for local capital appropriated expenditures, which are (1) the District’s share (normally 20 percent) of federal aid highway project costs, (2) the salaries of District personnel working directly on transportation capital projects, (3) overhead costs associated with federal aid projects, and (4) other nonparticipating costs. All federal and local capital appropriated expenditures are paid out of DPW’s Capital Operating account and then reimbursed by either the Department of Transportation’s Federal Highway Administration (FHWA) or the Fund. DPW is responsible for processing, accounting for, and reporting on the Fund’s financial activities. To accomplish these functions, DPW relies on the System of Accounting and Reporting (SOAR), which is developed and maintained by OCFO. The District also uses SOAR to manage certain District-wide purchasing and financial reporting activities. OCFO maintains SOAR, along with other District payroll, personnel, and tax information, on a computer system at its SHARE computer center. In fiscal year 1999, the District’s two payroll and personnel applications—the Unified Pay and Personnel System and the Centralized Automated Payroll and Personnel System—accounted for more than $1.5 billion in reported expenditures relating to the District payroll and employee benefits. In addition, tax applications residing on this computer system controlled District sales and use, employer withholding, corporate franchise, unincorporated franchise and hotel, personal property, and individual income tax revenues for fiscal year 1999. 
DPW also relies on its own local area network (LAN), the District’s wide area network (WAN)—which is managed by OCTO—and the Internet to transfer Fund information to and from the SHARE computer center. The District’s WAN not only allows DPW staff to access systems maintained at the SHARE computer center, but also connects other District organizations—such as the Metropolitan Police Department, the District General Hospital, and the District public school system—to these systems and systems at the District’s other five data centers. In addition, some District financial information is maintained on the network. For example, the network-based Real Property Tax 2000 system contains land records, facilitates data analysis for property valuation and tax administration, maintains all District real property tax roll and levy entries, and supports automated management of real property tax accounts receivable adjustments, payment posting, and billing information. Altogether, the District’s WAN serves about 30 sites, which support approximately 60 District agencies and offices. To secure, protect, and preserve District information systems, such as those relied on to account for Fund and other District financial activities, District law requires the Mayor to establish, maintain, and provide consistent computer security policies, principles, and standards for all District departments and agencies. More specifically, District law tasks OCTO with coordinating the development of information management plans, standards, systems, and procedures throughout the District government. Our objective was to evaluate the design and test the overall effectiveness of information system general controls over the Fund’s financial systems, which are maintained and operated by three District organizations: DPW, OCFO, and OCTO. 
These information system general controls, however, also affect the security and reliability of other sensitive data, including District financial, payroll, personnel, and tax information, that is maintained on the same computer system as the Fund’s financial information. Specifically, we evaluated information system general controls intended to protect data and application programs from unauthorized access; prevent the introduction of unauthorized changes to application and system software; provide segregation of duties involving application programming, system programming, computer operations, information security, and quality assurance; assure recovery of computer processing operations in case of a disaster or other unexpected interruption; and ensure adequate computer security program management. To evaluate these controls, we identified and reviewed District policies and procedures, conducted tests and observations of controls in operation, and held discussions with DPW, OCFO, and OCTO staff to determine if information system general controls were in place, adequately designed, and operating effectively. Our evaluation was based on (1) our Federal Information System Controls Audit Manual (FISCAM), which contains guidance for reviewing information system controls that affect the integrity, confidentiality, and availability of computerized data, and (2) the results of our May 1998 study of security management best practices at leading organizations, which identifies key elements of an effective information security program. We performed our work from June through August 2000 in accordance with generally accepted government auditing standards. Because the objective of our work was to assess the overall effectiveness of information system general controls, we did not fully evaluate all computer controls. Consequently, additional vulnerabilities could exist. We requested comments on a draft of this report from the District’s Chief Technology Officer.
She provided us with written comments, which are discussed in the “Agency Comments” section and reprinted in appendix I. A basic management objective for any organization is to protect its data from unauthorized access and prevent improper modification, disclosure, or deletion of financial and sensitive information. Our review of the District’s information system general controls found that they were not adequately protecting the Fund’s financial activities or other District financial, payroll, personnel, and tax information that also reside at OCFO’s SHARE computer center. Specifically, the District had not adequately limited access granted to authorized users, properly managed user IDs and passwords, effectively maintained system software controls, or sufficiently protected its networks and other computer systems from unauthorized users. In addition, the risks created by these access control weaknesses were compounded because the District was not routinely monitoring access activity to identify and investigate unusual or suspicious access patterns that could indicate unauthorized access. Consequently, District systems, programs, and data maintained at OCFO’s SHARE computer center risk inadvertent or deliberate misuse, fraudulent use, and unauthorized alteration or destruction occurring without detection. District management has recognized the weaknesses we identified and has expressed its commitment to improving information system controls. Subsequent to our fieldwork, District officials provided us with action plans that, if implemented properly, should correct the weaknesses we identified. The following sections summarize the results of our review of information system general controls over the District financial systems used to manage Fund operations. 
A key weakness in the District’s internal controls was that it was not adequately limiting the access of employees and other authorized users to Fund and other District financial, payroll, personnel, and tax information maintained at OCFO’s SHARE computer center. Organizations can protect information from unauthorized changes or disclosures by granting employees authority to read or modify only those programs and data necessary to perform their duties. However, we found several examples where the District had not adequately restricted the access of legitimate users on the computer system that maintains Fund and other District financial, payroll, personnel, and tax information. The District allowed all of the more than 4,300 active user IDs full access to 20 system software libraries used to perform sensitive system functions capable of circumventing all security controls. Such access increased the risk that users could bypass security controls to alter or delete any computer data or programs on this system. Security software on the system that maintains Fund and other District financial, payroll, personnel, and tax information was not implemented to automatically deny unauthorized access attempts. We determined that 689 access rules controlling access to data and program files, including a system software library that could be used to bypass other security controls and a payroll library that contained check processing data, were set to generate a warning message when access violations occurred, but permit the unauthorized access to proceed. Consequently, the risk of improper access and changes to critical data files and programs occurring without detection is heightened. More than 265 user IDs on the system used to process Fund and other District financial information were granted the tape bypass label processing privilege that allows users to read and alter any tape regardless of other security software controls. 
These users included network support staff, database administrators, SOAR application programmers, payroll staff, Department of Human Services staff, and certain application users. As a result, these users have unlimited access to all tape files, including system audit logs and backup copies of sensitive financial and tax information. One reason for the District’s user access problems was that access authority was not being reviewed. Such reviews would have allowed the District to identify and correct inappropriate access. OCFO officials told us that SHARE computer center staff had changed the security software configuration so that all unauthorized attempts are denied and restricted the tape bypass label processing privilege to only those users with a specific business need. OCFO officials also told us that SHARE computer center staff would complete reviewing and limiting access to sensitive system libraries by March 31, 2001. In addition, OCFO officials stated that procedures to periodically review (1) access granted to sensitive system files, (2) security software configuration settings, and (3) access activity allowed by the tape bypass label processing privilege for appropriateness would be implemented by March 31, 2001. In addition to overseeing user access authority, it is also important to actively manage user IDs and passwords to ensure that users can be identified and authenticated. To accomplish this objective, organizations should establish controls to maintain and protect the confidentiality of passwords. These controls should include requirements to ensure that IDs uniquely identify users; passwords are changed periodically, contain a specified number of characters, and are not common words; default IDs and passwords are changed to prevent their use; and the number of invalid password attempts is limited to preclude password guessing. 
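The password controls described above (unique IDs, periodic change, a minimum number of characters, no common words, and a limit on invalid attempts) can be expressed as a simple automated check. The following is an illustrative sketch only; the rule set and deny list are assumptions, not the District's actual security software configuration:

```python
# Illustrative password-policy check. The rules mirror the controls
# described above; the deny list is a hypothetical example.
COMMON_WORDS = {"password", "welcome", "letmein"}

def password_violations(user_id, password, min_length=6):
    """Return the policy rules that a proposed password fails."""
    violations = []
    if len(password) < min_length:
        violations.append("shorter than minimum length")
    if password.lower() == user_id.lower():
        violations.append("same as user ID")
    if password.lower() in COMMON_WORDS:
        violations.append("easily guessed word")
    return violations
```

A check like this covers only password composition; limiting invalid log-on attempts to preclude password guessing would be enforced separately by the security software itself.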
Organizations should also evaluate the effectiveness of these controls periodically to ensure that they are operating effectively. At the District, however, user IDs and passwords were not being managed to sufficiently reduce the risk of unauthorized access to the computer system that maintains Fund and other District financial, payroll, personnel, and tax information. For instance, the system was configured in a manner that did not always require passwords for user authentication. In addition, passwords that existed were not prevented from being (1) fewer than six characters, (2) the same as the user ID, or (3) other easily guessed words. Further, users were allowed the opportunity to circumvent password change requirements by reusing the same password over and over. Consequently, the District faced increased risks that passwords could be compromised to gain unauthorized access to financial and other sensitive information maintained on this computer system. OCFO officials told us that SHARE computer center staff had changed password control settings to require passwords to contain at least six characters and prevent passwords from being easily guessed words, such as the user ID. We also found instances where the District was not promptly removing unused or unneeded IDs or deleting IDs for terminated employees. For example, more than 1,400 user IDs had not been used for at least 7 months. Allowing inactive IDs to persist poses needless risk that unnecessary IDs will be used to gain unauthorized access. We also found cases where terminated employees were provided the opportunity to sabotage or impair Fund and other District financial operations because their user IDs were not promptly disabled. OCFO officials told us that SHARE computer center staff would implement procedures to ensure that inactive IDs and IDs for terminated employees are promptly disabled no later than March 31, 2001. 
It is also essential to control access to and modification of system software to protect the overall integrity and reliability of information systems. System software controls, which limit and monitor access to the powerful programs and sensitive files associated with computer system operation, are important in providing reasonable assurance that access controls are not compromised and that the system will not be impaired. If controls in this area are not adequate, system software might be used to bypass security controls, gain unauthorized privileges that allow improper actions, or circumvent edits and other controls built into application programs. The District was not properly controlling system software to prevent access controls on the computer system used to process Fund and other District financial, payroll, and tax applications from being circumvented. The system software control weaknesses we identified diminish the reliability of financial and other sensitive information maintained on this computer system and increase the risk of inadvertent or deliberate misuse, fraudulent use, improper disclosure, and disruption. In addition, we identified system software configuration weaknesses that could allow users to bypass access controls and gain unauthorized access to Fund and other District financial, payroll, personnel, and tax information. For example, the operating system was set up in a manner that allowed programs in any of the 74 libraries included in the normal search sequence to perform sensitive system functions and operate outside of security software controls. Because users generally have access to such libraries, this greatly increases the risk that unauthorized programs could be introduced to bypass other access controls and improperly access or modify financial, audit trail, or other sensitive information maintained on this computer system. 
Further, the District had not instituted processes to control changes to system software on this computer system. In the past 2 years, OCFO had implemented several major system software changes, such as installing new versions of database management, communication, access control, and operating system software. However, it was not maintaining a comprehensive log of system software changes, consistently documenting these changes and related test results, or independently testing system software changes before implementation. Consequently, the District faces increased risks of unintended operational problems caused by programming errors or the deliberate execution of unauthorized programs that could compromise security controls. The District was also not adequately reviewing programs in sensitive system libraries to identify and correct weaknesses that could be used to circumvent security controls. Consequently, we found potential problems that, at a minimum, diminish the reliability of system software, but could also be exploited to introduce malicious code or circumvent other access controls. For example, 13 files capable of performing sensitive system functions did not exist on the volume specified in the table used to manage such files. This increases the risk that unauthorized programs could be substituted for these files without management approval and used to bypass other security controls or inappropriately modify audit trails or sensitive data. Until the District begins actively managing programs in sensitive system software libraries, it will not have adequate assurance that other security controls cannot be bypassed. 
OCFO officials told us that SHARE computer center staff would implement policies and procedures by June 30, 2001, to (1) review system configuration settings periodically for appropriateness, (2) ensure that system software changes are authorized, independently tested, documented, and approved prior to implementation, and (3) evaluate programs in sensitive system libraries to identify and correct potential problems. The risks associated with the access and system software control problems we identified were also heightened because the District was not adequately protecting access to its networks or restricting access to the system that processes Fund and other District financial applications from the Internet. We found several network user ID and password management weaknesses that could be exploited to gain unauthorized access to District systems. For example, a common default account was available on one DPW network server. In addition, certain network systems on the DPW LAN and/or District WAN were not set up to require password authentication, ensure that passwords were changed periodically, or disable user IDs after a specified number of invalid password attempts. In addition, network system software configuration weaknesses could allow users to bypass access controls and gain unauthorized access to District networks or cause network system failures. For instance, certain network servers and routers were set up in a manner that permitted unauthorized users to connect to the network without entering valid user IDs and password combinations. This could allow unauthorized individuals to obtain access to system information describing the network environment, including user IDs, password properties, and account details. 
These network security weaknesses not only increased the risk of unauthorized access to information maintained on the network, but also heightened the risk that intruders or authorized users with malicious intent could exploit the user ID and password management weaknesses described above to misuse, improperly disclose, or destroy Fund and other District financial and sensitive information. DPW officials told us that they planned to correct the network ID, password, and system software configuration weaknesses we identified on the DPW LAN. The risks created by the access control problems described above were also heightened significantly because the District was not adequately monitoring system and user activity. An effective monitoring program would include (1) monitoring the network to promptly identify attempts by unauthorized users to gain access to District systems and (2) examining attempts to access sensitive information once entry to District systems is accomplished. Without these controls, the District has little assurance that improper attempts to access sensitive information would be detected in time to prevent or minimize damage. The District organizations we visited had not implemented proactive network monitoring programs. Such a program would require the District to identify suspicious access patterns (such as repeated failed attempts to log on to the network, attempts to identify systems and services on the network, connections to the network from unauthorized locations, and efforts to overload the network to disrupt operations) and to implement intrusion detection systems that automatically log unusual activity, provide necessary alerts, and terminate sessions when necessary. The District had not installed intrusion detection software on its WAN. In addition, DPW was using available intrusion detection capabilities on only 2 of its 22 network segments. 
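One suspicious pattern named above, repeated failed log-on attempts, can be flagged with a simple per-source threshold over an event log. This sketch uses a hypothetical log format, not that of any actual District system:

```python
from collections import Counter

def flag_repeated_failures(events, threshold=3):
    """Count failed log-ons per source and flag sources at or above the threshold."""
    failures = Counter(src for src, outcome in events if outcome == "FAIL")
    return {src for src, n in failures.items() if n >= threshold}

# Hypothetical (source, outcome) log entries
log = [
    ("10.1.4.9", "FAIL"), ("10.1.4.9", "FAIL"), ("10.1.4.9", "FAIL"),
    ("10.1.7.2", "OK"), ("10.1.7.2", "FAIL"),
]
print(flag_repeated_failures(log))  # prints {'10.1.4.9'}
```

A production intrusion detection system would also window counts by time and raise alerts as events arrive; the sketch shows only the counting rule.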
Further, a network server used to allow access through the Internet to the computer system that maintains Fund and other District financial and sensitive information was configured not to log any access activity. DPW officials told us that they would review all network servers and activate intrusion detection capabilities on all servers with these capabilities. OCTO officials told us that in conjunction with their implementation of the District security management program planned for October 1, 2001, a central security group will be established that, among other things, will implement intrusion detection systems to identify suspicious access activities and notify appropriate agency personnel. In addition, the District was not actively monitoring user access activity on the computer system used to process Fund and other District financial, payroll, personnel, and tax information to identify and investigate failed attempts to access sensitive data and resources or unusual patterns of successful access to such information. Routinely monitoring the access activities of authorized users, especially those who have the ability to alter sensitive programs and data, can help identify significant problems and deter users from inappropriate and unauthorized activities. Because the security information available is likely to be too voluminous to review routinely, the most effective monitoring efforts are those that selectively target specific actions. 
These monitoring efforts should include provisions to identify and investigate unusual or suspicious patterns of access, such as updates to security files that were not made by security staff, changes to sensitive system files that were not made by system programmers, modifications to production application programs that were not initiated by production control staff, revisions to production data that were completed by system programmers, or deviations from normal patterns of access to Fund and other District financial, payroll, personnel, and tax data. The District could develop such a program by (1) identifying sensitive system files, programs, and data files on its computer systems and networks, (2) using the audit trail capabilities of its security software to document both failed and successful access to these resources, (3) defining normal patterns of access activity, (4) analyzing audit trail information to identify and report on access patterns that differ significantly from defined normal patterns, (5) investigating these potential security violations, and (6) taking appropriate action to discipline perpetrators, repair damage, and remedy the control weaknesses that allowed improper access to occur. Although the District was maintaining a history log of access activity on the computer system that maintained Fund and other District financial information and was producing standard data set access violation reports, these reports were not targeted to specific actions and the District did not follow up to ensure that violations had been appropriately investigated. In addition, the District had not established a process to identify and investigate failed attempts to gain access to this computer system or suspicious patterns of successful access to sensitive data and resources on this system. 
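Steps (1) through (4) of the program outlined above amount to reducing a voluminous audit trail to a short exception list. A minimal sketch, with hypothetical file and role names rather than actual District resources:

```python
# Hypothetical sensitive resources (step 1) and expected roles (step 3)
SENSITIVE = {"SECURITY.RULES", "PAYROLL.CHECKS"}
NORMAL = {"SECURITY.RULES": {"security"},
          "PAYROLL.CHECKS": {"payroll"}}

def exceptions(audit_trail):
    """Step 4: report accesses to sensitive resources by unexpected roles."""
    return [(user, res) for user, role, res in audit_trail
            if res in SENSITIVE and role not in NORMAL.get(res, set())]

# Hypothetical (user, role, resource) audit-trail records (step 2)
trail = [
    ("sec01", "security", "SECURITY.RULES"),   # matches the normal pattern
    ("sys14", "sysprog",  "SECURITY.RULES"),   # deviation worth investigating
    ("pay02", "payroll",  "PAYROLL.CHECKS"),   # matches the normal pattern
]
print(exceptions(trail))  # prints [('sys14', 'SECURITY.RULES')]
```

Steps (5) and (6), investigation and corrective action, remain manual follow-up on whatever the exception report surfaces.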
OCFO officials told us that SHARE computer center staff had developed and tested programs to produce the types of targeted monitoring reports described above and plan to fully implement a program to routinely identify and investigate unusual or suspicious patterns of access to sensitive computer resources by March 31, 2001. In addition to the access controls described above, there are other important information system general controls that organizations should have in place to ensure the integrity and reliability of data. These controls include policies, procedures, and control techniques to physically protect sensitive computer resources and information, provide appropriate segregation of duties among computer personnel, prevent unauthorized changes to application programs, and ensure the continuation of computer processing operations in case of unexpected interruption. We found weaknesses in each of these areas. The following sections summarize these weaknesses. Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls involve restricting physical access to computer resources, usually by limiting access to the buildings and rooms where these resources are stored. In the District, physical access control measures, such as locks, guards, badges, and alarms (used alone or in combination), are vital to safeguarding critical financial and sensitive personnel information and computer operations from internal and external threats. However, we found weaknesses in physical security controls over computer systems at OCFO’s SHARE computer center, which processes Fund and other District financial, payroll, personnel, and tax applications, and network servers connected to the DPW network. Neither DPW nor OCFO had developed formal procedures for granting and periodically reviewing access to the computer resources they controlled. 
As a result, staff could be granted access or continue to have access to sensitive network and system computer areas even though their job responsibilities may not warrant this access. For example, we identified 60 District employees and contractors who had been granted access to OCFO’s SHARE computer center without evidence of formal authorization. Likewise, DPW did not have complete or accurate records of which employees were permitted access to the network server room. In addition, OCFO staff could not account for 6 of the 95 cards that permitted access to the SHARE computer center computer room. In addition, neither DPW nor OCFO was adequately controlling access by visitors, such as contractors, to sensitive computer areas. For example, we were able to enter and move about both DPW’s network server room and OCFO’s SHARE computer center, including sensitive areas, without providing identification, signing in, or being escorted. Consequently, employees or intruders with malicious intent might also be able to gain improper access to the SHARE computer center or DPW LAN and disrupt these operations. In October 2000, DPW officials told us that they had corrected the physical security weaknesses we identified. In November 2000, OCFO officials told us that they had developed procedures for controlling access to the computer center. Another fundamental technique for safeguarding programs and data is to segregate the duties and responsibilities of computer personnel to reduce the risk that errors or fraud will occur and go undetected. Incompatible duties that should be separated include application and systems programming, production control, database administration, computer operations, and data security. Once policies and job descriptions that support segregation of duties principles have been developed, it is also important to implement access controls to ensure that employees perform only compatible functions. 
The District had assigned incompatible duties to certain application and system programmers. For example, some of the 24 application programmers that developed computer programs for the District’s main financial system, SOAR, were also responsible for supporting its operation. To perform these incompatible functions, certain application programmers were granted access to SOAR production programs and data. Further, the District had implemented access controls in a manner that permitted the remaining application programmers, who were not responsible for supporting SOAR operations, to also access SOAR production programs and data—a practice that violates basic segregation of duties principles. Allowing application programmers, especially those who have a detailed understanding of the application, to also modify SOAR production programs and data increases the risk of unauthorized modifications, which could lead to improper payments. In addition, all of the 13 system programmers responsible for maintaining the computer system that processes Fund and other District financial, payroll, personnel, and tax applications were also assigned certain incompatible functions. Some system programmers were also responsible for security administration, while others were also responsible for production control or database administration. Moreover, although each of the 13 system programmers was only responsible for certain incompatible functions, all of the 13 system programmers were granted access privileges that would allow them to also perform security administration, production control, and database administration functions. Allowing system programmers the capability to modify financial and other sensitive data and programs without any compensating controls increases the risks of unauthorized modification of financial information and inappropriate disclosure of sensitive data. 
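Incompatible privilege combinations like those described above can be detected mechanically by checking each user's granted functions against pairs of duties that should not coexist. The pairs below are illustrative, not a complete segregation-of-duties matrix:

```python
# Pairs of functions that basic segregation-of-duties principles say
# one individual should not hold together (illustrative, not exhaustive).
INCOMPATIBLE = [
    ("system_programming", "security_administration"),
    ("system_programming", "production_control"),
    ("application_programming", "production_update"),
]

def sod_conflicts(granted):
    """Return the incompatible pairs present in one user's granted functions."""
    return [pair for pair in INCOMPATIBLE
            if pair[0] in granted and pair[1] in granted]

print(sod_conflicts({"system_programming", "security_administration"}))
```

Where a conflict cannot be eliminated, a compensating control, such as logging and reviewing the conflicting user's activity, would be needed instead.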
In addition, because these individuals had both system and security administrator privileges, they had the ability to eliminate any evidence of their activity in the system. Although District officials told us that they were aware of the potential problems associated with allowing incompatible computer duties to be performed by the same individual, the District had not implemented compensating controls, such as reviewing access activity, to mitigate increased risks. Until the District either restricts individuals from performing incompatible duties or implements compensating controls, Fund and other District financial and sensitive information will face increased risk of inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction, possibly occurring without detection. In November 2000, OCFO officials told us that they had limited the access of application programmers responsible for SOAR development to only read production programs and data. In addition, OCFO staff told us that system programming and security functions had been separated and that a special ID would be established to allow system programmers the access required to perform security functions. These activities would be logged and reviewed to ensure that only authorized activities are performed. It is also important to ensure that only authorized and fully tested application programs are placed in operation. To ensure that changes to application programs are needed, work as intended, and do not result in the loss of data and program integrity, these changes should be documented, authorized, tested, independently reviewed, and implemented by a third party. District policy did not require changes to its main financial system, SOAR, to (1) be approved or reviewed prior to implementation or (2) include guidelines for testing these changes. 
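A change-control requirement that every change be authorized, tested, and independently reviewed before implementation can be enforced by rejecting records with missing sign-offs. A sketch assuming a hypothetical change-record layout, not the actual SOAR change request form:

```python
# Required sign-offs on a change record (hypothetical field names)
REQUIRED = ("authorized_by", "test_results", "independent_review")

def missing_elements(change_record):
    """Return which required sign-offs a change record lacks."""
    return [field for field in REQUIRED if not change_record.get(field)]

record = {"change_id": "SOAR-014", "authorized_by": "manager",
          "test_results": "", "independent_review": None}
print(missing_elements(record))  # prints ['test_results', 'independent_review']
```

A record with any missing element would be held back from implementation until the gaps were filled, rather than implemented and documented after the fact.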
While SOAR application developers maintained a standardized change request form, these forms did not always include authorizing signatures or evidence of testing and independent review. For example, documentation for about 30 percent of the 26 changes that were made to correct problems with SOAR programs from October 1, 1999, through July 20, 2000, did not indicate that the change had been tested prior to implementation. In addition, documentation for almost 90 percent of these changes did not specify that an independent technical review had occurred. Further, the District had not established procedures for periodically reviewing SOAR programs to ensure that only authorized program changes had been implemented. Without adequate application change controls, the District faces increased risk that unauthorized or inadequately tested programs or modifications to existing programs could be introduced. OCFO officials told us that policies and procedures to ensure that changes to SOAR programs are authorized, tested, independently reviewed, and approved would be implemented by January 2001. In addition, OCFO’s policies will include a requirement to periodically review changes to SOAR programs to ensure that only authorized changes are made. An organization must take steps to ensure that it is adequately prepared to cope with a loss of operational capability due to earthquakes, fires, accidents, sabotage, or any other disruption. An essential element in preparing for such catastrophes is an up-to-date, detailed, and fully tested disaster recovery plan. Such a plan is critical for helping to ensure that information system operations and data, such as financial processing and related records, can be promptly restored in the event of disaster. None of the District organizations we visited had a complete and fully tested disaster recovery plan. For example, DPW had not developed a disaster recovery plan for its LAN. 
In addition, neither OCTO nor OCFO had developed comprehensive disaster recovery plans for the District WAN or the SHARE computer center, which processes Fund and other District financial systems. Specifically, these OCTO and OCFO disaster recovery plans did not establish disaster recovery teams with specific roles and responsibilities, specify requirements for testing the plan periodically, or institute a process for reviewing and updating the plan based on test results. OCFO’s disaster recovery plan for the SHARE computer center also did not address different types of risks, such as floods, winter storms, or interruptions in power or communications, that could affect the continuity of operations. Furthermore, neither OCTO nor OCFO had fully tested disaster recovery plans for the District WAN or the SHARE computer center, respectively. OCFO did test the recovery of system software at its SHARE computer center in December 1999, but this test did not cover the center’s critical applications or telecommunications. Until the District develops and fully tests comprehensive disaster recovery plans for the DPW LAN, the District WAN, and the SHARE computer center, it will not be assured that computer operations critical to the Fund and other District financial activities can be restored promptly in the event of a disaster or other unintended interruption. OCFO officials told us that they had developed a disaster recovery plan for the SHARE computer center, which will use the District’s Department of Human Resources’ computer center. They stated that this plan will be fully implemented by June 30, 2001. In addition, DPW officials stated that their staff would develop a comprehensive disaster recovery plan for the DPW LAN by April 1, 2002. 
A key reason for the District’s information system control problems was that it did not have a comprehensive computer security management program in place to ensure that effective controls were established and maintained and that computer security received adequate attention. Our study of security management best practices found that leading organizations manage their information security risks through an ongoing cycle of activities coordinated by a central focal point. This management process involves (1) assessing risk to determine computer security needs, (2) developing and implementing policies and controls that meet these needs, (3) promoting awareness to ensure that risks and responsibilities are understood, and (4) instituting an ongoing program of tests and evaluations to ensure that policies and controls are appropriate and effective. In contrast, the District had not adequately accomplished any of these objectives. The first key problem with the District was that it had not adequately established a central focal point to coordinate computer security management. Due to the interconnectivity of the District’s networks, coordination and guidance provided by a central focal point becomes even more important, since a compromise in a single system could impact all District agencies. According to District law, OCTO was created to (1) centralize responsibility for the District’s information technology investments and (2) develop and enforce policy directives and standards regarding information technology throughout the District government. However, no single District office was overseeing the architecture, operations, configuration, or security of the District’s networks and systems. For example, each of the District’s five data centers remains responsible for operating and securing its own computer environment without sufficient District-wide guidance or oversight. 
In addition, while OCTO manages and secures the District WAN, other functional units, such as DPW, still manage their own networks. Consequently, security roles and responsibilities were not clearly assigned, security management was not given adequate attention, and no organization was held accountable for security throughout the District. A second key area of computer security management is assessing risk to determine computer security needs. Risk assessments not only help management to determine which controls will most effectively mitigate risks, but also increase the awareness of risks and, thus, generate support for adopted policies and controls. In this regard, it is important for organizations to define a process, which can be adapted to different organizational units, to continually manage computer security risk. However, District policy did not require risk assessments or provide guidance for managing computer security risk on a continuing basis. Consequently, none of the District organizations we visited were adequately managing risk relating to computer security, as evidenced by the serious weaknesses described above. For example, DPW had not performed a risk assessment for its network. In addition, OCTO had not formally assessed computer security risks relating to the District WAN, which could affect all District agencies connected to this network. Further, OCFO was not routinely assessing and managing information security risks associated with its SHARE computer center, which processes Fund and other District financial, payroll, personnel, and tax systems. During the past year, the SHARE computer center had updated its computer hardware, upgraded its operating system software, and installed a new financial management system for the District. Although all of these events should have warranted a risk assessment, OCFO only performed an initial risk assessment for the new financial management system. 
A third key element of effective security program management is implementing computer security policies and controls that cover all aspects of an organization’s interconnected environment. Our study of security management practices at leading organizations found that current, comprehensive security policies, which cover all aspects of an organization’s interconnected environment, are important because written policies are the primary mechanism by which management communicates its views and requirements. We also reported that organizations should develop both high-level organizational policies, which emphasize fundamental requirements, and more detailed guidance or standards, which describe an approach for implementing policy. Although District law tasks OCTO with coordinating the development of information management plans, standards, systems, and procedures throughout the District government, OCTO had not yet established District-wide guidance for developing and implementing comprehensive computer security policies and controls. This, along with the fact that a central focal point had not been established to oversee computer security throughout the District, has contributed to unclear security roles and responsibilities. In one case, access to the District financial application had been removed for three terminated District employees, but access to the computer system that processes this and other District financial applications, which is maintained by another District organization, had not been disabled. Consequently, these terminated employees still had the opportunity to sabotage or impair other District financial operations. In addition, the District had not developed technical standards for implementing security software, maintaining operating system integrity, or controlling sensitive utilities. 
Such standards would not only help ensure that appropriate information system controls were established consistently throughout the District, but also facilitate periodic reviews of these controls. The establishment of appropriate information system controls was also hindered because security administration and system programming staff were not provided with adequate technical training. Specifically, OCFO security administration staff at the SHARE computer center had not received security awareness training and had only been provided minimal training on the security software used by the District. In addition, OCFO system programmers at the SHARE computer center had not received technical training on important types of system software, such as the tape management system. A fourth key area of security program management is promoting security awareness. Computer attacks and security breakdowns often occur because computer users fail to take appropriate security measures. For this reason, it is vital that employees who use computer systems in their day-to-day operations be aware of the importance and sensitivity of the information they handle as well as the business and legal reasons for maintaining its confidentiality and integrity. In accepting responsibility for security, employees should, for example, devise effective passwords, change them frequently, and protect them from disclosure. In addition, employees should help maintain physical security over their assigned areas. However, none of the District organizations we visited were adequately promoting security awareness to ensure that such risks and responsibilities were understood. Several of the computer security weaknesses we discuss in this report indicate that users were either unaware of or insensitive to the need for important information system controls, such as secure passwords. 
We also found little evidence that the District had convinced its employees that it was important to prevent unauthorized access to the SHARE computer center and other sensitive computer areas. As discussed above, we were able to bypass physical security measures and enter and move freely about both OCFO’s SHARE computer center and a DPW telecommunications room without detection or challenge. A fifth key element of effective security management is an ongoing program of tests and evaluations to ensure that computer security policies and controls continue to be appropriate and effective. This type of oversight is an essential aspect of security management because it (1) helps the organization take responsibility for its own security program and (2) can help identify and correct problems before they become major concerns. In addition, periodic assessments or reports on security activities can be a valuable means of identifying areas of noncompliance, reminding employees of their responsibilities, and demonstrating management’s commitment to the security program. Our study of security management best practices at leading organizations found that an effective control evaluation program includes processes for (1) monitoring compliance with established information system control policies and guidelines, (2) testing the effectiveness of information system controls, and (3) improving information system controls based on the results of these activities. None of the District organizations we visited had established such a program, which could have allowed the District to identify and correct the types of weaknesses discussed in this report. Until the District establishes a program to periodically evaluate the effectiveness of information system controls, it will not be able to ensure that its computer systems and data are adequately protected from unauthorized access. 
OCTO officials told us that they recognize the need for enhanced security and to this end, plan to implement a formal security management program by October 1, 2001. This program will include the key elements described in our study of security management best practices. Information system general controls are critical to the District’s ability to ensure the reliability of Fund and other District financial information and maintain the confidentiality of sensitive personnel and tax information. However, the District’s information system control problems placed sensitive personnel and tax information at risk of disclosure, critical financial operations at risk of disruption, and assets at risk of loss. A primary reason for the District’s information system control problems is that it did not have a comprehensive security management program. Comprehensive computer security management programs are appropriate for achieving an effective information system general control environment. Effective implementation of such a program provides for periodically assessing risks, implementing effective controls for restricting access based on job requirements and proactively reviewing access activities, communicating the established policies and controls to those who are responsible for their implementation, and, perhaps most important, evaluating the effectiveness of policies and controls to ensure that they remain appropriate and accomplish their intended purpose. District management stated that it has recognized the seriousness of the weaknesses we identified and expressed its commitment to improving information system controls. We recommend that you direct the Chief Financial Officer, Chief Technology Officer, and the Director of DPW, as appropriate, to take the following actions. 
Correct the specific access control weaknesses that are summarized in this report and detailed, along with our corresponding recommendations and the District’s corrective action plans, in a separate report designated for “Limited Official Use,” also issued today. Report to you, or your designee, periodically on progress in implementing the corrective action plans described in the separate report designated for “Limited Official Use.” We also recommend that you direct the Chief Technology Officer to ensure that an effective entitywide security management program, as described in this report and in our study of security management best practices at leading organizations, is developed and implemented. Such a program would include establishing a central focal point to manage an ongoing cycle of the following security management activities: assessing risk to determine computer security needs, developing and implementing policies and controls that meet these needs, promoting awareness to ensure that risks and responsibilities are understood, and instituting an ongoing program of tests and evaluations to ensure that policies and controls are appropriate and effective. In commenting on a draft of this report, the District’s Chief Technology Officer agreed with our findings and recommendations and stated that the District is giving the highest priority to correcting the information security weaknesses we identified. The District has developed an action plan to correct all security weaknesses by April 2002. Specifically, the District is making changes to its security software to reduce the risk of unauthorized access and to strengthen information system controls. In addition, the District plans to implement standard software and procedures across the appropriate computer platforms and to establish a team to address information security as part of normal business operations. 
OCTO also plans to conduct quarterly reviews to monitor the progress in implementing the corrective action plans associated with our recommendations. The District also stated that it recognized that the key to information security is a sound security management program. By October 2001, with OCTO as the central focal point, the District plans to implement a security management program that will include conducting risk assessments, developing and implementing security policies and procedures, promoting awareness, and testing and evaluating controls to ensure that they are effective. This report contains recommendations to you. The head of the District of Columbia Government is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations. You should send your statement to the Senate Committee on Governmental Affairs and the House Committee on Government Reform within 60 days of the date of this report. A written statement must also be sent to the House and Senate Committees on Appropriations with the District’s first request for appropriations made more than 60 days after the date of this report. We are sending copies of this report to Senator Robert C. Byrd, Senator Richard Durbin, Senator Kay Bailey Hutchison, Senator Joseph Lieberman, Senator Ted Stevens, Senator Fred Thompson, Representative Dan Burton, Representative Thomas M. Davis, Representative Ernest J. Istook, Representative James P. Moran, Representative Eleanor Holmes Norton, Representative David R. Obey, Representative Henry A. Waxman, and Representative C.W. Bill Young. We will also send copies to Kenneth R. Wykle, Administrator of the Federal Highway Administration; Natwar Gandhi, Chief Financial Officer of the District of Columbia; Charles Maddox, Inspector General of the District of Columbia; Deborah K. 
Nichols, District of Columbia Auditor; Leslie Hotaling, Interim Director of the Department of Public Works; Suzanne Peck, Chief Technology Officer; and Alice Rivlin, Chairman of the District of Columbia Financial Responsibility and Management Assistance Authority. If you have any questions or wish to discuss this report, please contact me at (202) 512-3317 or Dave Irvin at (214) 777-5716. Key contributors to this report are listed in appendix II. The following is GAO’s comment on the District of Columbia’s letter dated December 13, 2000. 1. Attachment A is included only in our report designated for “Limited Official Use.” In addition to the person named above, Lon Chin, Debra Conner, Edward Glagola, David Hayes, Sharon Kittrell, Jeffrey Knott, West Coile, Harold Lewis, Tracy Pierson, Norman Poage, and Charles Vrabel made key contributions to this report.
GAO reviewed information system general controls over the financial systems that process and account for the financial activities of the District of Columbia's Highway Trust Fund. GAO identified serious computer security weaknesses that place District information at risk of deliberate or inadvertent misuse. These general control problems affected the District's ability to (1) prevent or detect unauthorized changes to sensitive data and (2) control electronic and physical access to confidential information. The District's lack of a comprehensive computer security management program was the primary reason for its information system control problems.
The Department of Transportation has four offices that conduct automobile crash tests: three within NHTSA and one in the Federal Highway Administration (FHWA). The activities of two programs run by NHTSA are the focus of this report. NHTSA’s Office of Vehicle Safety Compliance performs a compliance testing program of 30-mile-per-hour full-frontal crashes of automobiles, light trucks, and vans into a fixed rigid barrier. This program was created under section 103 of the National Traffic and Motor Vehicle Safety Act of 1966, and it is designed to ensure that vehicles meet minimum safety requirements as specified in Federal Motor Vehicle Safety Standard No. 208, Occupant Crash Protection (FMVSS 208). Also under the authority of NHTSA is the New Car Assessment Program (NCAP), conducted by the Office of Market Incentives. This program, mandated under title II of the Motor Vehicle Information and Cost Savings Act of 1972, was created to provide information to consumers on the relative crashworthiness, or safety, of automobiles. This charge differs from the compliance test in that vehicles tested in NCAP are not required to meet specified safety standards, while the purpose of compliance tests is to ensure that vehicles meet a level of safety required by law. The NCAP test also differs from the compliance test in two important aspects: NCAP crashes its vehicles at 35 miles per hour, which translates to over one-third more energy than compliance tests, and NCAP engages all manual and automatic restraints, while the compliance test employs only passive restraints. By using all restraint systems, NCAP assesses the maximum crashworthiness of a vehicle in high-speed frontal crashes. In addition to the two programs described above, NHTSA’s Office of Crashworthiness Research conducts a variety of tests to study a wide range of individual safety issues that arise from specific crash configurations. 
FHWA conducts crash tests to study the interaction between automobiles and roadside obstacles and devices such as guard rails, telephone poles, and bridge abutments. To respond to your request, we examined data from tests conducted for compliance with FMVSS 208 (compliance tests) as well as those conducted under the New Car Assessment Program. We chose to focus on these programs because both conduct tests that are similarly configured, employ standardized procedures, and have been assessing vehicle crashworthiness over a period of years. The two other crash test programs run by DOT are largely research based and, although important, have different purposes from those of our study. Our analysis consisted of three parts: (1) an examination of trends over time in crash test results of both programs, (2) an assessment of the reliability of NCAP results, and (3) a review of the relationship between NCAP results and real-world traffic injuries and fatalities. We first reviewed the background, sample selection, and testing procedures of both NCAP and the compliance program. (See appendix I.) We then examined what it is that crash tests measure, as well as how well measurement devices used in crash tests simulate human biomechanics and physiological response by reviewing biomechanic, human tolerance, and automotive safety literature and by interviewing experts in those fields. (See appendixes II and III.) Next, we analyzed changes in crash test results by year for both the compliance program and NCAP. (See appendix IV.) To address the reliability of crash test results, that is, the degree to which consistent results are obtained through repeated trials, we examined research conducted by NHTSA and compared NCAP results with those obtained in crash tests conducted by manufacturers. (See appendix V.) 
Finally, we conducted analyses using two national databases that allowed us to relate real-world fatality rates for drivers with the predicted injury risks derived from NCAP results. (See appendix VI.) For this analysis, we used Poisson regressions to assess the relationship between fatality rates, derived from the Fatal Accident Reporting System and the R.L. Polk Vehicle Registration System, and the combined injury risk calculated from the NCAP measurements that assess the potential for skeletal injuries to the head and chest. Analyses were conducted for restrained drivers in one- and two-car frontal crashes. We did not include information from the compliance program in the analyses we conducted on either the reliability or the predictive validity of crash test results. In our assessment of the reliability of crash test results, we did not find sufficient data to compare the results of two or more trials of the same vehicle model. In the case of the predictive validity of crash test results, we did not use compliance test data for two reasons. First, the compliance program had conducted only 145 tests between 1987 and 1992. Second, the variation among compliance results was relatively narrow and scores tended to cluster far below the ceiling values for the compliance tests. These two factors resulted in a dataset that was insufficient for conducting detailed statistical analyses. We conducted our review in accordance with generally accepted government auditing standards. In NCAP crashes, nearly all cars now meet the head and chest injury standards of the compliance tests, even though NCAP crashes are 36 percent more violent than compliance crashes. The average probability of sustaining a serious injury in a 35 mile-per-hour crash as measured by NCAP has declined from over 0.5 in 1980 to less than 0.2 in 1993. (See figure 1.) Differences among the crashworthiness scores of vehicles tested in this program have experienced similar declines. 
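The Poisson-regression approach described above can be sketched as follows. The figures below are invented for illustration, not the actual FARS or Polk data; a statistical package would normally fit this model, but a small Newton solver keeps the sketch self-contained. The model is the standard count regression with a log link and the registration count as an exposure offset:

```python
import math

# Hypothetical data: for each vehicle model, the NCAP-derived combined
# injury risk (x), registered vehicle-years in 100,000s (exposure),
# and observed driver fatalities. Illustrative values only.
risk     = [0.10, 0.20, 0.30, 0.40, 0.50]
exposure = [120.0, 90.0, 80.0, 60.0, 50.0]
deaths   = [9, 13, 17, 19, 24]

def fit_poisson(x, e, y, iters=25):
    """Fit log(mu_i) = b0 + b1*x_i + log(e_i) by Newton's method.

    The Poisson log-likelihood is concave in (b0, b1), so Newton
    steps converge quickly for well-behaved data like this.
    """
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, ei, yi in zip(x, e, y):
            mu = ei * math.exp(b0 + b1 * xi)  # expected fatalities
            r = yi - mu                       # score residual
            g0 += r                           # gradient, intercept
            g1 += r * xi                      # gradient, slope
            h00 += mu                         # Fisher information terms
            h01 += mu * xi
            h11 += mu * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det     # solve 2x2 Newton system
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

b0, b1 = fit_poisson(risk, exposure, deaths)
print(f"intercept={b0:.3f}, risk coefficient={b1:.3f}")
```

A positive fitted coefficient on the risk term indicates that models with worse NCAP-derived injury risk have higher real-world fatality rates per vehicle-year, which is the relationship the analysis tests.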
The introduction of air bags has contributed significantly to this improvement. (For a complete discussion of the trends in crash test results, see appendix IV.) A causal linkage between improved crash test scores and declining highway fatalities cannot be asserted with certainty because of both the many variables involved in a crash and the increased emphasis on traffic accident and injury prevention over the past decade. Nonetheless, it seems reasonable to conclude that manufacturers’ successful efforts to improve their products’ performance in NHTSA crash tests, particularly in NCAP, have contributed to improved occupant protection in real-world crashes, although we were unable to quantify that contribution. These improvements to performance have derived from a variety of efforts, with two examples being modernized manufacturing techniques and an increased emphasis on safety systems and designs. In addition, in recent years, automotive designers have turned more to computer-based simulations to assist in the design of vehicles that meet crash test standards. Although we did not evaluate the state of the art in computer-based crash models, we learned from industry personnel that such modeling appears to accurately predict the results of actual crash tests. Indeed, one computer specialist informed us that the industry uses crash tests in part to validate their computer models. Although simulated crashes are costly, as they currently require access to supercomputers, they do allow the manufacturer to assess the crashworthiness of a vehicle in more trials, more quickly, and at impact points other than the front (or side) of a car. These benefits over actual crash testing permit crash forces upon an occupant to be identified in a time frame that offers immediate redesign implications. To determine whether the result of any test is reliable, consistent results must be obtained through repeated trials of a specified procedure. 
In the case of crash tests, this means that consistent results of repeated tests of a specific vehicle model are required. This is particularly crucial when comparing the safety ratings of different vehicles. Both the NCAP and compliance programs generally conduct only one trial of a specific vehicle model; thus, insufficient data exist to accurately define the reliability of crash test results. That is, the ability to predict with confidence the likelihood of a tested model’s receiving similar scores if tested again is low. We found only two sources of information on which to assess the reliability of crash test results: a study conducted by NHTSA in 1984, which examined the variations in test results of 12 consecutively manufactured Chevrolet Citations, and our own analysis of the differences between results for vehicle models tested in NCAP and the results for those vehicles in corresponding tests conducted by automobile manufacturers. Our analysis of the data derived from NHTSA’s 1984 study revealed wide variations in the head injury criterion (HIC) results, the measurement taken to assess potential skeletal head injuries. (See appendix V.) Although NHTSA ascribed the variation in results to a number of sources, including the test itself, it failed to discuss the implications of the combined effect of these sources on crash test results; namely, that even within a specific vehicle line, the result of one test may not be indicative of the model’s performance from trial to trial, and large differences in the resultant HIC may occur. We also examined the differences between the results of NCAP and manufacturers’ tests provided to us by NCAP officials. The tests conducted by the manufacturers essentially duplicated the NCAP test procedures. 
We compared the results of the two tests using the star rating system recently developed by NCAP, hypothesizing that if the manufacturer test were considered a second trial for a model line, its results should be consistent with the NCAP, or first trial. The star rating system ranks cars from 1 to 5 stars, with 5 stars being the best rating, or safest car, and 1 star being the worst rating, or least safe car. These ratings are based on the risks of serious injury for vehicles, which are calculated from the head injury criterion and chest acceleration scores from NCAP tests. (For a discussion of crash test measurements and the star system, see appendix II.) We found that in only about one-half of the paired comparisons would NCAP- and manufacturer-tested vehicles have received the same star rating. In 32 percent of the comparisons, the results of the second trial would have changed by 1 star, while in 8 percent of the cases, the ratings of the vehicles would have changed by 2 or more stars. When we compared the risks of serious injury (the base unit categorized into the five star ratings) derived from the manufacturer and NCAP data, we found that each star category was associated with a wide band in which the resultant risk scores of subsequent tests might fall. For example, the results of a second test of a vehicle rated as 4 stars by its first test could fall between 5 stars and 2 stars. The analyses described above are based on the only two sources of information we could find. The quantity of data in each analysis was not enough for us to fully quantify the reliability of crash test results; however, we were able to determine that NCAP scores, whether reported in raw HIC and chest acceleration scores or as categories of injury probability, have associated levels of imprecision and that seemingly large differences in crash test results may not necessarily reflect true differences in a vehicle’s safety potential. 
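The construction of the combined injury risk and its star banding, as described above, can be sketched as follows. The logistic coefficients and star cutpoints in this sketch are illustrative placeholders standing in for NHTSA's published injury-risk curves and rating bands, not the agency's actual values, and the combination step assumes the head and chest risks are independent:

```python
import math

def injury_probability(hic, chest_g,
                       head_coef=(5.02, 0.00351),
                       chest_coef=(5.55, 0.0693)):
    """Combined probability of serious injury from the head injury
    criterion (HIC) and chest acceleration (g). The logistic
    coefficients are illustrative placeholders, not NHTSA's
    published values."""
    a1, b1 = head_coef
    a2, b2 = chest_coef
    p_head = 1.0 / (1.0 + math.exp(a1 - b1 * hic))
    p_chest = 1.0 / (1.0 + math.exp(a2 - b2 * chest_g))
    # Combine the two risks assuming they are independent.
    return p_head + p_chest - p_head * p_chest

def star_rating(p):
    """Map a combined risk to a 1-5 star band (hypothetical cutpoints:
    5 stars at or below 10% risk, down to 1 star above 45%)."""
    for limit, stars in [(0.10, 5), (0.20, 4), (0.35, 3), (0.45, 2)]:
        if p <= limit:
            return stars
    return 1
```

Because the star bands are broad, two vehicles whose raw HIC and chest scores differ noticeably can land in the same band, while a modest shift between trials can move a vehicle across a band boundary; that sensitivity is what the paired NCAP-manufacturer comparison above probes.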
By not properly defining and publishing the degree of reliability, consumers may be misled into purchasing a vehicle purported to be more crashworthy than another when, in fact, it may be no more safe, or even less safe, than the comparison vehicle. Since NCAP crash tests are designed to simulate full-frontal collisions, we restricted our analysis to those types of crashes and found the results of NCAP crash tests are generally reflected in real-world fatality rates. That is, on the whole, a statistically significant relationship exists between real-world highway fatality rates associated with vehicles tested in the NCAP program and their scores in crash tests. However, we concluded that this relationship derives mainly from the high fatality rates of vehicles with the worst NCAP scores. When we divided vehicles into NCAP score quintiles—that is, placed the vehicles into one of five 20-percentile categories based on their location in the distribution of NCAP results—we found that the quintile with the worst NCAP scores (those vehicles in the highest 20-percentile category) had significantly higher fatality rates than the remaining 80 percent of NCAP-tested vehicles. The remaining four quintile categories, however, had associated fatality rates that were not significantly different from one another. (See figure 2 and appendix VI.) Over time, the mean risk of injury in frontal crashes, as measured by NHTSA crash tests, has declined and indeed has mirrored a similar trend in the annual number of highway fatalities. While we cannot state with certainty that NHTSA crash tests are a causal factor in improved crashworthiness, we believe that efforts on the part of automobile manufacturers to produce vehicles that score well on these tests have contributed to the improvement of the overall safety of vehicles. At the very least, the results of NCAP and compliance tests provide indications that the vehicle fleet, on the whole, has become safer over the past 15 years. 
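The quintile grouping used in the fatality-rate comparison above can be sketched as follows, with invented combined-risk scores standing in for the NCAP results:

```python
def quintile_labels(scores):
    """Assign each score a quintile 1 (lowest risk) through 5 (highest),
    based on its rank within the distribution of scores."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])
    labels = [0] * n
    for rank, i in enumerate(order):
        labels[i] = rank * 5 // n + 1  # ranks 0..n-1 split into 5 bands
    return labels

# Hypothetical combined-risk scores for ten tested vehicles.
risks = [0.12, 0.45, 0.22, 0.08, 0.31, 0.27, 0.51, 0.18, 0.39, 0.15]
print(quintile_labels(risks))  # → [1, 5, 3, 1, 4, 3, 5, 2, 4, 2]
```

The vehicles labeled 5 here correspond to the worst-scoring 20 percent, the group whose associated fatality rates stood apart from the rest in the analysis.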
These trends in the mean score of crash tests, however, do not necessarily suggest that individual vehicles have well-defined levels of safety, nor do they suggest that the relative rankings of two vehicles would be the same if subsequent trials were conducted. They also do not suggest that differing test results are reflected in data derived from real-world traffic collisions. Indeed, only the poorest performers in NCAP had associated fatality rates that were significantly different from those of other NCAP vehicles. On the basis of our findings, we make two recommendations to the Administrator of NHTSA. First, we recommend that information on NCAP reliability be updated and made available, in clear language, to the general public. Such an effort would require an update of the repeatability study the agency conducted in 1984 and could result not only in a better understanding of the reliability of crash tests for predicting injury risk, but also in discovering ways in which NHTSA can limit the error that derives from sources under its control. We also recommend that NHTSA explore the feasibility of alternative means of testing the crashworthiness of new vehicles. Computer simulations may provide one such alternative. It may be possible to better assess the safety potential of a vehicle through computer-based modeling, which allows more trials, more quickly, and can simulate impacts at all points of a vehicle. In addition, this rapidly emerging technology has the added capability of providing immediate insights into redesigning vehicles whose crashworthiness may not yet be optimal. We received written comments on a draft of this report from the Department of Transportation. The Department concurred with our recommendation that it update its information on NCAP reliability. We are concerned, however, that the agency might believe it has already complied with this recommendation by developing the star rating format. 
As noted above and explained in detail in appendix V, this new format does not resolve our questions concerning NCAP reliability. The Department interpreted our second recommendation as a recommendation to augment or replace “live” crash tests with computer simulations and did not concur with us. The Department cited concerns about the costs and predictive limitations of such simulations. We share these concerns, but we believe the Agency has misinterpreted the recommendation. We avoided recommending the adoption of any particular substitute for the current crash test procedures at this time. Rather, we urged the Agency to explore all possible means of reliably defining vehicle crashworthiness. Computer modeling is a potential alternative that deserves exploration and monitoring as the technology matures. Other alternatives could include extending testing programs to include side, rear, and frontal-offset impacts to gain a better understanding of the total safety of a vehicle or seeking greater sharing of crash test data developed by automotive manufacturers either through crash tests replicating NHTSA’s or through their individual component testing programs. The Department provided a number of other specific comments. They are reproduced in appendix VII, together with our response. We have also made modifications to the report as we deemed appropriate on the basis of these comments. After responding to our draft report, the Department also provided us with additional data relevant to NCAP reliability. The results of our analysis of these data can be found in appendix V. We are sending copies of this report to the Secretary of Transportation, the Administrator of the National Highway Traffic Safety Administration, and to other interested parties. We will also make copies available to others upon request. If you have any questions or would like additional information, please call me at (202) 512-3092. Major contributors to this report are listed in appendix VIII. 
The Department of Transportation has four offices that conduct automobile crash tests: three within NHTSA and one in the Federal Highway Administration. A compliance test program conducted by NHTSA’s Office of Vehicle Safety Compliance consists of full-frontal crashes of automobiles, light trucks, and vans into a fixed rigid barrier to ensure that vehicles meet certain minimum safety requirements. Also under the authority of NHTSA is the New Car Assessment Program, conducted by the Office of Market Incentives. NCAP tests are similar to compliance tests, but they are performed to provide consumer information on the relative crashworthiness of automobiles. The NHTSA Office of Crashworthiness Research conducts a variety of tests to study a wide range of individual safety issues that arise from specific crash configurations. Finally, FHWA conducts crash tests to study the interaction between automobiles and roadside obstacles and devices such as guard rails, telephone poles, and bridge abutments. In this study, we focused on the crash tests run under the compliance program and the New Car Assessment Program because both tests are similarly configured, employ standardized procedures, and have been assessing vehicle crashworthiness over a period of years. The tests conducted by the Office of Crashworthiness Research and those of FHWA, although important, have different purposes from those of our study and could not provide a quantity of data sufficient for us to assess relationships between test results and real-world performance. The stated purpose of FMVSS 208 is “to reduce the number of deaths of vehicle occupants, and the severity of injuries, by specifying vehicle crashworthiness requirements in terms of forces and accelerations measured on anthropomorphic dummies, and by specifying equipment requirements for active and passive systems.” These crash tests are conducted under the guidance of NHTSA’s Office of Vehicle Safety Compliance. 
The current compliance program relies, for the most part, on a certification process in which the manufacturer of a specific make and model vehicle states that the vehicle meets all safety requirements set forth in FMVSS 208. In addition, each year NHTSA selects a number of vehicles to test to ensure that the manufacturer’s certification is justified. The criteria used to determine which specific makes and models to test are based on whether a vehicle is in its first or second model year, whether safety features have been added or redesigned, and how many units are on the road. In selecting models for testing, NHTSA also includes any evidence of poor crashworthiness derived either from consumer complaints filed about specific models or from other crash test programs (in particular, NCAP). Through these criteria, NHTSA compiles a preliminary list of about 50 candidate vehicles for testing and requests information on crash test performance from the manufacturer of each candidate model to determine the final list of vehicles to be tested. Though they are under no obligation to do so, manufacturers will normally provide one or two sets of results from their tests of the model NHTSA specifies. NHTSA uses these data not only as an input for determining the final list of test vehicles, but also as a baseline with which to compare its own results. The compliance test consists of a full-frontal collision of a vehicle into a fixed rigid barrier at a velocity of 30 miles per hour. Anthropomorphic test dummies, fitted with instrumentation to measure forces and accelerations acting on the head, chest, and both femurs, are placed in the driver and front passenger seats. Only passive restraint systems—those that require no effort on the part of an occupant—are engaged. Examples of these are air bags and automatic seat belts. Seat belts that require active participation by the occupant are not used. 
The underlying assumption is that if a vehicle meets the standards for those occupants who do not make use of all available safety restraint systems, it will also meet the requirements for those who do. The test conditions further specify the forward placement of the seat, the angle of the seat back, the angle of the steering column (where the vehicle has tilt steering), and a number of other components. Some of these, such as adjustable backs for seats, are placed in the manufacturer’s nominal design riding position—that is, the position the manufacturer says is the proper one for the average adult male (5 feet 9 inches, 167 pounds). A second crash test program we studied is the New Car Assessment Program conducted by NHTSA’s Office of Market Incentives. This program was mandated under title II of the Motor Vehicle Information and Cost Savings Act of 1972 to provide consumers with an understanding of the relative crashworthiness of passenger motor vehicles. Since 1979, NCAP has conducted almost 500 crash tests of passenger cars, light trucks, and vans. From 1979 to 1986, NCAP was considered an indicant test for vehicle compliance with Federal Motor Vehicle Safety Standards 212, Windshield Mounting; 219, Windshield Zone Intrusion; and 301, Fuel System Integrity. That is, if a vehicle performed reasonably well on these tests, which required dynamic testing, then it would likely meet compliance test requirements because the NCAP test involves a more violent crash than the one required for the compliance test. If a vehicle performed poorly, the information would be transmitted to the Office of Vehicle Safety Compliance for testing its compliance with the safety standards. NCAP has not been an indicant program since the implementation of dynamic crash tests in the FMVSS 208 program in 1987; however, poor performance on the NCAP test typically leads to compliance testing of the same model. The NCAP crash test is generally similar to the compliance test. 
Both are full-frontal collisions into a fixed rigid barrier, and both use roughly the same criteria when determining which vehicles to test. However, three very important differences distinguish the two test programs. First, vehicles in the NCAP test are crashed at 35 miles per hour rather than 30 miles per hour, the velocity in the compliance test. This 5-mile-per-hour difference results in a 36-percent increase in the amount of kinetic energy in the system (KE = 1/2 mv^2). With the mass of the vehicle held constant, the additional energy in the NCAP test over the compliance test derives from the square of the velocity: (35 mph)^2 / (30 mph)^2 = 1.36. Second, all active as well as passive safety belts in the automobile are used in the NCAP test; that is, the test dummies are restrained by any manual seat belt furnished with the vehicle as well as any automatic belt or air bag. In the compliance test, as noted earlier, only passive restraints (automatic belts and air bags) are used. The third and foremost difference between the two programs is the underlying purpose of the tests. NCAP is a market-based program that disseminates information to consumers on the relative safety of passenger vehicles. There are no minimum allowable safety performance criteria that vehicles must meet, although NCAP collects the same measurements as the compliance test. Despite the fact that NCAP is not a compliance program, industry personnel have expressed the opinion that the NCAP test has become the de facto regulation. That is, failure to meet compliance levels on this more stringent test, which involves a more forceful collision than the official compliance test, could imply that a vehicle is unsafe. Currently, nearly all vehicles tested under NCAP meet the safety requirements specified in FMVSS 208. Both the compliance and NCAP tests use anthropomorphic test dummies to collect data related to injury potential by measuring accelerations and forces placed on an occupant’s head, chest, and upper leg. 
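The 36-percent figure follows directly from the kinetic-energy relation; a quick numerical check (a minimal sketch in which the 1,500-kg mass is an arbitrary illustration, since mass cancels out of the ratio):

```python
def kinetic_energy(mass_kg, velocity_mps):
    """Kinetic energy KE = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * velocity_mps ** 2

MPH_TO_MPS = 0.44704  # miles per hour to meters per second

# Ratio of NCAP (35 mph) to compliance (30 mph) crash energy for the
# same vehicle; the assumed 1,500-kg mass cancels out of the ratio.
ratio = kinetic_energy(1500, 35 * MPH_TO_MPS) / kinetic_energy(1500, 30 * MPH_TO_MPS)
percent_increase = (ratio - 1) * 100  # approximately 36 percent
```

Because the ratio depends only on the square of the two velocities, any assumed mass gives the same 36-percent result.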
Specific levels for each measure, established under FMVSS 208, represent upper-bound limits for compliance with vehicle safety requirements. These ceilings were designed to correspond to the level at which there is a one-in-six chance of an occupant’s sustaining an injury that poses a serious threat to life. The head injury criterion (HIC), the measure used in crash tests to assess potential head injury, was adopted by NHTSA on the basis of research conducted to establish the likelihood of skull fractures under different velocity changes. HIC is measured as a composite of the axial accelerations of the head (in three dimensions). Specifically, HIC is the product of (1) the 2.5 power of the average of the resultant head acceleration over a time interval of not more than 36 milliseconds and (2) that time interval. The equation for the function is

HIC = [1/(t2 - t1) * Integral(t1 to t2) a dt]^2.5 * (t2 - t1),

where a is the resultant head acceleration and t1 and t2 define the time interval. A HIC score of 1,000, the highest allowable score for achieving vehicle compliance, is associated with a one-in-six chance of sustaining a serious skull injury. For determining potential injury to the chest region, chest acceleration is measured in gravitational units (g’s), with a compliance ceiling of 60 g’s. A second chest measure, chest compression, is recorded only when the test uses the Hybrid III dummy, a choice made by the manufacturer of the test vehicle. If the Hybrid III is used in a compliance test and the vehicle exceeds the 3-inch maximum reduction distance allowed (the limit associated with major lacerations to the spleen or kidneys), the vehicle is considered not to be in compliance with FMVSS 208. The final measure taken in both the compliance and NCAP tests is the compressive force transmitted axially through the upper legs (femurs). The femur tolerance level of 2,250 pounds of force is based primarily on experimental impacts to the lower limbs and is associated with a one-in-six chance of sustaining a fracture to that bone. When the results of a compliance test exceed the limit for any of the measures, an investigation is conducted to determine reasons for the failure and is typically accompanied by a recall or remedy campaign. 
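The head injury criterion described above lends itself to a direct implementation. The sketch below is an illustrative calculation, not NHTSA's test software; the sampling interval and the example pulse are assumptions, while the 36-millisecond window and the 2.5 exponent come from the standard as described here:

```python
def hic(times, accels, max_window=0.036):
    """Head injury criterion: the maximum, over all intervals (t1, t2)
    no longer than max_window seconds, of
    [average resultant acceleration over the interval]^2.5 * (t2 - t1).
    times are in seconds; accels are resultant head accelerations in g's."""
    n = len(times)
    # Cumulative trapezoidal integral of acceleration over time.
    cum = [0.0]
    for k in range(1, n):
        cum.append(cum[-1] + (accels[k] + accels[k - 1]) / 2.0 * (times[k] - times[k - 1]))
    best = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            width = times[j] - times[i]
            if width > max_window:
                break  # longer intervals are not considered
            avg = (cum[j] - cum[i]) / width
            best = max(best, avg ** 2.5 * width)
    return best

# Hypothetical trace: a constant 60-g pulse lasting 40 ms, sampled every 0.1 ms.
t = [k * 0.0001 for k in range(401)]
a = [60.0] * 401
score = hic(t, a)
```

For this idealized sustained 60-g pulse, the score works out to about 1,000, the compliance ceiling.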
If a determination of noncompliance is made, the model being tested may not be sold in the United States. This differs from NCAP as its tests are not conducted to assess vehicle compliance with federal regulations, and therefore, no punitive actions may be taken by NHTSA should a vehicle exceed any of the limits. Table II.1 lists the four measurements made in both test programs and presents the maximum allowable scores under compliance testing for each measure. In 1978 (for the 1979 model year), NHTSA began testing about 30 vehicles per year through its New Car Assessment Program. While no manufacturer is required to exceed 30 mile-per-hour standards, the program, using a 35 mile-per-hour crash test, is designed to inform customers of the relative crashworthiness of an automobile. Traditionally, NCAP reported the actual HIC, chest acceleration, and femur load scores with a disclaimer that only vehicles within 500 pounds of each other could legitimately be compared. Also, NCAP would cite the compliance ceiling levels (1,000 HIC, 60-g chest acceleration, and 2,250-pound femur load) as representing a one-in-six chance of sustaining a severe injury. Despite NHTSA’s claim of overall success in providing information about how well or how poorly passenger vehicles protect their occupants in crashes, some critics argued that NCAP’s method of reporting test results left consumers confused. In response to fiscal year 1992 Senate Appropriations Committee requirements, NHTSA performed a user study and began implementing new methods of informing consumers of the comparative levels of the safety of passenger vehicles as measured by NCAP. This new method, a star chart rating system, is designed to provide consumers with a quick, simplified, single point of comparison to evaluate vehicles in the NCAP test. 
Based upon analyses of a variety of accident injury studies, NHTSA developed a scale, known as the “Level of Protection Scale,” that relates the probability of sustaining an injury to the level of protection a vehicle provides its occupants from receiving such an injury. This scale forms the basis of NHTSA’s star chart method for releasing NCAP test results to the public. The star chart, which NHTSA began using in December 1993, reports a range of 1 to 5 stars, with 5 stars indicating the best crash protection for vehicles within the same weight class. The number of stars a vehicle may be rated is derived from the injury probabilities associated with the HIC and chest g scores obtained in the crash tests. The head injury probability is calculated using the following formula:

Phead = [1 + exp(5.02 - 0.00351 x HIC)]^-1

The chest injury probability, Pchest, is calculated from the chest acceleration score with an analogous logistic formula, and the two probabilities are combined as

Pcombined = Phead + Pchest - (Phead x Pchest)

A vehicle is then assigned a star rating based on its combined injury risk, with the specific number of stars determined by the range in which the combined injury risk lies. The ranges for each star rating are shown in table II.2.

The specifications for anthropomorphic test dummies in 49 C.F.R. part 572 “are intended to describe a measuring tool with sufficient precision to give repetitive and correlative results under similar test conditions and to reflect adequately the protective performance of a vehicle or item of motor vehicle equipment with respect to human occupants.” In this appendix, we discuss the characteristics and instrumentation of the Hybrid II and Hybrid III 50th-percentile anthropomorphic test dummies, under the provisions of the NHTSA standards pertaining to occupant crash protection. We also provide a comparison of the two dummy types’ performance in the NCAP and compliance test programs. Finally, we summarize the 1993 decision to standardize the test dummy, requiring the mandatory use of the Hybrid III in all NHTSA crash test programs beginning in 1997. 
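The logistic head-injury formula and the combination rule given earlier for the star-chart ratings can be checked numerically. In this sketch the chest-injury probability is supplied as an input value rather than computed, since only its combination rule is reproduced here; the 0.10 chest risk in the example is an arbitrary illustration:

```python
import math

def p_head(hic_score):
    """Probability of serious head injury (NCAP logistic formula)."""
    return 1.0 / (1.0 + math.exp(5.02 - 0.00351 * hic_score))

def p_combined(p_head_value, p_chest_value):
    """Probability that at least one of the two serious injuries occurs."""
    return p_head_value + p_chest_value - p_head_value * p_chest_value

# A 1,000-HIC score maps to roughly an 18-percent head-injury risk,
# in line with the one-in-six chance cited for the compliance ceiling.
risk_at_ceiling = p_head(1000.0)
combined = p_combined(risk_at_ceiling, 0.10)  # 0.10 is an illustrative chest risk
```

The combination rule is simply the probability of the union of two independent injury events, which is why the joint term is subtracted.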
In both the compliance and NCAP tests, the manufacturer of the vehicle being tested has the option to choose which type of dummy will be used. While both dummies are designed to represent the physical characteristics of the average adult male, important differences between them exist. Despite the differences, the requirements for vehicular conformance to FMVSS 208 are not different for the two instruments, with the exception of the chest compression criterion, which applies only when the Hybrid III dummy is used. Part 572 of the Federal Motor Vehicle Safety Standards specifies the types of anthropomorphic dummies to be used in the FMVSS 208 compliance test. Currently, two specific types of anthropomorphic test dummies may be used in a compliance crash test: the Hybrid II and the Hybrid III. The Hybrid II 50th-percentile male test dummy, specified in subpart B of 49 C.F.R. part 572 since 1973, is 5 feet 9 inches tall and weighs approximately 164 pounds; until 1986, this dummy was used when determining compliance with FMVSS 208. In 1986, 49 C.F.R. parts 571 and 572 were amended to adopt the Hybrid III 50th-percentile dummy as an alternative to the Hybrid II for FMVSS testing. This gave manufacturers the option of using either the Hybrid II or Hybrid III test dummy as the means of determining a vehicle’s conformance to NHTSA’s performance requirements. Like its predecessor, the Hybrid III is 5 feet 9 inches tall but weighs slightly more (167 pounds). Also, like the Hybrid II, each Hybrid III used in a compliance test must meet the specifications and performance criteria of part 572 before and after each vehicle test in order to be an acceptable compliance tool. The Hybrid II and Hybrid III use the same instrumentation in the head, chest, and femurs. (See figure III.1.) However, according to General Motors, developer of the Hybrid III, its 50th-percentile male dummy was designed to improve on Hybrid II technology and biofidelity. 
Most experts regard the Hybrid III test dummy as more biofidelic than the Hybrid II, having a more human-like seated posture, as well as head, neck, chest, and lumbar spine designs. The Hybrid III’s responses to crash conditions more closely approximate the motions associated with human anatomy in crash situations and, therefore, more accurately evaluate injury risks. For example, the improved flexibility of the Hybrid III’s neck over the Hybrid II allows researchers a greater ability to assess the injury potential of whipping motions. In addition to the greater biofidelity, the experts we interviewed stated that the Hybrid III is more sophisticated technologically than the Hybrid II because it has more instrumentation for measuring potential injuries. Specifically, the Hybrid III is capable of measuring nearly four times as many forces and accelerations throughout the body as the Hybrid II. For example, not only does the Hybrid III measure injury potential to the skeletal structures, but it can also determine injury potential to the soft tissues in the upper thorax through the chest compression measure. Further, the Hybrid III has accelerometers and load cells placed in the neck and lower legs that can measure the potential for injuries caused to those anatomical areas. To date, no criteria have been established for meeting compliance for the additional measures other than chest compression. However, the measures do provide DOT with additional information on the potential physiological responses associated with vehicular crashes. Currently, the determination of which dummy to use in a test is made by the manufacturer of the vehicle being tested. Through 1993, manufacturers chose to use the Hybrid III in 36 of the 133 tests of passenger cars in the compliance program and in 30 of the 86 tests of passenger cars in NCAP since the dummy became available for use in the two programs (1988 and 1990, respectively). 
One expert hypothesized that the reason so few compliance tests involve Hybrid III dummies is that they tend to receive higher HIC scores, especially in noncontact situations, and that there is no guarantee that a car designed around the Hybrid II will pass a test using a Hybrid III. The differences between the dummies used in NHTSA’s tests, described above, led us to compare the driver-side HIC and chest acceleration scores from passenger-car compliance and NCAP tests in an attempt to quantify the potential effects on test reliability such differences could create. In this analysis, we controlled for the presence of a driver-side air bag in the car. In general, we found the Hybrid III dummy scores were lower than the Hybrid II scores, but that the presence of air bags strongly affects the relative performance of the two dummy types. (See tables III.1 and III.2.)

[Table III.1: HIC and chest acceleration (g’s) scores for passenger cars tested from 1990 to 1993, with and without air bags.]

Specifically, we found the following:

- Vehicles tested with Hybrid III dummies had lower HIC and chest acceleration scores than those tested with Hybrid II dummies in both compliance and NCAP tests. In NCAP tests, Hybrid III dummies averaged 156 HIC and 3.6 g’s less than Hybrid II dummies. (See table III.1.) Similarly, in compliance tests, the mean head injury criterion score for cars tested with Hybrid III dummies was 97 HIC lower than the score for tests that used the Hybrid II, while the mean chest acceleration score was about 2 g’s less for test cars that used the Hybrid III. (See table III.2.)

- In both the NCAP and compliance tests, Hybrid III dummies had significantly lower HIC scores than Hybrid II dummies in vehicles equipped with air bags. In vehicles without air bags, Hybrid IIIs had significantly higher HIC scores than Hybrid IIs. The difference could occur because of the greater flexibility of the Hybrid III’s neck.

- In both the NCAP and compliance tests, Hybrid III dummies had significantly lower chest acceleration results than Hybrid II dummies in cars with air bags. There was little difference between the chest scores of Hybrid III and Hybrid II dummies in cars without air bags.

[Table III.2: HIC and chest acceleration (g’s) scores for passenger cars tested from 1988 to 1993, with and without air bags.]

Manufacturers have been reluctant to use Hybrid III dummies for tests of cars not equipped with air bags because these dummies tend to produce higher HIC results, especially in cases where the dummy’s head did not contact the interior components of the car. (Only 18 percent of compliance tests of cars without air bags from 1988 to 1993 and 18 percent of NCAP tests of cars without air bags from 1990 to 1993 used the Hybrid III.) Industry representatives stated that because HIC was developed to determine potential skull injuries—a condition that will not occur if the head does not contact the vehicle’s interior—it should be applied only to cases in which the head actually makes contact. Although they agree that brain injuries can occur when the head does not contact the interior, they contend that the instrumentation in the dummy’s head does not measure the potential for these types of injuries. Therefore, they conclude that in cases of “noncontact” HICs, the results are meaningless and misleading. Thus, rather than risk a spuriously higher HIC score in either the compliance or NCAP tests, manufacturers have tended to use the Hybrid II for vehicles that do not have air bags. While these complex interactions of dummy type, safety equipment, and test conditions can be explained by biomechanical differences between the Hybrid II and Hybrid III dummies, they may also be explained by differences in the test vehicles themselves. 
As we noted earlier, the manufacturers specify which dummy type to use in NHTSA crash tests, and one may assume that, in the absence of other motivations, they would choose the dummy they anticipate will yield more favorable results. As noted above, each manufacturer undergoing a compliance test may specify either the Hybrid II or the Hybrid III test device. But in recent years, NHTSA has become more convinced that using the Hybrid III will help ensure that all new vehicles are designed with the benefit of the most human-like test dummy available. NHTSA regards the Hybrid III as more representative of human responses in frontal crashes, and it can monitor more types of potential injuries as well. Further, NHTSA has come to recognize that exclusive use of the Hybrid III for compliance testing under FMVSS 208 would result in greater comparability of test results among vehicles produced by different manufacturers. For these reasons, the agency recently issued a Notice of Final Rule that requires the exclusive use of the Hybrid III for all compliance testing under standard no. 208. The final rule takes effect September 1, 1997, to coincide with the date at which all passenger cars and 80 percent of light trucks must be equipped with air bags and all light trucks must have passive (automatic) restraint systems. NCAP will also switch to exclusive use of the Hybrid III test dummy beginning with the 1996 model year. These modifications to the two programs will create a greater degree of standardization of crash tests, thereby, in NHTSA’s view, increasing the “comparability of test results among vehicles produced by different manufacturers, particularly those that now use different dummy types.” We conducted analyses of the trends in NCAP test scores from 1979 through 1993 and found that scores have both improved and become more uniform during the period. 
We have expressed NCAP results in terms of the combined injury risk scores that NCAP now derives from its HIC and chest scores to produce its new “star system” ratings. Figure IV.1 shows the mean injury risks for the driver position, by year, for model years 1979 through 1993. The mean combined injury risk decreased significantly from a high of 0.507 in 1980 to a low of 0.190 in 1993. The figure also indicates that the significant reduction in the combined risk derives from a significant and consistent decrease in the mean head injury risk probability. While the mean chest injury risk declined significantly during the period, it has been relatively stable since 1983. The variation between the individual test results has also decreased over the years. For example, NCAP head injury criterion scores for vehicles in 1979 ranged from 521 HIC to 4,513 HIC, whereas in 1993, the range was between 273 HIC and 1,459 HIC. One reason for the decline in the mean combined injury risk and its accompanying variation over time is the increasingly widespread installation of air bags. Cars equipped with air bags had significantly lower head injury risk probabilities than cars without air bags. (See figure IV.2.) Since the first NCAP test of cars equipped with air bags in 1987, these vehicles have scored an average head injury risk of 0.063, while cars without air bags have averaged 0.216. There is little difference, however, between the mean chest injury risks for passenger cars equipped with air bags (0.108) and those that did not employ this type of restraint (0.120). (See figure IV.3.) Given the relatively flat chest injury risk shown in figure IV.1, it appears that this risk factor, regardless of the type of restraint, has contributed little to the declining trend for the combined injury risk. We also conducted similar analyses for passenger cars tested in the compliance program. 
Despite fluctuations from year to year, the combined injury risk did not change significantly from 1987 to 1993. (See figure IV.4.)

[Figure IV.4: all passenger cars; does not include light trucks and vans.]

During the same period a steady, though not statistically significant, decline in the mean head injury risk was offset by a significant increase in the mean chest injury risk. These opposing trends are associated with the increased installation of air bags, which are associated with lower head injury risk probabilities and higher chest injury risk probabilities for compliance tests. (See figures IV.5 and IV.6.) The contrasting chest injury risk results between NCAP and compliance programs may have occurred because of the differences in the configuration of the two tests—largely from the determination of which restraint systems are used. In an NCAP crash that makes use of all available passive and manual restraint systems, the seat belt absorbs the dummy’s kinetic energy over a period that is gradual for a crash event, before the dummy contacts the air bag. However, cars with air bags are not required to have automatic seat belts. And since manual seat belts are not engaged in compliance tests, the dummy in the driver position is not likely to be restrained by a safety belt in compliance tests of cars with air bags. The dummy, therefore, is likely to move forward without a reduction in its kinetic energy, resulting in a more forceful collision with the air bag than if a seat belt were also in use. Over time, as more of the vehicles tested in the compliance program came equipped with air bags, the mean compliance chest score increased. This is not to say that an air bag-equipped vehicle is less safe than one that does not have an air bag. Indeed, the chest g results of cars with air bags may not be directly comparable to those without the devices because the distribution of the force loading on the chest is different for air bags than for safety belts. 
Air bags distribute the load caused by chest contact across a larger surface area than safety belts. Nevertheless, the higher chest g result for an air bag-equipped vehicle is consistent with the view held by many traffic safety experts that safety belts alone (that is, without air bags) are much more effective than air bags alone (that is, without safety belts). The decreasing mean crash test results parallel a similar downward trend in annual fatalities, and part of that latter trend can rightfully be attributed to NHTSA crash tests. In discussions with industry representatives, we found that automobile manufacturers attempt to design vehicles to meet compliance levels for frontal collisions in the NCAP test, as well as to ensure that all other safety criteria are met. In addition to “live” crash tests, manufacturers use computer models of frontal collisions, rear and side impacts, and roof crush to simulate NHTSA crash tests and to ensure that the cars meet NHTSA standards. The types of simulations range from models of specific components of the vehicle to nonlinear finite-element models, which incorporate all specifications of the automobile and can predict interactions between the car and its occupants during a collision. These simulations allow manufacturers to gain insight into the deformation of the vehicle, likely intrusions into the occupant compartment, and the force loads generated by various structural components. Though computer-simulated crashes are expensive because they generally require access to a supercomputer, they do allow manufacturers to gain knowledge of how the car will perform and how to correct problems before building prototypes. They are also much less costly and less time-consuming than building and crashing prototypes. 
In addition, one simulation expert stated that the results of finite-element simulations generally reflect the results obtained in actual crash tests and that the industry uses crash tests in part to validate its computer-based crash models. Our analyses of the trends in NCAP test results have shown that (1) injury risk probabilities have declined over time and (2) the variation between test results has lessened over time. (See appendix IV.) It would seem that cars have become more crashworthy—at least as measured by NCAP—and that this improved crashworthiness is more uniformly distributed across the passenger car fleet. Indeed, in 1979 the combined injury risk for NCAP-tested vehicles ranged from 0.106 to 1.0 (rounded), whereas in 1993, the ends of the distribution ranged from 0.096 to 0.581. Despite the decreased variation in test results, however, the difference between the highest and lowest risk probabilities is still substantial. This variation is open to two quite different interpretations. It may indicate the sensitivity of crash tests to real differences between vehicle models, or it may reflect the imprecision of the test scores. In classical measurement theory, reliability is defined as the repeatability of test results. The reliability of crash tests would be estimated by comparing the results from repeated crash tests of the same model vehicle. On only one occasion has NHTSA attempted to determine whether NCAP crash test results are, in fact, reproducible by crashing a single model on multiple occasions. In this study, 12 consecutively manufactured 1982 Chevrolet Citations were crash-tested by three test facilities, with each facility testing four vehicles, in an attempt “to quantify the degree of variation, as well as develop generalized statistical conclusions about test repeatability.” The mean HIC score for the 12 tests was 685, with the scores ranging from 495 HIC to 954 HIC. 
NHTSA identified several sources of variation in results derived from the test procedure, as well as from the testing facilities, the test instrumentation, the test dummy used, and the individual vehicles. NHTSA could not quantify the amount of variation attributable to each of these five areas because of the number of possible sources of error within each. Although the amount of variation that can be attributed to any of the sources of error is incalculable, the confounding interactions of accumulated error lead to questions about the reliability of NCAP results. The variation between different units was artificially constrained by selecting 12 consecutively manufactured Chevrolet Citations, yet the head injury results still had a range of 459 HIC. This variation among the HIC scores implies that two scores that differ by less than 219 are not, in statistical terms, significantly different and that any score between 781 HIC and 1,219 HIC is not significantly different from 1,000. Table V.1 illustrates how this level of reliability could affect the interpretation of other crash test scores. The table displays the mean HIC score from the 12 Citation crash tests and the HIC scores of four other vehicles of similar weight. It also indicates whether the NCAP HIC scores are significantly different from (1) the mean Citation score and (2) the putative ceiling of 1,000. The HIC scores received by all the vehicles except the 1990 Lexus are significantly lower than 1,000. However, there is no statistically significant difference between the mean HIC score received by the Citation and three of the four other vehicles.

[Table V.1: HIC scores and curb weights (pounds) from NHTSA’s 12 1982 Chevrolet Citation tests and from NCAP results for vehicles of similar weight.]

After reviewing a draft of this report, NHTSA provided us with a second set of data that could shed additional light on the reliability of NCAP scores. 
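Before turning to that second data set, the 219-HIC repeatability band implied by the Citation tests can be expressed as a simple decision rule. This is a sketch of the interpretation described above, not a NHTSA procedure; the half-width is the value reported from the Citation analysis:

```python
HALF_WIDTH = 219  # repeatability half-width implied by the Citation tests

def differ_significantly(hic_a, hic_b, half_width=HALF_WIDTH):
    """Two HIC scores are statistically distinguishable only if they
    differ by at least the repeatability half-width."""
    return abs(hic_a - hic_b) >= half_width

def versus_ceiling(hic_score, ceiling=1000, half_width=HALF_WIDTH):
    """Classify a score against the 1,000-HIC compliance ceiling."""
    if hic_score < ceiling - half_width:
        return "significantly below"
    if hic_score > ceiling + half_width:
        return "significantly above"
    return "indistinguishable from ceiling"
```

Under this rule, the full 459-HIC range observed among the 12 Citations counts as a significant difference, while a score as high as 954 cannot be distinguished from the 1,000 ceiling.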
These data represent the results of crash tests of model-year 1991 through 1994 vehicles conducted by automobile manufacturers in tests that essentially duplicate the NCAP test conditions. The data were voluntarily submitted to NHTSA before planned NCAP tests. We compared the manufacturer scores with those obtained from NCAP after translating them into the single injury probability score that serves as the basis for NHTSA’s recently introduced star rating system. (See appendix II.) We found that a statistically significant first-order correlation exists (r = .72) between the two sets of injury risk probabilities. We then compared the distributions of star ratings derived from NCAP and manufacturers’ tests. Table V.2 compares the star ratings for the driver position, and table V.3, for the passenger position. If agreement between the two sets of tests had been perfect, all events in the tables would have fallen on the diagonals from upper left to lower right. In actuality, star ratings are the same for approximately one-half of the vehicle models tested (55 percent for the driver position and 45 percent for the passenger position). Differences of one or more stars exist between manufacturer and NCAP ratings for about one-half of the tests; 8 percent of the vehicle models have differences of two or more stars. As appendix II explains, each of NHTSA’s star ratings represents a range of injury probability. For example, a rating of 4 stars indicates that in a crash situation similar to that tested by NHTSA the probability of serious injury to an individual is between 1 in 10 and 2 in 10. The solid bars in figure V.1 depict these probability ranges for each star rating. The lines attached to the bars represent “confidence intervals” that we estimated from the standard deviation of the absolute difference between the combined injury risks for drivers derived from manufacturer and NCAP tests. 
These confidence intervals represent the estimated range of injury probability within which a vehicle with a nominal rating could be expected to vary if tested again. For example, a 4-star rating could be associated with a vehicle with a “true” injury probability between zero and 0.363. This range overlaps the confidence intervals associated with the 5- and 2-star ratings. Our estimates of the reliability of NCAP crash tests are based on the only two sources of relevant information we are aware of: the repeated crashes of the 1982 Chevrolet Citation, and a comparison of manufacturer and NCAP test scores from 1991 to 1994. Neither of these sources provides ideal information for precisely quantifying the measurement error associated with NCAP scores. We do not know how well the results of the Citation experiment can be applied to vehicles manufactured and crash-tested 10 years later. While the manufacturer-NCAP comparison applies to a large number of late-model cars, we cannot be sure how well the manufacturers succeeded in replicating NCAP crash test conditions in each case or to what extent the results from other manufacturer tests varied from the ones reported to NHTSA. Nevertheless, both analyses support the same conclusion that NCAP scores, whether reported in raw HIC and chest acceleration scores or as categories of injury probability, have associated levels of imprecision. As a result, substantial differences in scores between two test results (100-200 HIC, or a 1-star—or possibly 2-star—rating difference) may not represent true differences in crashworthiness. The overall purpose of both the compliance and NCAP crash tests is to determine the crashworthiness, or safety, of passenger vehicles. This implies, therefore, that a relationship exists between the results of crash tests and real-world injuries and fatalities. To examine this issue, we conducted two analyses comparing results derived from NCAP tests to those from national accident databases. 
Specifically, our analysis compared NCAP results with traffic injury and fatality information from the National Accident Sampling System (NASS) and the Fatal Accident Reporting System (FARS). This appendix details the methodologies and results of both analyses. Data from the National Accident Sampling System for 1988 through 1991 were combined to determine whether the results of New Car Assessment Program tests are good predictors of serious injuries and fatalities in real-world automobile crashes. The NASS is an annual sample of police-reported accidents involving passenger cars, light trucks, and vans that had to be towed because they were damaged. The NASS year corresponds to the calendar year rather than the automobile industry’s model year, and emphasis is placed on vehicles from the 5 most recent model years. We chose this data system for two reasons: (1) it is a national database that contains information on all types of automobile collisions, and (2) it is the only national database that reports a vehicle’s change in velocity (delta v)—the best available indicator of accident severity—resulting from the collision. We reduced the NASS data sets to single-vehicle and two-vehicle accidents and then combined them into one data set, yielding a total of 14,253 vehicles. We then matched the results of the NCAP crash tests to the vehicles in our NASS file. NCAP results from 1983 through 1992 were chosen for the analysis for two reasons: (1) we assumed that crash test results are applicable for a limited time (as cars age, their crashworthiness may decrease owing to wear and tear), and (2) NASS data were available for the calendar years 1988 through 1991. Because NASS emphasizes collisions involving vehicles from the 5 most recent model years, we chose 5 years as the period after which a test score no longer applies; therefore, NCAP scores from 1983 were the earliest we included in the analysis.
In addition, each NASS year has a half-year’s data from the following model year, as the model year usually begins in late summer. Therefore, the 1991 NASS had some accidents involving 1992 model-year automobiles. The NCAP results were matched to the NASS data set on the following criteria: the make of the vehicle (that is, the manufacturer), the model, the model year, and the body type (sedan, convertible, and so forth). In cases in which a specific make, model, model year, and body type were tested on more than one occasion, only the first test was used. Results for models with corporate twins (vehicles with platforms identical to one another but sold under different model names—for example, Ford Taurus and Mercury Sable) were projected to the twins. The resultant data set for our analyses contained 1,985 cases. When weighted by NASS sampling weights, these represented more than 9 million accidents. We conducted logistic regression analyses to determine whether a relationship exists between serious injuries and fatalities in actual automobile collisions and results from crash tests conducted for NCAP. We analyzed crashes in which damage to the right front, left front, or full front of the vehicle occurred. Single-car and two-car crashes were examined separately. We conducted two sets of analyses: one on the unweighted sample in our dataset and a second, which incorporated the NASS sampling weights. Our analyses of the relationship between real-world traffic injuries and fatalities and NCAP injury risk probabilities were limited to drivers of passenger cars (two-door and four-door sedans, coupes, hatchbacks, and convertibles). We restricted our analyses to “restrained” drivers—that is, drivers who made proper use of either a manual or an automatic seat belt or whose air bag deployed during the crash. 
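The matching just described, keyed on make, model, model year, and body type, with corporate-twin projection and a 5-model-year applicability window, can be sketched as follows. The vehicle data, score, and function names here are invented for illustration.

```python
# Hypothetical NCAP lookup keyed on (make, model, model_year, body_type);
# the injury risk score shown is invented.
ncap_results = {
    ("ford", "taurus", 1988, "sedan"): 0.18,
}

# Corporate twins receive the tested platform's score.
twins = {("mercury", "sable"): ("ford", "taurus")}

def match_score(make, model, model_year, body_type, crash_year):
    # Project corporate twins onto the tested model line.
    make, model = twins.get((make, model), (make, model))
    score = ncap_results.get((make, model, model_year, body_type))
    if score is None:
        return None
    # A test score is assumed to no longer apply after 5 model years.
    if crash_year - model_year > 5:
        return None
    return score

match_score("mercury", "sable", 1988, "sedan", 1991)  # 0.18, via its twin
```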
The dependent variable used in the analyses was constructed from NASS injury codes to represent whether the driver of an NCAP passenger car involved in a crash either died or was hospitalized for at least 1 day specifically because of the crash. This was coded as a dichotomous variable, with those who died or were hospitalized receiving a 1 and all other nonmissing values receiving a zero. The independent variable of interest was the combined injury risk score associated with specific vehicle models as derived from the HIC and chest acceleration scores from NCAP tests. (See appendix II.) However, because characteristics of the driver and the vehicle and, most importantly, the severity of the crash (as measured by the total change in velocity, or delta v) are associated with the likelihood of injuries and fatalities, we included occupant characteristics (age, gender), vehicle characteristics (curb weight and, in two-car crashes, the weight of the other vehicle), and crash severity (delta v) in our logistic regression models. Tables VI.1 and VI.2 present the results of our analyses of the unweighted sample for one- and two-car crashes. The predictive power of delta v dominates both models, but the driver’s age and the car’s weight also appear as significant predictors of injury in two-car crashes. In these crashes, older drivers and drivers of lighter cars were more likely to suffer injury or death. In neither model was the NCAP injury risk significantly related to hospitalization or death. Table VI.1 legend: Injrisk = driver injury risk; Age = age of driver; Gender = gender of driver; Curbwgt = vehicle’s curb weight; Dvtotal = total change in velocity (mph); the analysis represents 46 restrained drivers. Table VI.2 legend: Injrisk = driver injury risk; Age = age of driver; Gender = gender of driver; Curbwgt = vehicle’s curb weight; Othvehwgt = weight of other vehicle; Dvtotal = total change in velocity (mph); collisions with vehicles weighing less than 10,000 pounds.
The analysis represents 131 restrained drivers. Tables VI.3 and VI.4 present the findings from the weighted sample and show results very similar to those from the unweighted sample. The strongest predictor remains crash severity, and in two-car crashes, driver age is related to collision outcomes. The weighted sample, however, yields two conclusions that differ from the unweighted sample: the curb weight of the vehicle falls short of statistical significance by traditional criteria, and, more to the point, a significant relationship between NCAP risk scores and death or hospitalization appears. Table VI.3 legend: Injrisk = driver injury risk; Age = age of driver; Gender = gender of driver; Curbwgt = vehicle’s curb weight; Dvtotal = total change in velocity (mph); the analysis represents 8,401 restrained drivers. Table VI.4 legend: Injrisk = driver injury risk; Age = age of driver; Gender = gender of driver; Curbwgt = vehicle’s curb weight; Othvehwgt = weight of other vehicle; Dvtotal = total change in velocity (mph); collisions with vehicles weighing less than 10,000 pounds; the analysis represents 23,514 restrained drivers. Some degree of doubt must be associated with these findings because of the nature of the sample on which they are based. NASS uses a highly complex stratified sampling design to achieve national representativeness for its relatively small sample of observations. The NASS database we used contained 21,377 observations, which, when properly weighted, represent more than 9 million accidents. Unfortunately, we found only 366 instances of NCAP-tested cars that met our criteria of properly restrained drivers, and only about one-third of these could be used because of missing values on one or more variables. This drastic reduction in sample size, when combined with the highly uneven distribution of missing values across sampling strata, makes the sampling weight associated with any observation of doubtful validity. To overcome the statistical limitations of our NASS database, we turned to the Fatal Accident Reporting System (FARS).
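The logistic models described above can be sketched with a minimal gradient-descent fit. The single predictor and the data below are invented for illustration; the actual models included NCAP injury risk, driver age and gender, curb weight, other-vehicle weight, and delta v.

```python
import math

def fit_logistic(xs, ys, lr=0.5, iters=2000):
    """Fit P(outcome) = 1 / (1 + exp(-(b0 + b1*x))) by gradient descent."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(iters):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += p - y
            g1 += (p - y) * x
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Invented data: delta v in tens of mph; 1 = driver hospitalized or killed.
delta_v = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
outcome = [0, 0, 0, 1, 0, 1, 1, 1]
b0, b1 = fit_logistic(delta_v, outcome)
# b1 comes out positive: greater crash severity predicts a worse outcome.
```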
By using FARS, we sought to increase substantially the number of usable cases in the analysis, because FARS contains information on all accidents in a given year that involve at least one fatality (about 45,000 cases per year), while NASS contains only a sample of all accidents (about 3,000 cases per year). In addition, we reasoned that while FARS lacks the information on crash severity provided by NASS’ estimate of the total change in velocity, the severity of its crashes is relatively homogeneous because the database is restricted to fatal—presumably severe—crashes. For this analysis, only passenger cars from the 1982 to 1991 model years were included. In addition to the actual test vehicles, our analysis included vehicles that had no substantial structural changes in model years following the tested model year. That is, if a 1984 model-year vehicle was tested in NCAP, no structural changes were made to the 1985 version, and the 1985 version was not retested, the 1985 model year was assigned the same combined injury risk score as the 1984 vehicle. We then matched the vehicles to the FARS and the R.L. Polk Vehicle Registration System (Polk) databases based on the make (that is, the manufacturer), model, model year, and body type of the vehicle. The FARS database is a compilation of all automobile accidents in the United States in any given calendar year in which at least one fatality occurred. The Polk system is a database that contains information on the types, numbers, and weights of vehicles registered in a given calendar year. Data for both systems are for the calendar years 1987 through 1991. As with the analysis of NASS data, we restricted this analysis to one- and two-car frontal collisions in which the driver of the NCAP-tested vehicle was restrained by either a seat belt or an air bag. Having matched the NCAP vehicles to the FARS and Polk systems, we then calculated the fatality rates for the vehicles.
This was done simply by dividing the number of fatalities by the number of registered vehicles. The fatality rates in our analysis are expressed in terms of fatalities per 100,000 registered vehicles. We then correlated the driver combined injury risk scores and fatality rates associated with vehicle models in a number of ways. First, we calculated a simple correlation using just information on those elements. Next, we regressed fatality rates on additional characteristics associated with vehicles using a Poisson model, which allows one to compare rates of individual cases, especially when the sample size is moderately large and the probability of an event occurring is either very low or very high. This type of analysis fit our needs in that a large number of cases (884) were included in our analyses while the fatality rates of the vehicles included were low (overall, there were 1,036 deaths for approximately 19 million registered vehicles). The variables added to the model held information on the model year and body style of the vehicles in the dataset. We controlled for the model year as a proxy for certain driver, vehicle, and roadway characteristics that could not be included in the model. We controlled for the body style for two reasons: (1) as a surrogate for the relationships found between specific body styles and certain driver characteristics and (2) as a rough surrogate for the weight of the vehicle. As a final analysis, we divided the NCAP injury risk distribution into quintiles and compared the fatality rates of the different groups. Each quintile represented one-fifth of the passenger cars tested in NCAP from 1982 to 1991. We found that a first-order correlation between NCAP injury risk and fatality rates exists (p = .007). When information on the body style and model year of the vehicle was included in the analysis, the strength of the relationship increased (p = .001). 
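The rate arithmetic above, together with one common form of z-statistic for comparing two quintiles’ rates, can be sketched as follows. The z formula is an assumed Poisson approximation; the report states only the 1.96 criterion.

```python
import math

def fatality_rate(deaths, registered):
    # Fatalities per 100,000 registered vehicles.
    return deaths / registered * 100_000

def rate_z(d1, n1, d2, n2):
    # Assumed z-statistic for the difference of two Poisson rates,
    # with deaths d and registrations n for each group.
    r1, r2 = d1 / n1, d2 / n2
    se = math.sqrt(d1 / n1 ** 2 + d2 / n2 ** 2)
    return (r1 - r2) / se

fatality_rate(1036, 19_000_000)  # about 5.45 per 100,000, from the totals in the text
```

A z-score of 1.96 or more between two quintiles’ rates would mark the difference as significant at p = .05, the criterion used in table VI.6.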
However, the relationship appears to be the result of the high fatality rates associated with the poorest performers in NCAP. Indeed, vehicle models within the highest quintile of injury risk (those in the highest 20 percent of the distribution) had significantly higher fatality rates than all other quintile categories. Further, we found that the worst performers on the NCAP test had injury risk probabilities approximately eight times higher than the best-scoring cars, while their fatality rates were almost 28 percent higher. The remaining four quintile groups, on the other hand, were not significantly different from one another. (See tables VI.5 and VI.6.) Thus, it seems that the relationship between driver fatality rates and predicted injury risk stems from the significantly higher fatality rates associated with vehicles that have very high NCAP injury risk probabilities. Table VI.5 legend: the table reports mean combined injury risk (NCAP) and fatality rate (FARS/Polk) by quintile. The fatality rate is calculated by dividing the number of fatalities in a given risk quintile by the total number of registered vehicles in that quintile and is expressed as fatalities per 100,000 registered vehicles; the mean combined injury risk is the average NCAP injury risk for the quintile. Body styles include coupes, sedans, two- and four-door hatchbacks, and station wagons. Table VI.6 legend: mean fatality rates are as reported in table VI.5; a z-score of 1.96 or greater shows a significant difference between the means at a probability level of at least p = .05. The following are GAO’s comments on the letter from the Department of Transportation dated July 13, 1994. 1. We agree with NHTSA that the terms “reliability” and “validity,” as used in the report, refer to their statistical meanings: test repeatability and predictive validity, respectively.
We share the concern that common usage of the terms “could have damaging effects” on NCAP’s credibility and mislead a casual user to conclude that the tests have not had positive effects on the crashworthiness of the U.S. passenger car fleet. For this reason, we have modified the subtitle of the report. NHTSA is correct in asserting that we use the term reliability in its traditional meaning of reproducibility. We disagree with NHTSA’s implication that this meaning is not relevant to “the relative safety performance of vehicles.” Indeed, as our report indicates, the band of uncertainty that surrounds crash test scores (as it does any test results) can affect the relative ranking of vehicles. 2. We recognize that model lines would have NCAP result variations unique to themselves, and the report clearly states this caveat (see p. 38, footnote 5). After completing the Citation experiment, NHTSA made changes in its test procedures to improve their reliability. Unfortunately, no equivalent test of how effective these changes were in reducing the variability between test scores was subsequently performed. After NHTSA had provided its official comments on our draft report, the agency also supplied crash test results from automobile manufacturers for model-year 1991 through 1994 vehicles. These data were provided to NHTSA in preparation for tests conducted under NCAP and were results of tests that essentially duplicated the NCAP testing procedure. The agency had previously declined to provide this information because it considered the data proprietary. We analyzed these data and have included our findings in the body of the report (see pp. 39-41). Though this information cannot define the boundaries of NCAP reliability, the difference between manufacturer and NCAP results reinforces our conclusion that the reliability of NCAP results is limited. 3.
We do not disagree with NHTSA that rigorous protocols for crash testing are followed and that NHTSA verifies the results of the crash test with high-speed film. However, this process merely verifies that the accelerometers placed in the test dummy accurately recorded the data from the specific trial. It does not address the issue of reliability: in classic statistical theory, test data are reliable if consistent results are obtained through repeated trials of an experiment using specific procedures. In the case of NCAP crash testing, a specified procedure exists, but the model of the test vehicle changes. Unless multiple trials of the same model line are conducted, we cannot determine the reliability of test results. We are sensitive to the costs the agency could incur in addressing our recommendation that it update and publish, in clear language, its knowledge of the reliability of NCAP results. For this reason, we suggest that the agency explore alternative means for accomplishing this goal, in particular by making use of the knowledge base developed by manufacturers. Regardless of the method the agency uses to address the recommendation, its purpose will not be, as NHTSA suggests, to “enhance the scientific reliability of its data” or “narrow its standard deviation,” but to assure American consumers that they are provided with accurate information about the relative crashworthiness of vehicles. 4. We agree that NHTSA has found a statistically significant difference between the fatality risk of belted drivers involved in two-car frontal collisions in cars with “good” NCAP performance and those that were “poor” performers. Although our analysis of the fatality rates used a different methodology, we found similar results. The analyses we performed shared a common weakness with NHTSA’s; namely, they were both limited to a relatively small proportion of real-world crashes.
NHTSA’s estimate of reduced fatality risk for better scoring NCAP cars is derived from analyses using only two-car crashes in which both drivers were belted and at least one occupant was killed. These conditions limited its analyses to between 81 and 170 crashes. (NHTSA’s database was drawn from the 1979 through 1991 FARS years, which represent between 40,000 and 50,000 highway fatalities annually.) Our analysis was also limited to NCAP cars involved in fatal accidents with restrained drivers, although we also included single-vehicle crashes and FARS data from 1987 through 1991. These limitations reduced our sample to 884 cars. Both we and NHTSA agree that a statistically significant correlation between NCAP scores and real-world crashes can be found but, to use NHTSA’s words, the correlation is “far from perfect.” Our analyses suggested that this correlation derived from the fatality rates of the worst scoring cars, and not from crashworthiness differences among relatively good NCAP performers. 5. We generally agree with NHTSA’s comment that the improvement in NCAP scores over time has contributed to an improvement in highway safety. However, many other influences unrelated to crash testing, such as safety belt usage laws and tougher drunk driving laws, have also contributed to this trend. 6. Our report does not address the purported ease of interpretation associated with the new star rating system. However, we did incorporate NHTSA’s new reporting system into our analysis of the new data provided us by NHTSA. Our findings provide detail to support the conclusion of our draft report: that a reporting system can be no more reliable than the scoring system on which it is based. We disagree that the new star rating system “eliminates . . . implied precision” of HIC, chest, and femur scores.
It is true that some cars with nonsignificant differences in scores would end up in the same category under the new system and thus correctly be presented to the public as roughly equal in crashworthiness. However, it is also true that other cars with nonsignificant score differences could be placed in different categories, a scoring artifact that incorrectly implies substantial differences in the relative levels of crash protection provided by the vehicles. For example, while a vehicle with a chest g of 40 and a HIC of 550 will receive 5 stars, one with a chest g of 40 and a HIC of 555 will receive 4 stars. We do not believe that this difference in HIC scores implies an actual crashworthiness difference. 7. NHTSA appears to suggest that it has already complied with this recommendation through the adoption of its new reporting system. We disagree. The new system, while seemingly clear, does not communicate to the public the band of uncertainty associated with star ratings. Our analyses of the manufacturers’ NCAP test results suggest that this band is sizable and illustrate its potential effects. However, additional information needs to be collected and analyzed before the precision of crash test results can be adequately defined. 8. The recommendation is to explore alternative methods for determining the crashworthiness of vehicles. We cited computer simulations as one promising avenue to explore. We recognize the limitations of the current capabilities of computer simulations (the high cost of supercomputers, the complexity of programming, and so on), and we agree with NHTSA that this technology could not replace actual crash tests in the near future. However, NHTSA’s comment suggests that it has examined the potential benefits of this rapidly emerging technology and has dismissed them. We believe that it should continue to monitor and periodically reassess them.
It appears to us that as the technology develops and becomes less costly, the potential benefits of a system extending to a much larger set of crashes than NCAP now considers may at some point outweigh its costs. Other possible approaches include ones that NHTSA is already considering, such as extending the range of tests to include both side-impact crashes and frontal-offset crashes. While such tests would expand the applicability of NCAP tests to a larger portion of real-world events, they would also substantially increase the costs of the program. This consideration reinforces our belief that the costs and benefits of alternative approaches such as computer modeling need to be revisited regularly over the next decade. 9. We are aware of the limitations of the NASS database, and we agree with NHTSA that the number of cases that mimic the NCAP configuration is small. Indeed, of the more than 14,000 cases we applied to our model in the 4 years of NASS data analyzed for this project, only 46 closely resembled the NCAP configuration. We disagree that the methodology was inappropriate. The models used in the analyses were designed not only to simulate the NCAP conditions but also to discover the sensitivity of NCAP for predicting other frontal collisions, and thereby maximize the number of frontal crash configurations for which the test was applicable and meaningful. It seems reasonable to expect that, given enough cases, NCAP should predict real-world traffic injuries and fatalities in collisions that essentially duplicate the test conditions; however, given the small number of actual events that apply to this configuration, the meaning of any unweighted statistically significant relationship is questionable. 10. There is no contradiction between these statements. NCAP’s relative rankings of different models may be inaccurate in some cases.
Nevertheless, it is unlikely that the parallel improvement in NCAP scores and highway safety statistics over the past 15 years is totally coincidental. 11. We agree with NHTSA’s comment about manufacturer participation in NCAP, and the language has been changed. With respect to the quasi-regulatory nature of NCAP, manufacturers repeatedly stated that they must design automobiles to meet this test as if it were the standard. In oral commentary on our draft report, one NHTSA official pointed out that almost all passenger cars meet compliance standards in the NCAP test, and that “in effect, it’s a de facto standard.” 12. The 220-point reduction in the mean HIC falls short of statistical significance (p = .171). This, we believe, is a function of the low number of cases and high variations in the early years of compliance testing. Although the decline is not statistically significant, we would reiterate that, on average, vehicles tested for compliance with FMVSS 208 tend to have HIC and chest acceleration scores that are far below the maximum allowable levels. 13. Although “it has not been proven,” it seems reasonable to assume that a 1979 vehicle that was involved in an accident in 1991 no longer had the same level of structural integrity as it did when it was new, owing to the rusting of the frame, for example, or the weakening of welds. It also seems reasonable to assume that a 1979 model-year vehicle would not perform at its full original safety potential in a collision that occurred in 1991. 14. As our report states, and NHTSA cites, no statistical analysis can, by itself, establish cause-and-effect linkage, and we do not demand this result of our analyses. 15. We accepted NHTSA’s suggestion and used SUDAAN to perform additional analyses of the NASS data. The results were inconclusive. In one case (two-vehicle collisions), we found a statistically significant relationship between NCAP scores and serious injury. (See table VI.4.) 
However, this result could easily be spurious, since the application of NASS sampling weights (which vary substantially and can be quite large) to the small subset of cases that both fit our criteria and have no missing data can greatly distort the analysis. If, as NHTSA suggests, a subset of NASS data is “insufficient to conduct any type of statistical analyses,” applying sampling weights to a nonrandom selection of variously weighted cases is potentially misleading. 16. We are aware that in some analyses, NHTSA has used the combination of subject vehicle weight and its ratio to the other vehicle weight as predictors of injury instead of simply using the weight of the two vehicles. We did not feel it necessary to reanalyze the data using weight and weight ratio since, as NHTSA has pointed out, they “are mathematically equivalent to the information provided by the two individual vehicle weights.” 17. We used the traditional adjustment, the ratio of fatalities to the number of registered vehicles, to correct for the variations in exposure to accident involvement among the NCAP-tested vehicles. We agree with NHTSA that other factors, such as driver age and driving history, are also important predictors of accident involvement and are not captured by this adjustment. Our goal here, however, was to answer the simple question: Are proportionately more drivers killed in poor scoring NCAP cars than in better scoring cars? Our answer is “yes.” 18. Based on NHTSA’s comment, we converted HIC and chest g scores to the combined injury probability, which forms the basis for NHTSA’s new rating system, and used it as a variable in the analyses conducted and presented in this report. 19. The section is no longer in the report.
General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the National Highway Traffic Safety Administration's (NHTSA) crash test programs, focusing on whether they provide valid and reliable indicators of occupant safety in real-world crashes. GAO found that: (1) the probability of sustaining a serious injury has declined since the inception of NHTSA test programs; (2) cars marketed in the United States have become more crashworthy; (3) the consistency of New Car Assessment Program (NCAP) test results is questionable, and its unreliable data may lead to misinformed purchasing decisions; (4) NCAP's ability to predict a vehicle occupant's protection in real-world crashes is limited, since NCAP results can only be applied to frontal collisions; and (5) there is a statistically significant relationship between fatality rates and NCAP-predicted injuries; however, these high fatality rates are associated with the poorest NCAP performers.
About 14 percent of the near elderly are uninsured—a rate comparable to that of 45- to 54-year-olds and lower than that among the entire nonelderly population. Differences in labor force attachment, health status, and family income, however, distinguish the near elderly from younger Americans and foreshadow some of the difficulties this age cohort could have in accessing health insurance other than that offered by an employer. The near elderly are a group in transition from the active workforce to retirement. Almost three-quarters of those between the ages of 55 and 61 were employed in 1996, and about half worked full time. In contrast, however, less than one-half of those between the ages of 62 and 64 were employed at all during 1996, with only about one-quarter working full time. Concurrent with leaving the workforce, both the health and income of this group are beginning to decline (see app. I). Compared with individuals between the ages of 45 and 54, the near elderly are more likely to experience health conditions such as diabetes, hypertension, and heart disease. In addition, the near elderly are the most frequent users of many health care services. Their hospital discharge rates and days of hospital care were 51 percent and 66 percent higher, respectively, than those of 45- to 54-year-olds. Furthermore, their expenditures on health care services are estimated to be about 45 percent higher than those of the younger group, while their median family income is about 25 percent less. Those without employer-based benefits often obtain coverage through the individual market and, for the disabled, through Medicare. It is not surprising that the near elderly are among the most likely age groups to have insurance and the least likely to be uninsured. Because aging is associated with greater use of health care services, the importance attached to having health insurance should increase with age. In fact, the extent to which the near elderly purchase individual insurance suggests that this is the case.
Whether the near elderly obtained their health insurance through the individual market or through public sources was related to their employment, health, and income status. For example, a relatively high percentage of the near elderly with individual insurance reported that they worked (67 percent) and had excellent or good health (85 percent). In contrast, those with public sources of coverage were more likely to report that they were unemployed (87 percent) or in poor health (69 percent). And compared with those who purchased individual insurance, twice as many with public coverage had incomes under $20,000. The relationship between insurance status and income is not entirely predictable, however, since about 20 percent of the uninsured near elderly had family incomes of $50,000 or more, while almost one-third of those with individual insurance earned less than $20,000. Despite their limited resources, about the same share of the near elderly with low incomes purchased individual insurance as did those with higher incomes. Given the cost of comprehensive coverage in the individual market, those with lower incomes may be purchasing less expensive, limited-benefit products. At the same time, however, income may not be the only resource available to individuals. Compared with the near-elderly population as a whole, a higher percentage of both the uninsured and those with public coverage had low incomes, were minorities, were not working, or were in poor health. Again, however, there were important differences, as the uninsured were more likely to work, be married, have better health, and have higher incomes than those with public insurance. While an estimated 60 to 70 percent of large employers offered retiree health coverage during the 1980s, fewer than 40 percent do so today, and that number is continuing to decline despite the recent period of strong economic growth. Surveys from two benefit consulting firms show that the number of employers offering coverage to early retirees dropped by 8 to 9 percentage points between 1991 and 1997 (see fig. 2).
Concurrently, employment has shifted away from firms more likely to offer coverage, that is, from manufacturing to service industries. The decision by some large employers not to offer retiree health benefits will primarily affect future retirees. In fact, one survey sponsored by the Department of Labor suggests that very few of those who were retired in 1994—only about 2 percent—had lost coverage as a result of an employer's subsequent decision to terminate retiree coverage. Retirees also devoted a sizable share of their income in 1994 to health care—an amount that includes not only insurance premiums or employer-required cost sharing but also out-of-pocket expenses for copayments, deductibles, and services not covered by health insurance. (App. II compares the affordability of employer-based early retiree health insurance with that purchased in the individual market.) At the same time employers have increased retiree cost sharing, they have also tightened the eligibility requirements for participation in postemployment health benefits. Most firms now have a minimum service and age requirement, and some tie their own contribution to these minimums. For example, one employer we interviewed required retirees to have 35 years of service to qualify for the maximum employer contribution of 75 percent. In contrast, retirees with 19 years of service are eligible for only a 30-percent employer contribution. Furthermore, if workers change jobs frequently, especially as they become older, they may not qualify for retiree health benefits in the future. According to surveys sponsored by the Labor Department in 1988 and 1994, higher costs for individuals could result in fewer participating in employer-based retiree health plans when such coverage is available. Between 1988 and 1994, the proportion of workers who continued coverage into retirement declined by 8 percentage points. Among those already retired, the proportion covered also declined, falling 10 percentage points over the same 6-year period.
Of the approximately 5.3 million retirees who discontinued employer-based benefits in 1994, an estimated 27 percent cited the expense as a factor—up by over one-fifth from the earlier survey. For some retirees, coverage with lower cost sharing through a working or retired spouse may have influenced their decision to decline health benefits from a former employer. Under COBRA, individuals who lose employer-based coverage are eligible to elect continuation coverage if their former employer had 20 or more workers and offered health insurance. Because the employer is not required to pay any portion of the premium, COBRA may be an expensive alternative for the near elderly—especially since the loss in employer-based coverage is probably accompanied by a decrease in earnings. In 1997, the annual per-employee cost of health insurance for employer-based coverage was about $3,800. However, there is significant variation in premiums as a result of differences in firm size, benefit structure, locale, demographics, or aggressiveness in negotiating rates. For early retirees in one company, annual premiums in 1996 for family coverage ranged, depending on the plan, from about $5,600 to almost $8,000. Since this firm paid the total cost of practically all of the health plans it offered to current workers, the COBRA cost would have come as a rude awakening to retirees. The limited information available on eligibility for and use of COBRA by Americans in general and the near elderly in particular leaves many important questions unanswered. On the one hand, the data suggest that relatively few near elderly use COBRA; on the other hand, compared with younger age groups, 55- to 64-year-olds are more likely to elect continuation coverage. One database suggests that, on average, 61- to 64-year-olds keep continuation coverage for only a year.
Because electing COBRA makes sense for near elderly who lack an alternate source of coverage and can afford the premium, employers are concerned about the impact of such elections on their overall health insurance costs. Employers contend that COBRA's voluntary nature and the high costs that result from the lack of an employer subsidy or contribution could result in the enrollment of only those individuals who expect their health care costs to exceed the premium. The costs of near-elderly COBRA enrollees in excess of the premium would, in turn, push up the employer's overall health care expenditures. However, there is no systematically collected evidence on the extent to which such elections affect employer costs. The election of COBRA coverage by some near elderly as well as younger individuals may simply reflect an aversion to living without health insurance. On the other hand, since COBRA election is associated with job turnover, the demographics of a firm or industry will also affect an employer's insurance costs. For example, a firm with an older workforce that does not offer retiree health benefits may indeed experience higher insurance costs as a result of COBRA elections. In the majority of states, some individuals aged 55 to 64 may be denied coverage in the individual insurance market, may have certain conditions or body parts excluded from coverage, or may pay premiums that are significantly higher than the standard rate. Unlike employer-sponsored coverage, in which risk is spread over the entire group, premiums in the individual markets of many states reflect each enrollee's demographic characteristics and health status. For example, on the basis of experience, carriers anticipate that the likelihood of requiring medical care increases with age. Thus, a 60-year-old in the individual market of most states pays more than a 30-year-old for the same coverage.
Likewise, a carrier may also adjust premiums on the basis of its assessment of the applicant's health status. This latter process is called medical underwriting. Since health tends to decline with age, some near elderly may face serious obstacles in their efforts to obtain needed coverage through the individual market. On the basis of the underwriting results, a carrier may deny coverage to an applicant determined to be in poorer health. Individuals with serious health conditions such as heart disease and diabetes are frequently denied coverage, as are those with such non-life-threatening conditions as chronic back pain and migraine headaches. The most recent denial rates for carriers with whom we spoke in February 1998 ranged from zero in states where guaranteed issue is required to about 23 percent, with carriers typically denying coverage to about 15 percent of all applicants. Carriers may also offer coverage that excludes a certain condition or part of the body. A person with asthma or glaucoma, for example, may have all costs associated with treatment of those conditions excluded from coverage. Annual premiums for commonly purchased products available in the individual markets of Colorado and Vermont are at least 10 percent and 8.4 percent, respectively, of the 1996 median family income of married near-elderly couples. In contrast, the average retiree contribution for employer-subsidized family coverage is about one-half of these percentages. While at least 27 states have high-risk insurance pools that act as a safety net to help ensure that individuals with health problems can obtain coverage, the cost is generally 125 to 200 percent of the average or standard rate charged to healthy individuals in the individual market for a comparable plan. Individuals who have been rejected for coverage by at least one carrier generally qualify for their state's high-risk pool. However, participation in some state pools is limited by enrollment caps.
In addition to state initiatives, federal standards established by HIPAA guarantee some people leaving group coverage access to the individual market—a guarantee referred to as group-to-individual portability. Each state establishes a mechanism so that these “HIPAA eligibles” have access to coverage regardless of their health status, and insurance carriers may not impose coverage exclusions. To be eligible for a portability product, however, an individual must have had at least 18 months of coverage under a group plan without a break of more than 63 days, and must have exhausted any COBRA or other conversion coverage available. One survey estimates that 61- to 64-year-olds typically remain enrolled in COBRA for only 12 months—6 to 24 months short of exhausting COBRA coverage. Since HIPAA changes the incentives for electing and exhausting COBRA coverage, past evidence may not be a guide to future use. However, depending on their state's mechanism, the premiums faced by unhealthy individuals who are eligible for a HIPAA product, like those faced by unhealthy individuals who have always relied on the individual market for coverage, may be very expensive. Pressure on Medicare's finances will intensify with the retirement of the baby-boom generation. Experts are divided about the impact on employer-based coverage of actions that increase costs for the private sector, such as increasing the eligibility age for Medicare. In responding to Medicare's financial crisis, policymakers need to be aware of the potential for the unintended consequences of their actions. In addition to events that could affect the erosion in employer-based retiree coverage, use of the HIPAA guaranteed-access provision by eligible individuals may improve entry into the individual market for those with preexisting health conditions who lack an alternative way to obtain a comprehensive benefits package. Depending on the manner in which each state has chosen to implement HIPAA, however, cost may remain an impediment to such entry.
Since group-to-individual portability is only available to qualified individuals who exhaust available COBRA or other conversion coverage, HIPAA may lead to an increased use of employer-based continuation coverage. Moreover, additional state reforms of the individual market may improve access and affordability for those who have never had group coverage or who fail to qualify for portability under HIPAA rules. Mr. Chairman, this concludes my statement. I will be happy to answer your questions.

[Appendix I table: utilization rates per 1,000 people per year and average length of stay in days.]

Using data from the March 1997 CPS and 1995 and 1996 information on insurance premiums, we estimated the percentage of median income that a 55- to 64-year-old would have to commit to health insurance under a number of possible scenarios, including purchasing coverage through the individual market in a community-rated state (Vermont) as well as one that had no restrictions on the premiums that could be charged (Colorado), using 1996 rates for a commonly purchased health insurance product; and cost sharing under employer-based coverage using 1995 Peat Marwick estimates of the lowest, highest, and average retiree contribution. While no official affordability standard exists, research suggests that older Americans commit a much higher percentage of their income to health insurance than do younger age groups. Congressional Budget Office calculations based on data from the Bureau of Labor Statistics' Consumer Expenditure Survey indicate that between 1984 and 1994, spending by elderly Americans aged 65 and older on health care ranged from 10.2 percent to 12.9 percent of household income. In 1994, elderly Americans spent 11.2 percent of household income, about three times as much as younger age groups. These estimates include costs other than premiums or employer-imposed cost sharing—for example, copayments, deductibles, and expenditures for medical services not covered by insurance.
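The affordability comparisons in this appendix are simple ratios of annual cost to income. The following is a minimal sketch: the $50,000 median family income is an assumed round figure for illustration, while the $2,340 employer-based contribution and $5,000 Colorado couple premium are amounts that appear in the table II.1 discussion.

```python
# Annual health insurance cost as a percentage of median family income,
# mirroring the percentage-of-income comparisons in appendix II.
# The $50,000 income figure is an assumed round number for illustration.

median_family_income = 50_000

annual_costs = {
    "employer-based retiree family coverage": 2_340,
    "Colorado individual-market couple coverage": 5_000,
}

for label, cost in annual_costs.items():
    share = cost / median_family_income
    print(f"{label}: ${cost:,} = {share:.1%} of income")
```

With these inputs, the employer-based contribution works out to about 4.7 percent of income and the individual-market premium to 10 percent, consistent with the orders of magnitude reported in the comparison.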
Table II.1 compares the cost of health insurance purchased in the individual market and employer-imposed cost sharing for early retirees with the median income for the near elderly in 1996. As demonstrated by table II.1, the near elderly's share of employer-subsidized coverage is generally lower than that for coverage purchased through the individual market. For example, on average, employer-based family coverage for retirees at $2,340 annually represents 4.7 percent of median family income. In contrast, costs in the individual market can be significantly higher—in part because they lack an employer subsidy. In Colorado, the annual premium for a commonly purchased individual insurance product in 1996 was about $2,500 for single coverage and $5,000 for a couple—representing about 12 percent and 10 percent, respectively, of median income for 55- to 64-year-olds. While less expensive than the Colorado example, premiums for health insurance through the individual market in Vermont—a community-rated state—would represent 9.9 percent of median income for single coverage and 8.4 percent of median income for a couple. For more than one-half of the near elderly, these individual market costs typically exceed average health care spending for Americans under age 65—in some cases significantly. In April 1998, the Center for Studying Health System Change reported that older adults who purchased individual coverage typically spent a considerably higher proportion of their income on premiums than other adult age groups—about 9 percent for the 60- to 64-year-old group.

[Table II.1 fragment: preferred provider organization plans with $250 and $500 deductibles; amounts of $214-$602 (low end and high end) and $160-$309 (rural/urban).]
Pursuant to a congressional request, GAO discussed access to health insurance by near-elderly Americans aged 55 to 64, focusing on the near elderly's: (1) health, employment, income, and health insurance status; (2) ability to obtain employer-based health insurance if they retire before they are eligible for Medicare; and (3) access to individually purchased coverage or employer-based continuation insurance and the associated costs. GAO noted that: (1) the overall insurance picture of the near elderly is no worse than that of other segments of the under-65 population and is better than that of some younger age groups; (2) the current insurance status of the near elderly is largely due to: (a) the fact that many current retirees still have access to employer-based health benefits; (b) the willingness of near-elderly Americans to devote a significant portion of their income to health insurance purchased through the individual market; and (c) the availability of public programs to disabled 55- to 64-year-olds; (3) the individual market and Medicare and Medicaid for the disabled often mitigate declining access to employer-based coverage for near-elderly Americans and may prevent a larger portion of this age group from becoming uninsured; (4) the steady decline in the proportion of large employers who offer health benefits to early retirees, however, clouds the outlook for future retirees; (5) in the absence of countervailing trends, it is less likely that future 55- to 64-year-olds will be offered health insurance as a retirement benefit, and those who are will bear an increased share of the cost; (6) access and affordability problems may prevent future early retirees who lose employer-based health benefits from obtaining comprehensive private insurance; (7) the two principal private insurance alternatives are continuation coverage under the Consolidated Omnibus Budget Reconciliation Act of 1985 (COBRA) and the individual market; (8) although 55- to 64-year-olds who 
become eligible for COBRA are more likely than younger age groups to enroll, the use of continuation coverage by early retirees is relatively low; (9) with respect to individual insurance, the cost may put it out of reach of some 55- to 64-year-olds; (10) some states have taken steps to make individual insurance products more accessible; (11) for eligible individuals leaving group coverage who exhaust any available COBRA or other conversion coverage, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) guarantees access to the individual market, regardless of health status and without coverage exclusions; (12) since the new federal protections under HIPAA hinge on exhausting COBRA, the incentives for enrolling and the length of time enrolled could change; and (13) the premiums faced by some individuals eligible for a HIPAA guaranteed-access product, however, may be substantially higher than the prices charged to those in the individual market who are healthy.
The purpose of the CMS PERM program is to produce a national-level improper payment error rate for Medicaid. CMS developed PERM in order to comply with the requirements of IPIA, which was amended by IPERA. PERM uses a 17-state, 3-year rotation for measuring Medicaid improper payments. Medicaid improper payments are estimated on a federal fiscal year basis through the PERM process. The estimate measures three component error rates: (1) fee-for-service (FFS), (2) managed care, and (3) eligibility. FFS is a traditional method of paying for medical services under which providers are paid for each service rendered. Each selected FFS claim is subjected to a data processing review. The majority of FFS claims also undergo a medical review. Managed care is a system where the state contracts with health plans to deliver health services through a specified network of doctors and hospitals. Managed care claims are subject only to a data processing review. Eligibility refers to meeting the state’s categorical and financial criteria for receipt of benefits under the Medicaid program. States perform their own eligibility reviews according to state and federal eligibility criteria. See appendix II for additional details on these three components. CMS uses its PERM Manual to provide detailed guidance for implementing CMS regulations on PERM. PERM regulations set forth the methodology for states to estimate Medicaid improper payments and outline the requirements for state CAPs. Figure 1 shows the PERM process for estimating and reducing Medicaid improper payments. Through its use of federal contractors, CMS measures the FFS and managed care components while states perform the eligibility component measurement. CMS contracts with two vendors—a statistical contractor and a review contractor—to conduct the FFS and managed care review components of PERM and calculate error rates. 
The statistical contractor is responsible for (1) collecting and sampling claims and payment data for review, including performing procedures to ensure that the universe is accurate and complete; (2) reviewing state eligibility sampling plans; and (3) calculating state and national error rates. The review contractor is responsible for conducting data processing and medical reviews after the statistical contractor selects the samples of claims. Beginning with the fiscal year 2011 measurement cycle, state-specific sample sizes are calculated based on the prior measurement cycle’s component-level error rates and precision. All payment error rate calculations for the Medicaid program (the FFS component, managed care component, eligibility component, and overall Medicaid error rate) are based on the ratio of estimated dollars of improper payments to the estimated dollars of total payments. The overall Medicaid error rate represents the combination of FFS, managed care, and eligibility error rates. Individual state error rate components and state overall Medicaid error rates are combined to calculate the national component error rates and national overall Medicaid error rate. PERM accounts for the overlap between claims and eligibility reviews by calculating a small correction factor to ensure that Medicaid eligibility errors do not get “double counted” if the sampled item was also tested in either the FFS or managed care components. National component error rates and the national overall Medicaid program error rate are weighted by state size in terms of outlays, so that a state with a $10 billion Medicaid program “counts” 10 times more toward the national rate than a state with a $1 billion Medicaid program. For fiscal year 2011 reporting—the reporting period covered by our audit—CMS reported an estimated national Medicaid improper payment error rate of 8.1 percent or $21.9 billion ($21,448 million in overpayments and $453 million in underpayments). 
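The outlay-weighted roll-up described above—estimated improper dollars divided by estimated total dollars, with each state weighted by the size of its Medicaid outlays—can be sketched as follows. The state names and dollar amounts are hypothetical illustrations, not CMS data, and the sketch omits the eligibility-overlap correction factor.

```python
# Sketch of an outlay-weighted national improper payment rate: dividing
# total estimated improper dollars by total estimated outlays is
# equivalent to weighting each state's rate by its share of outlays,
# so a $10 billion program counts 10 times more than a $1 billion one.
# All figures are hypothetical.

states = [
    # (name, Medicaid outlays in $billions, estimated improper payments in $billions)
    ("State A", 10.0, 0.90),
    ("State B", 1.0, 0.05),
    ("State C", 4.0, 0.20),
]

total_outlays = sum(outlays for _, outlays, _ in states)
total_improper = sum(improper for _, _, improper in states)
national_rate = total_improper / total_outlays

for name, outlays, improper in states:
    weight = outlays / total_outlays
    print(f"{name}: rate {improper / outlays:.1%}, weight {weight:.0%}")
print(f"National rate: {national_rate:.1%}")
```

Note that the weighted national rate here (about 7.7 percent) falls between the smaller states' 5.0 percent and State A's 9.0 percent, pulled toward the largest program.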
The weighted national component error rates are as follows: for Medicaid FFS, 2.7 percent; for Medicaid managed care, 0.3 percent; and for Medicaid eligibility, 6.1 percent. See appendix III for the state and national error rates for HHS's fiscal year 2011 reporting of Medicaid improper payments. See appendix IV for the national Medicaid outlays and the estimated improper payment error rate reported in HHS's AFRs for fiscal years 2007 to 2011. On February 4, 2009, the Children's Health Insurance Program Reauthorization Act of 2009 (CHIPRA) was enacted. As required under Section 601 of CHIPRA, HHS published a final rule on August 11, 2010, effective September 30, 2010, which requires that PERM eligibility reviews be consistent with the state's eligibility verification policy rather than with the single federal methodology used in the past. After publication of the final rule, states were allowed to review cases under the new methodology. Figure 2 shows the roll-up of the error rate reported for fiscal year 2011. IPIA, as amended, requires the heads of federal agencies to report on the actions the agency is taking to reduce improper payments, including a description of the causes of improper payments identified, actions planned or taken to correct those causes, and the planned or actual completion date of the actions taken to address those causes. This law also requires heads of federal agencies to report on a description of the steps the agency has taken to ensure that agency managers, programs, and, where appropriate, states and localities are held accountable through annual appraisal criteria for (1) meeting applicable improper payment reduction targets and (2) establishing and maintaining sufficient internal controls, including an appropriate control environment that effectively prevents improper payments from being made and promptly detects and recovers improper payments that are made.
According to OMB’s implementing guidance for IPERA, agencies should utilize the results of their statistical sampling measurements to identify the root causes of improper payments and implement corrective actions to prevent and reduce improper payments associated with these root causes. Agencies should continuously use their improper payment measurement results to identify new and innovative corrective actions to prevent and reduce improper payments. Agencies should also annually review their existing corrective actions to determine if any existing action can be intensified or expanded, resulting in a high-impact, high return on investment in terms of reduced or prevented improper payments. While CMS has responsibility for interpreting and implementing the federal Medicaid statute and ensuring that federal funds are appropriately spent—including estimating improper payments—the program is administered at the state level with significant state financing. Consequently, CMS relies primarily on states to develop and implement CAPs to address reported PERM errors. Following each measurement cycle, the states included in the measurement are required to complete and submit a CAP based on the errors found during the PERM process. In addition to guidance in the PERM Manual, CMS provides guidance to states on the CAP process upon releasing the PERM error rates and throughout CAP development. CMS’s PERM methodology for reporting a national Medicaid program improper payment estimate is statistically sound and meets OMB requirements. However, the process for accumulating the data used in deriving the reported national estimate does not consider the extent of any significant changes in state-level improper payment data that occurred after the initial year-end cutoff for state reporting. 
The impact of any such significant changes in states’ PERM reviews that were not concluded by the annual measurement cycle cutoff dates could significantly affect the calculation of the rolling 3-year average national Medicaid error rate reported each year. The design of CMS’s PERM methodology meets OMB requirements. CMS has documented the steps it took to design the sample and the steps taken to construct the sampling frame for the FFS, managed care, and eligibility review samples in its PERM Manual. The documentation also includes CMS’s process for ensuring that each sampling frame was accurate, timely, and complete. For error rate measurement for the FFS and managed care components, as outlined in the PERM Manual, CMS uses a stratified random sample selected quarterly within each state to provide cases for the data processing and medical review testing. For the eligibility component, as outlined in CMS’s PERM Manual, states use a simple random sample of eligible cases and negative cases, which are drawn each month during the measurement cycle. Absent an alternate methodology specifically approved by OMB, agencies must obtain a statistically valid estimate of the annual amount of improper payments in programs and activities for those programs that are identified as susceptible to significant improper payments. The estimates are to be based on the equivalent of a statistically random sample of sufficient size to yield an estimate with a 90 percent confidence interval of not more than plus or minus 2.5 percentage points around the estimate of the percentage of improper payments. 
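The OMB precision requirement can be illustrated with the standard sample-size formula for a proportion. This is a deliberate simplification: PERM's actual design is stratified, uses ratio estimation, and targets plus or minus 3 percentage points at 95 percent confidence within each state; the sketch below assumes a simple random sample and the national-level 90 percent, plus or minus 2.5 percentage point requirement.

```python
# Simplified sketch of the OMB precision arithmetic: the sample size n that
# makes z * sqrt(p * (1 - p) / n) no larger than the required half-width.
from math import ceil

def required_n(p_hat: float, z: float = 1.645, half_width: float = 0.025) -> int:
    """Sample size so a z-based interval around p_hat is within half_width."""
    return ceil((z / half_width) ** 2 * p_hat * (1 - p_hat))

# Using the 8.1 percent national rate reported for fiscal year 2011 as the
# anticipated error rate (an illustrative choice, not CMS's planning value):
n = required_n(0.081)
print(n)  # → 323
```

A lower anticipated error rate shrinks the required sample, which is consistent with the report's statement that state sample sizes are based on the prior cycle's error rates and precision.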
CMS reports national Medicaid error rates at this 90 percent confidence interval to be consistent with OMB’s requirements, but CMS’s procedures provide that the sample size for PERM is to conform to OMB optional guidance for estimating payment errors—specifically, the PERM Manual specifies a target precision of plus or minus 3 percentage points at a 95 percent level of confidence within each state. The PERM Manual provides for the sample size for each state to be based upon the previous payment error rate and the OMB optional standard for the precision and confidence level. To estimate the percentage of dollars paid in error, CMS’s PERM Manual provides for using a ratio estimation methodology to produce the PERM estimate. This means the PERM payment error rate is a ratio of the estimated total dollars paid in error divided by the estimated total payments. The choice of ratio estimation methodology under these circumstances is statistically appropriate. The PERM Manual describes the data collection methods for the medical reviews, data processing reviews, and eligibility determinations. The PERM Manual also describes the statistical ratio estimation methodology to be used to produce the estimated percentage of dollars paid in error. CMS’s PERM Manual also provides for the error rates and summary reports to be provided to each state participating in the measurement cycle. We found that CMS’s PERM Manual is consistent with OMB statistical guidance. Although the CMS PERM methodology is statistically sound, CMS did not have procedures for considering the impact of any revisions to state-level error rates in calculating the national error rate after the cutoff date for each of the 3 measurement years. Specifically, the individual state error rates used to calculate the national error rate are not updated to reflect activities occurring after the PERM cycle cutoff. 
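The ratio estimation methodology described above can be sketched from a handful of hypothetical sampled claims (stratum weights and the extrapolation to universe totals are omitted; the claim amounts below are invented for illustration):

```python
# Minimal sketch of a ratio estimate: estimated dollars paid in error divided
# by estimated total dollars paid, computed over the sampled claims.

sampled_claims = [
    # (amount_paid, amount_in_error) per sampled claim -- hypothetical values
    (1200.0, 0.0),
    (850.0, 850.0),    # full overpayment (e.g., ineligible beneficiary)
    (400.0, 60.0),     # partial pricing error
    (2300.0, 0.0),
]

error_dollars = sum(err for _, err in sampled_claims)
paid_dollars = sum(paid for paid, _ in sampled_claims)
error_rate = error_dollars / paid_dollars

print(f"{error_rate:.1%}")  # → 19.2%
```

Because the estimate is a ratio of two estimated totals, a large claim found in error moves the rate far more than a small one, which is one reason ratio estimation is statistically appropriate for payment data.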
Without a process to consider these more current data on states’ reported improper payment error rates, the reliability of CMS’s reported national estimate may be adversely affected. OMB has identified as a best practice that agencies should establish a policy for handling unscheduled corrections to data, such as including threshold criteria identifying conditions under which data will be corrected and redisseminated. According to the PERM Manual, a state may request a new error rate calculation from CMS after the cycle cutoff date for informational purposes and for determining sample sizes for the next cycle under certain circumstances. For example, states may request a recalculation when information supporting a claim as correctly paid was submitted to CMS after the cycle cutoff date—but CMS’s review contractor did not have time to complete the review—or when a mistake made by the PERM contractor was identified. This request must be made within 60 business days of the posting date of the state’s program error rate on the CMS review contractor’s website. In such instances, CMS will issue a revised rate to the state. However, each state’s official error rate—used in the calculation of the national Medicaid error rate—will not change as a result of this recalculation. According to CMS, official error rates will be calculated based on information received by the cycle cutoff date. While CMS aims for a cycle cutoff date of July 15—4 months prior to the reporting date—the CMS cycle manager may extend the cycle cutoff date depending on the progress of the PERM reviews. CMS officials acknowledged that historically CMS has had to postpone the cycle cutoff to allow the process to be as complete as possible while still permitting CMS to report an improper payment rate timely in HHS’s AFR. However, after the cutoff date, CMS’s PERM Manual does not allow for any revisions to be factored into a state’s official error rate. 
In reviewing the results of state PERM reviews, we identified some instances where CMS issued revised state Medicaid error rates. For example, CMS issued a revised rate to one state for its eligibility reviews for the fiscal year 2008 measurement cycle because in January 2010, 2 months after error rate reporting, CMS and the state discovered that the amount of dollars in error was reported incorrectly by the state. This revised overall state error rate estimate decreased from 20.8 percent to 7.8 percent. In another example for the same fiscal year 2008 measurement cycle, in December 2009, 1 month after error rate reporting, CMS issued a post-cutoff date revised rate to a state for its FFS reviews because CMS received additional documentation from providers after the cycle cutoff date for official error rate calculations. This revised overall state error rate estimate decreased from 6.4 percent to 5.9 percent. These revised percentages were not included in the official error rates used to calculate the national estimate of Medicaid improper payments. While these were both smaller states and the actual impact on the national error rate would be minimal, CMS’s PERM Manual does not provide for CMS to consider such revisions, and changes of this type could have affected the national error rate reported in the subsequent 2 years had they been significant and occurred in states with larger levels of outlays. Because the national error rate is based on 3 years of data and corrections to the 2 years of older data after the cutoff date are not officially recognized by CMS, the entire 3-year cycle could be affected. As a result, the reported estimate of Medicaid improper payments may be adversely affected if needed corrections are significant. 
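The carry-forward effect described above can be sketched with hypothetical cycle-level rates. The actual roll-up weights states and cycles by outlays; a simple mean of cycle rates is used here only to show how a post-cutoff correction to one cycle would propagate into the next 2 years of reporting.

```python
# Hedged illustration (hypothetical figures): the reported national rate
# draws on 3 measurement cycles, so a correction to one cycle's data would
# carry into each annual report that still includes that cycle.

official = {2008: 9.6, 2009: 8.7, 2010: 7.1}   # hypothetical cycle rates (%)

def three_year_rate(cycles: dict) -> float:
    """Simple mean across cycles; the real roll-up is outlay-weighted."""
    return sum(cycles.values()) / len(cycles)

before = three_year_rate(official)

# Suppose a post-cutoff revision (like the 20.8 to 7.8 percent state example)
# would lower the 2008 cycle's contribution by 0.4 percentage points:
revised = {**official, 2008: official[2008] - 0.4}
after = three_year_rate(revised)

print(round(before, 2), round(after, 2))  # → 8.47 8.33
```

Because the 2008 cycle remains in the rolling average for 2 more reporting years, an unrecognized correction of this size would understate or overstate the national rate in each of those reports as well.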
This potentially affects CMS’s ability to accurately report on the extent of improper payments, evaluate program performance, and utilize its own resources, as well as state resources, effectively to identify and reduce improper payments. CMS and state agencies developed CAPs that were generally responsive to identified payment errors. However, CMS’s PERM Manual does not provide for addressing all nonpayment errors either by identifying specific corrective actions or by analyzing these errors to determine whether actions, if cost effective, are needed. Also, CMS’s PERM Manual does not identify conditions under which corrective action for an error should not be undertaken because the cost of state corrective actions would outweigh the benefit. In addition, not all required elements of the CAPs are being completed by all states and CMS’s written guidance on these required elements is not clear or consistent. Further, CMS’s internal guidance on monitoring state CAPs is not sufficient to help ensure that states’ CAPs contain all of the required elements and that states prevent and reduce improper payments going forward. States are responsible for developing, executing, and evaluating CAPs to address specific errors identified during the PERM reviews, and CMS has reported on other initiatives to supplement state corrective actions and help reduce errors. We found that state CAPs were generally responsive to the types of payment errors identified in the PERM reviews. Through PERM, CMS identifies and classifies types of errors and shares this information with each state. States are then to analyze and determine the root causes for their specific improper payments. According to CMS, in addition to the PERM Manual, it provides guidance to state contacts on the CAP process upon providing the PERM error rates and throughout the CAP development. 
As reported by CMS, and shown in figure 3, overall, the majority of the errors reported in fiscal year 2011 (about 54 percent) for the Medicaid program—based on the fiscal years 2008 to 2010 measurement cycles—were a result of cases reviewed for eligibility, where recipients were either not eligible (25.3 percent) or where their eligibility status could not be determined (28.2 percent). The most common causes of cases in error for the FFS medical review were insufficient documentation (9.2 percent) or no documentation (4.3 percent). Our analysis of error types is shown in appendix VI. As shown in figure 3, almost 42 percent of reported PERM review errors resulted from documentation deficiencies, including either a lack of or insufficient documentation, or because a definitive review decision could not be made because of a lack of or insufficient documentation (undetermined). As these are common types of errors, CMS has reported on certain corrective actions that states have developed to address them. Specifically: No documentation and insufficient documentation. In about 14 percent of all PERM errors, reviewers identified errors because either the provider did not respond to the request for records within the required time frame (no documentation—4.3 percent) or there was not enough documentation to support the service (insufficient documentation—9.2 percent). According to CMS, because much of the error rate in the past was due to missing or insufficient documentation, the majority of states focused on provider education and communication methods to improve the providers’ responsiveness and timeliness. Undetermined. In about 28 percent of all PERM errors over the 3-year period, reviewers were unable to determine whether a beneficiary was eligible for Medicaid because the case record lacked or contained insufficient documentation. 
The PERM Manual outlines the due diligence a state must take before citing the case as “undetermined.” According to CMS, specific corrective action strategies implemented by the states to reduce these types of eligibility errors have included leveraging technology and available databases to obtain eligibility verification information without client contact; providing additional caseworker training, particularly in areas determined by the PERM review to be error prone; and providing additional eligibility policy resources through a consolidated manual and web-based training. In addition to the state-specific CAPs that are developed in response to the PERM findings, CMS has reported on other initiatives to lower error rates in HHS’s fiscal year 2011 AFR. For example, to help address the insufficient documentation errors found in medical reviews, CMS reported that it increased its efforts to reach out to providers and to obtain medical records to help resolve this problem. CMS also reported that it gives states more information on the potential impact of these documentation errors and more time for the states to work with providers to resolve them. Table 1 outlines CMS’s reported overall strategies to reduce improper payments and strategies targeted at specific PERM error types. Although all states developed CAPs that were generally responsive to the payment errors identified through PERM reviews, we were unable to assess the CAPs’ impact on the improper payment error rate because of limited comparative data between PERM measurement cycle years. State CAPs did not always address errors identified during PERM reviews that did not have a payment error amount associated with them. Specifically, we identified three types of these nonpayment errors through our analysis of the PERM process that are not consistently addressed in all state CAPs—negative case errors, deficiencies, and technical errors. 
A negative case error occurs when a state incorrectly denies an application or terminates eligibility. A deficiency is generally defined as an action or inaction on the part of the state or the provider that could have resulted in a dollar error but did not. A technical error is an error where the eligibility caseworker did not act in accordance with state or federal policy, but this did not result in an erroneous eligibility determination or result in a difference between the amount that was paid and the amount that should have been paid. CMS’s PERM Manual requires that states test negative cases as part of their eligibility reviews. However, it does not clearly require that states address negative case errors in their CAPs. While a payment error rate is not calculated because there are no payments associated with negative cases, a negative case error rate is calculated to estimate the percentage of the decisions in which eligibility was incorrectly denied or terminated. Our analysis showed that for fiscal year 2011 reporting, approximately 40 percent of the states where negative case errors were identified did not address those errors in their CAPs. According to CMS officials, these negative errors should be included in state CAPs. While deficiencies do not result in a dollar amount in error and therefore had no impact on the payment error rate for fiscal year 2011, they may represent issues that need to be addressed to prevent future payment errors. Although not considered payment errors, some deficiencies were noted during PERM data processing and medical reviews. Examples of deficiencies identified in FFS and managed care reviews include the following: A data processing deficiency in which a male was coded as a female in the system; because the service provided could have been appropriate for either sex, the error did not result in a dollar difference. 
A medical deficiency in which a provider billed for the wrong procedure code, but the correct procedure code would have paid the same rate per unit; the error therefore did not result in a dollar difference, although it could have under other circumstances. Our analysis showed that deficiencies identified in PERM reviews represented approximately 8 percent of the total FFS and managed care errors identified for the fiscal year 2011 reporting, and that approximately 67 percent of these deficiencies were not included or analyzed in state CAPs. In addition, only 10 of the 43 states with deficiencies addressed these deficiencies in their CAPs. While the PERM Manual does not clearly state that CAPs are to address deficiencies, CMS officials told us that states should address deficiencies in their CAPs. During eligibility reviews, states may identify technical errors. An example of a technical error is a failure to follow state administrative procedures that do not affect eligibility if acceptable documentation is otherwise obtained that supports beneficiary eligibility. According to the PERM Manual, states are not currently required to report these technical errors to CMS and may document technical errors as appropriate during the PERM reviews. Furthermore, the PERM Manual suggests but does not require that states include an analysis of technical errors and related corrective actions in their CAPs. Although these nonpayment errors did not result in improper payment amounts, they represent internal control deficiencies that could have prevented eligible beneficiaries from receiving Medicaid benefits or may result in improper payments in future years if not addressed. Not clearly requiring states to address nonpayment errors, or to document that sufficient analysis was performed to determine if corrective actions, if cost effective, are needed, may reduce the effectiveness of CAPs for addressing the underlying causes of improper payments. 
Further, this may inhibit ongoing efforts to prevent and reduce improper payments and to ensure that Medicaid is provided to all eligible beneficiaries. OMB’s implementing guidance for IPERA requires agencies to implement corrective actions to prevent and reduce improper payments. In addition, CMS’s PERM regulations and its PERM Manual require each state to complete and submit a CAP based on errors found during the PERM process. However, while specifically allowing states to exclude eligibility technical errors, the PERM Manual does not clearly identify whether the states should consider or include deficiencies or negative case errors in their CAPs. While the PERM Manual does not clearly state that CAPs are to address both deficiencies and negative case errors, CMS officials told us that states should address both of these in their CAPs. Although CMS’s PERM Manual requires each state to complete and submit a CAP based on the errors found during the PERM process, this guidance makes no exception for small errors—sometimes caused by rounding—which may result in states incurring costs to implement corrective actions that exceed the benefits of those actions. In its PERM Manual, CMS encourages states to use the most cost-effective corrective actions that can be implemented to best correct and address the root causes of the errors; however, it does not acknowledge that states can address errors by documenting situations where they determined that the costs of implementing the corrective action exceed the benefits. Officials at one state we visited told us that the cost of implementing a system to correct some of its errors that were less than a dollar would outweigh the benefits of this action. A PERM review in this state identified 11 pricing errors resulting from incorrect rounding that netted to $0.53. 
State officials informed us that they were aware of this rounding issue, as it had been identified in the previous PERM cycle and CMS also identified and reported this type of error for the fiscal year 2011 measurement cycle. According to this state, the original estimate for a system solution to correct these rounding errors was $575,000 to $1,150,000. State officials told us they did not believe that the cost to address this issue was justified as the return on investment for the system solution to correct the condition might never be realized. According to CMS, in e-mail communication with this state, it told state officials that if the state determines that the cost of implementing a corrective action outweighs the benefits then the final decision of implementing the corrective action is the state’s decision. The state continued to pursue corrective actions and was ultimately able to obtain a revised estimate of $115,000 for changes to the system, based on further detailed analysis of the necessary solution. The state now plans to redesign its system in order to avoid these types of PERM errors going forward. According to Standards for Internal Control in the Federal Government, management should design and implement internal controls—in this case, controls to prevent and reduce improper payments—based on the related costs and benefits. Further, PERM regulations require states to evaluate their corrective action plans by assessing, among other things, the efficiencies that they create. However, the lack of clear written guidance for states on how to address situations where the cost of corrective actions identified by states may outweigh the benefits because of the low dollar amounts associated with these types of errors may result in an unnecessary burden on state resources. 
Although we found that states have generally been engaged in the PERM CAP process and developed CAPs to address improper payment errors, not all required elements of the CAPs are being completed by all states. When developing CAPs, CMS’s PERM regulations require states to perform five key steps to reduce improper payment errors identified through the PERM reviews. For CAPs subsequent to the initial measurement year, CMS’s PERM regulations also require an update on the previous CAP. These requirements are summarized in figure 4. Not all required elements of the CAPs—such as the evaluation step or the update on the previous CAP—were consistently reported on by all states. For example, for fiscal year 2011 reporting, 8 of the 51 states did not submit the required evaluation element of the CAP. An additional 9 states submitted the evaluation element for some, but not all PERM components. Furthermore, for fiscal year 2011 reporting, only 24 of the 34 states required to submit an update of the previous CAP complied with this requirement. Another 5 states submitted updates for some, but not all, of the PERM components, and of the 29 states that submitted complete or partial updates of their previous CAPs, only 19 submitted them by the due date required by CMS. The other 10 were submitted after CMS followed up with the states. CMS officials acknowledged that some state CAPs are missing certain elements, and they are in the process of finalizing specific procedures to outline CMS’s role in reviewing state CAPs and following up with states to obtain any missing elements, as discussed later in this report. CMS’s PERM Manual, updated in September 2011, provides guidance for state CAP development, but it does not include specific instructions for completing the evaluation element or on how to report the update on the previous CAP. Furthermore, the CAP template included in the PERM Manual does not include these two required elements. 
However, on its PERM website, CMS has provided a separate example of a CAP for the states to utilize that includes examples of the evaluation element and a separate report for the update on the previous CAP. Inconsistencies between the PERM Manual—which includes a CAP template—and the example CAP on the PERM website may cause confusion regarding what states are to include in their CAPs. As of August 2012, CMS had updated its PERM Manual and the CAP template to include instructions and a template for reporting on the update of the previous CAP. However, the updated template still did not include the evaluation element, and the separate example of a CAP on the PERM website was not updated to be consistent with the updated PERM Manual guidance and template. Clear, consistent written guidance and instructions on all required elements for CAPs would assist the states in submitting complete CAPs, and increase the likelihood that CMS has the information necessary for analyzing the progress and effectiveness of state CAPs. The lack of clear, consistent guidance in the PERM Manual and the related template on the PERM website on how to develop key elements of the state CAP may have contributed to the missing elements we describe in this report. CMS lacked a formal policy describing its role in monitoring state CAPs to ensure that (1) the CAPs contained all of the required elements and completely addressed errors identified in the PERM reviews and (2) states were making progress on implementing corrective actions. In our high-risk series update, we reported that CMS needs to ensure that states develop appropriate corrective action processes to address vulnerabilities to improper Medicaid payments. Our analysis of state CAPs continues to identify issues regarding CMS’s coordination with states in developing and implementing their CAPs. 
Specifically, during our review and analysis of state CAPs for the fiscal years 2008 to 2010 PERM measurement cycles, we found that CMS had not conducted sufficient oversight to ensure that states submitted complete CAPs, took the five required steps in developing CAPs, and updated the status of previous CAPs. As discussed previously, not all required elements of the CAPs—such as the evaluation step or the update on the previous CAP—were being completed by all states. Officials in the seven states we visited noted that, once the CAPs were submitted, there was minimal monitoring of implementation by CMS. For example, officials in one state told us that CMS did not follow up with the state on the implementation of the corrective actions until the state submitted the CAP related to its next error rate measurement 3 years later. According to CMS officials, they do not track the progress of the states’ implementation of CAPs and are not required to do so. However, CMS officials told us that they review the implementation information that the states provide in their CAPs, specifically in the update of their previous CAPs, and hope to see a reduction in error rates as the CAPs are implemented. Additionally, based on our analysis of state CAPs for fiscal year 2011 reporting, we also noted that approximately 5 percent of all payment errors identified during the PERM reviews were not fully addressed by all states in their CAPs. Improved monitoring by CMS would help ensure that state CAPs contain all of the required elements and are addressing all types of errors identified through the PERM process, and that the actions identified are appropriate to reduce those types of errors going forward. The responsibility for oversight of the states’ development, implementation, and evaluation of their CAPs rests with the Division of Error Rate Measurement (DERM) within CMS’s Office of Financial Management. 
These efforts include coordinating the CAP process with the states and other agency offices. The Medicaid Integrity Group (MIG) within CMS’s Center for Program Integrity is responsible for reviewing the state CAPs, with assistance from the agency’s regional offices. According to CMS, MIG reviews the state CAPs to (1) ensure the plans address the errors identified during the PERM reviews, (2) provide feedback to the states for improvements, and (3) review the implementation status of the state’s previous CAP. Oversight through continuous monitoring helps ensure that actions are taken to effectively work toward reducing improper payments. According to OMB’s implementing guidance, agencies must ensure that their managers and accountable officers, programs and program officials, and, where applicable, state and local partners are held accountable for reducing improper payments. Therefore, although the states are responsible for developing, implementing, and monitoring their CAPs, CMS should be responsible for monitoring states’ compliance with CMS’s regulations related to the PERM process. We also found that the roles and responsibilities of DERM and MIG are not formally outlined in policies and procedures for the PERM review and corrective action process. CMS officials told us that they are in the process of developing protocols to address the CAP review process. Specifically, CMS officials told us that they have developed a draft policy describing each party’s role in the different stages of the PERM CAP process as well as a review guide to outline CMS’s procedures for coordinating reviews of state CAPs. CMS plans to review state CAPs submitted in February 2013 using this new collaborative process for the first time for the states that are part of the fiscal year 2011 measurement cycle and were reported on in HHS’s fiscal year 2012 AFR. 
According to CMS officials, they plan to review the CAPs to ensure that all of the attributes outlined in the PERM regulations are addressed and, as needed, notify the states of any missing elements. After reviewing the fiscal year 2011 cycle CAPs, CMS officials told us that they plan to further refine the standard operating procedures and CAP review guide before the documents are finalized. CMS’s draft policy and review guide were not finalized before the completion of our fieldwork, and we did not examine any interim drafts. Thus, we are unable to determine whether the planned revisions to existing procedures will fully address the deficiencies we identified concerning CMS’s monitoring of state CAPs. Monitoring is CMS’s opportunity to ensure that states are appropriately implementing the corrective actions that they have identified to help reduce improper payments. If states are not addressing all applicable issues or are not effectively implementing the actions outlined in their CAPs, future reductions in the Medicaid error rate may be limited. Additional monitoring by CMS would help hold the states accountable for developing, implementing, and evaluating corrective action strategies in support of CMS’s efforts to prevent and reduce Medicaid improper payments. The design of CMS’s PERM methodology is statistically sound. However, refining the required PERM process for estimating and reporting national Medicaid improper payments so that the impact of corrections to the data after the cutoff date is considered would help ensure that the reported estimates are reasonably accurate and complete. As CMS reports its estimated Medicaid improper payments based on a rolling 3-year estimate, adjustments made to any of these 3 years can affect yearly reporting and potentially affect the accuracy of the reported national estimate. 
Given the importance of providing HHS management, OMB, and the Congress with accurate information on the extent of improper payments in federal programs, it is imperative that CMS ensure that its reported estimates of Medicaid improper payments are reliable. Corrective actions are critical for preventing and reducing improper payments. While states have developed corrective action plans to address payment errors identified in PERM reviews, not all nonpayment errors were addressed in these plans, which could hinder the prevention of future improper payments. Also, while states are currently required to address all errors, clear written guidance that permits states to document why an action is not being implemented would help ensure the most efficient and effective use of state resources for errors that do not pose a risk of significantly affecting future improper payments. Further, ensuring that states have clear written guidance for developing corrective action plans is key to CMS’s ability to oversee states’ corrective action processes. Strengthening CMS’s required procedures for monitoring the state-level corrective actions is critical to help ensure that states make progress in preventing and reducing improper payments. In order to ensure the accuracy of reported improper payment estimates for the Medicaid program, we recommend that the Secretary of HHS direct the CMS Administrator to take the following action: Update PERM Medicaid improper payment reporting procedures to provide for considering any corrections to state-level improper payment error data subsequent to the cutoff date that would have a significant impact on any of the 3 years used to develop the rolling average for the reported national Medicaid improper payment estimate. 
To help ensure that corrective action strategies effectively address identified types of improper payments and reduce Medicaid improper payments in a cost-effective manner, we recommend that the Secretary of HHS direct the CMS Administrator to take the following three actions:

Revise the PERM Manual to provide that states (1) analyze all deficiencies, negative case errors, technical errors, and minimal dollar errors identified in PERM reviews to determine if any corrective actions, if cost effective, are needed to prevent such errors in the future and (2) document the results of their analysis.

Clarify guidance in the PERM Manual, and on the PERM website, on the required elements to be included in a CAP and the specific actions states are to take each measurement cycle to (1) effectively prepare and evaluate their current cycle's CAPs and (2) provide updates to their previous cycle's CAPs.

Finalize draft policies and procedures to clarify specific CMS officials' roles and responsibilities for monitoring states' corrective actions to ensure, at a minimum, that (1) the CAPs contain all of the required elements and completely address errors identified in the PERM reviews and (2) states are making progress on implementing corrective actions.

We provided a draft of this report to the Secretary of HHS for comment. In its written comments, reprinted in appendix VII, HHS concurred with the four recommendations in our report. HHS cited a number of actions already taken and other initiatives planned or under way related to our recommendations. For example, with respect to our three recommendations to help ensure that corrective action strategies effectively address identified types of improper payments and reduce Medicaid improper payments in a cost-effective manner, HHS cited CMS's plans to update its PERM Manual and other relevant documents consistent with our recommended actions to clarify and standardize guidance.
HHS also cited action under way to finalize policies and procedures related to monitoring states' corrective actions. HHS also concurred with our recommendation to update procedures for considering the impact of any corrections to state-level improper payment errors on reported national error rates. HHS stated that it will consider revising its procedures in this area. HHS also expressed concern that the draft suggests that past reported national Medicaid error rates were unreliable. We acknowledged in our draft report that the prior year post-cutoff date error rate revisions we reviewed were not sufficient to have had an impact on the national error rate for fiscal year 2011 reporting. Rather, our recommendation is focused on augmenting procedures to help ensure the reliability of future national error rate reporting. HHS also expressed concern about our suggestion that OMB's Standards and Guidelines for Statistical Surveys should be used to determine how to handle PERM-related data corrections. In our draft report, we characterized this as a best practice. HHS noted, and we agree, that OMB did not include guidance for handling unscheduled corrections to data in its implementing guidance for IPERA. However, taking action, as we recommended, to establish procedures to consider the extent to which any corrections to state-level improper payment data subsequent to the cutoff date would affect the reported national Medicaid improper payment estimate would best ensure the reliability of reported national error rates going forward. HHS also expressed concern about our suggestion that states may request a recalculation of the state-level error rate when records for a medical claim were received prior to the cycle cutoff date but CMS's review contractor did not have time to complete the review.
HHS cited that CMS’s review contractors will complete all reviews for claims where the documentation was received prior to the cycle cutoff date and that states may request a recalculation when information supporting a claim as correctly paid was submitted to CMS after the cycle cutoff date. We agreed with HHS’s point and modified the report accordingly. HHS also expressed concern about including the state error rates identified in appendix III of the draft. HHS commented that readers may use the rates to make state-to-state comparisons that are inappropriate because of variations in states' sizes and programs and in states’ implementation and administration of their programs. We acknowledged HHS’s concerns in our draft report by including language in appendix III to caution readers about using these state-level rates to make state-to-state comparisons. However, it is important to present these state-level error rates for transparency regarding the results of state PERM reviews. In addition, HHS provided technical comments that we incorporated as appropriate and discussed in our additional evaluation in appendix VII. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-2623 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix VIII. 
The objectives of this report were to determine the extent to which (1) the Centers for Medicare & Medicaid Services’ (CMS) methodology for estimating Medicaid improper payments follows Office of Management and Budget (OMB) guidance and produces reasonable national and state-level estimates and (2) corrective action plans (CAP) have been developed to reduce Medicaid payment error rates and whether these plans addressed the types of payment errors identified. To address these objectives, we reviewed the Improper Payments Information Act of 2002 (IPIA), the Improper Payments Elimination and Recovery Act of 2010 (IPERA), and related OMB guidance effective for fiscal year 2011. We also reviewed CMS regulations on Payment Error Rate Measurement (PERM) and CMS’s internal written guidance on PERM. In addition, we reviewed results from state PERM reviews for fiscal years 2006 through 2011, prior GAO and Department of Health and Human Services (HHS) Office of Inspector General reports, and internal control standards. Further, we reviewed improper payment information reported in the Improper Payments Section of HHS’s fiscal year 2011 agency financial report (AFR). We reviewed these documents to understand CMS’s efforts to address IPIA and IPERA requirements and to identify previously reported issues with CMS’s improper payment reporting. To further determine the extent to which CMS’s methodology for estimating Medicaid improper payments follows OMB guidance and produces reasonable national and state-level estimates, we compared the following components of CMS’s methodology for estimating the fiscal year 2011 payment error rate with related OMB guidance: (1) sampling methods, including the sample size, sample selection, sample representation, and precision of the estimates, and (2) statistical methods used to estimate the error rates and precision. 
As part of this assessment, we did the following:

Conducted interviews with CMS officials and its contractors to clarify our understanding of both the sampling and estimation methodologies.

Reviewed the program manuals for both the payment error and eligibility payment error components of PERM to assess the statistical validity of CMS's methodology.

Reviewed professional statistical literature to validate the suitability of stratified random sampling and ratio estimation to address the particular characteristics of the payment and eligibility data in the state-administered Medicaid program.

Reviewed state-level payment error rates from the most recent year available to determine whether the sample sizes assigned to states met the precision level for payment error sampling in OMB statistical guidance.

We also used the results of these reviews and analyses to identify and assess the reasons for any weaknesses in the estimation methodology and their potential effects on identifying and reporting Medicaid improper payment estimates for fiscal year 2011 and going forward. In addition to reviewing the statistical methodology, we obtained actual payment error data from CMS for the seven states selected for our site visits and independently calculated the payment error rates to confirm the calculations done by CMS using the statistical methodology specified in the program manuals. The basis for our site visit selection is discussed later in this appendix. The scope of our review did not include an assessment of individual states' processes or payment systems. We assessed the reliability of the claims and error rate data by gaining an understanding of the processes the contractors or states use to perform their reviews, including any use of data sharing to determine eligibility, and their quality controls. We determined that the data were sufficiently reliable for our purposes.
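To make the combination of stratified random sampling and ratio estimation concrete, the following is a generic textbook-style sketch, not CMS's actual PERM formula. The strata sizes, sample draws, and dollar amounts are invented for illustration: sampled claims in each stratum are expanded by the stratum weight, and the error rate is estimated as the ratio of estimated error dollars to estimated paid dollars.

```python
# Generic stratified ratio estimator of a payment error rate.
# Illustrative only; not CMS's actual PERM computation. The universe
# counts and dollar amounts below are hypothetical.

def ratio_estimate(strata):
    """Estimate an error rate (error dollars / paid dollars).

    strata: list of dicts with keys:
      "N"      -- number of claims in the stratum universe
      "sample" -- list of (paid_amount, error_amount) for sampled claims
    """
    est_errors = 0.0
    est_paid = 0.0
    for s in strata:
        n = len(s["sample"])
        weight = s["N"] / n  # expansion weight N_h / n_h for stratum h
        est_errors += weight * sum(err for _, err in s["sample"])
        est_paid += weight * sum(paid for paid, _ in s["sample"])
    return est_errors / est_paid

# Hypothetical strata: many small claims, a few large ones.
strata = [
    {"N": 10_000, "sample": [(100.0, 0.0), (250.0, 25.0), (80.0, 0.0)]},
    {"N": 2_000, "sample": [(5_000.0, 500.0), (7_500.0, 0.0)]},
]
print(round(ratio_estimate(strata), 4))
```

Stratifying by claim size and estimating the rate as a ratio of dollar totals keeps a handful of large claims from swamping the estimate, which is one reason this design suits payment data with highly skewed dollar amounts.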
To further determine the extent to which CAPs have been developed to reduce Medicaid payment error rates and whether these plans addressed the types of errors identified, we did the following:

Reviewed agency policies and procedures related to the development of PERM CAPs and CAPs for all 50 states and the District of Columbia, which are used to address the root causes of improper payments identified from the PERM reviews.

Conducted interviews with officials from CMS related to its oversight role and its own initiatives for reducing Medicaid improper payments.

Reviewed CMS's error rate reduction plans and initiatives to reduce Medicaid improper payments.

Reviewed the reported causes of improper payments as outlined in HHS's fiscal year 2011 AFR.

Assessed CMS's process for monitoring state corrective actions and its methodology for measuring the effectiveness of corrective actions to reduce improper payments.

As part of our review of states' CAPs, we assessed whether they addressed issues identified in fee-for-service, managed care, and eligibility reviews; included the required elements as outlined by CMS; and evaluated the effectiveness of implemented corrective actions. The scope of our review did not include an assessment of individual states' implementation of their CAPs. In addition, we conducted site visits at seven state Medicaid offices (California, Florida, Illinois, Michigan, Pennsylvania, South Carolina, and Texas). During these site visits, we interviewed state personnel involved in the PERM process to gain an understanding of how states compile the universes of claims and beneficiaries that are sampled for the PERM reviews, how eligibility reviews are conducted, and how the states develop corrective action plans and work with CMS on corrective actions. We selected these states based on criteria such as the states' federal share of Medicaid payments and errors identified in PERM reviews.
The seven states we visited collectively claimed about 37 percent of the total federal share of Medicaid payments made in fiscal year 2010, the most recent data available at the time of our review for site visit selection. We also selected these states to achieve variation in the error rates found during PERM reviews included in the fiscal year 2011 reporting of the Medicaid improper payment estimate. One state had the highest error rate for eligibility reviews as well as the highest combined error rate. This selection also allowed us to focus on certain states with noted vulnerabilities in program integrity efforts, as well as states with possible best practices. Although this selection does not allow us to generalize findings to all states, and thus to the program as a whole, we believe these state visits, combined with our analysis of CAPs for all states, enable us to determine whether states' corrective actions are addressing the types of improper payment errors that have been identified. We conducted this performance audit from February 2012 to March 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Payment Error Rate Measurement (PERM) program uses a 17-state, 3-year rotation for measuring Medicaid improper payments. Medicaid improper payments are estimated on a federal fiscal year basis through the PERM process. The estimate measures three component error rates: (1) fee-for-service (FFS), (2) managed care, and (3) eligibility. FFS is a traditional method of paying for medical services under which providers are paid for each service rendered.
Managed care is a system under which the state contracts with health plans to deliver health services through a specified network of doctors and hospitals. The health plan is then responsible for reimbursing providers for specific services delivered. States submit quarterly adjudicated claims data from which random samples of FFS and managed care claims are drawn each quarter. Each selected FFS claim is subjected to a data processing review. The majority of FFS claims also undergo a medical review. Managed care claims are subject only to a data processing review. A data processing error is a payment error that can be determined from the information available from the claim or from other information available in the state Medicaid system or other related systems, as well as from outside sources of provider verification (except medical reviews and eligibility reviews). Data processing errors include, but are not limited to, the following: payment for duplicate items, payment for noncovered services, payment for FFS claims for managed care services, payment for services that should have been paid by a third party but were inappropriately paid by Medicaid, pricing errors, logic edit errors, data entry errors, and managed care payment errors. A medical review error is an error that is determined from a review of the medical documentation in conjunction with state and federal medical policies and information presented on the claim. Medical review errors include, but are not limited to, the following: lack of documentation, insufficient documentation, procedure coding errors, diagnosis coding errors, number of unit errors, medically unnecessary services, policy violations, and administrative errors. Eligibility refers to meeting the state's categorical and financial criteria for receipt of benefits under the Medicaid program. States perform their own eligibility reviews according to state and federal eligibility criteria.
An eligibility error occurs when a person is not eligible for the program or for a specific service and a payment for the service or a capitation payment covering the date of service has been made. An eligibility error can also occur when a beneficiary has paid the incorrect amount toward an assigned liability amount or cost of institutional care. The results from the eligibility reviews will include eligibility errors based on erroneous decisions as well as payment errors. The Centers for Medicare & Medicaid Services (CMS) combines the state-reported eligibility component payment error rates to develop a national eligibility error rate for Medicaid. This rate is calculated from the active case payment review findings. For fiscal year 2011 reporting, CMS estimated that the active case error rate was 8.2 percent while the weighted eligibility component error rate was 6.1 percent. Eligibility reviews are also performed on a sample of negative cases. Negative cases contain information on a beneficiary who applied for benefits and was denied or whose program benefits were terminated based on the state agency’s eligibility determination in the month that eligibility is reviewed. CMS calculates only a case error rate for negative cases, because no payments were made. The negative case error rate estimates the percentage of the decisions in which eligibility was incorrectly denied or terminated. For fiscal year 2011 reporting, CMS estimated that the negative case error rate was 4.9 percent. The results of all PERM reviews, including the negative case reviews, are used to determine future sample sizes. 
According to the Centers for Medicare & Medicaid Services (CMS), states’ Medicaid improper payment error rates identified through the Payment Error Rate Measurement (PERM) program may vary because of multiple factors related to differences in how states implement and administer their programs and should be considered in the context of these differences and operational realities. CMS provides each state its specific error rate and data analysis reports to use to develop corrective actions designed to reduce major error causes and to identify trends in errors or other factors for purposes of reducing improper payments. Also, according to CMS, because of the variation of states’ sizes, overall program variations, and different ways that each state’s rate affects the national rate, CMS does not encourage comparisons based solely on error rates. PERM is designed to produce precise error rates at the national level. Therefore, according to CMS, sample sizes per state are relatively small and the precision of state-specific error rates varies significantly. In addition, during the fiscal years 2008 and 2009 measurement cycles, CMS noted instances where some states’ policies differed from CMS’s policies for determining PERM errors. For example, according to CMS, in the review of some eligibility cases, policy and operational differences among states may have affected the degree to which states and providers could obtain documentation to validate payments and eligibility decisions for PERM purposes. According to CMS, states that have simplified eligibility documentation rules through use of self-declaration and administrative renewal often found it harder to obtain necessary documentation for PERM reviews, which were treated as errors for PERM. 
In its fiscal year 2011 agency financial report (AFR), the Department of Health and Human Services (HHS) reported that as required under Section 601 of the Children’s Health Insurance Program Reauthorization Act of 2009, it published a final rule on August 11, 2010, effective September 30, 2010, which required the eligibility reviews to be consistent with the state’s eligibility verification policy rather than reviewing eligibility against a single, federal methodology, which was done in the past. After publication of the final rule, states were allowed to review cases under the new methodology. HHS also reported that based on current regulations, certain cases from the fiscal years 2008 and 2009 measurement cycles, included in the error rates below, would no longer be considered as errors. Table 2 provides a list of state error rates used to determine HHS’s fiscal year 2011 reporting of national Medicaid improper payments. Table 3 provides a list of Medicaid outlays and estimated improper payment error rates reported in the Department of Health and Human Services’ (HHS) agency financial reports (AFR). Table 4 provides the margins of error at the 90 percent confidence level for error rate data presented in figure 2. Table 5 provides a list of error types identified during the fiscal years 2008 to 2010 Payment Error Rate Measurement (PERM) measurement cycles. The following are GAO’s comments on the Department of Health and Human Service’s (HHS) letter dated March 13, 2013. 1. See the “Agency Comments and Our Evaluation” section of this report. 2. We agree with HHS’s comment and modified the report as appropriate. 3. We agree in part with HHS’s comment and incorporated clarifying language to the figure source and Payment Error Rate Measurement (PERM) process details. Also, we added a figure note to acknowledge that certain year 1 and year 2 activities may be delayed until years 2 and 3, respectively. 4. 
We clarified the report to acknowledge that the Centers for Medicare & Medicaid Services’ (CMS) written guidance does not indicate that states could address an error by stating why an action is not being implemented. This relates to our second recommendation, with which HHS concurred, that such guidance should be formally documented in CMS’s PERM Manual. In addition to the contact named above, Phillip McIntyre (Assistant Director), Gabrielle Fagan, Kerry Porter, and Carrie Wehrly made key contributions to this report. Also contributing to this report were Carl Barden, Sharon Byrd, Francine DelVecchio, Patrick Frey, Wilfred Holloway, Jason Kelly, and Jason Kirwan.
Medicaid has the second-highest estimated improper payments of any federal program that reported such data for fiscal year 2011. Also, the Congress has raised questions about reporting and corrective actions related to the Medicaid program's improper payments.

The objectives of this report were to determine the extent to which (1) CMS's methodology for estimating Medicaid improper payments follows OMB guidance and produces reasonable national and state-level estimates and (2) corrective action plans have been developed to reduce Medicaid payment error rates and whether these plans address the types of payment errors identified. To address these objectives, GAO analyzed CMS's policies and procedures against federal guidance and standards for estimating improper payments and developing related corrective actions to address errors. GAO also reviewed the results of all state-level reviews and conducted site visits at selected states that either received relatively large amounts of Medicaid payments or had varying rates of estimated improper payments, including states with possible best practices. GAO also met with cognizant CMS officials and contractors.

The Centers for Medicare & Medicaid Services' (CMS) methodology for estimating a national improper payment rate for the Medicaid program is statistically sound. However, CMS's procedures did not provide for updating state data used in its methodology to recognize significant corrections or adjustments after the cutoff date. The Office of Management and Budget (OMB) requires that federal agencies establish a statistically valid methodology for estimating the annual amount of improper payments in programs and activities susceptible to significant improper payments. CMS developed the Payment Error Rate Measurement (PERM) program in order to comply with improper payment estimation and reporting requirements for the Medicaid program.
Under the PERM methodology, CMS places states in one of three cycles, and each year one of the cycles reports new state-level data based on the previous year's samples. CMS then calculates the national Medicaid program improper payment estimate using these new data for one-third of the states and older data for the other two-thirds of the states. CMS's estimated national improper payment error rate for fiscal year 2011 for the Medicaid program was 8.1 percent, or $21.9 billion. However, CMS's procedures did not provide for considering revisions to state-level Medicaid program error rates used in the CMS methodology for calculating its national Medicaid program error rate. Because corrections to the 2 years of older data after the cutoff date are not officially recognized by CMS, the entire 3-year cycle could be affected. OMB has identified as a best practice that agencies should establish a policy for handling unscheduled corrections to data. Until CMS establishes procedures for considering changes to initially reported state-level error rates that would be significant to the national error rate, CMS is impaired in its ability to ensure that its reported estimate of the extent of national Medicaid improper payments is reliable. CMS and state agencies developed corrective action plans (CAP) related to identified PERM payment errors. However, GAO identified the following areas where improvements were needed in CMS's written guidance to states on CAPs to ensure efficient and effective actions to reduce improper payments. CMS's PERM Manual did not clearly identify the circumstances under which states should consider, and if cost effective include, nonpayment errors (such as certain coding errors that could have but did not result in a payment error) and minimal dollar errors in their CAPs. The PERM Manual and the associated website did not provide complete and consistent information on the required elements to include in a state CAP. 
CMS guidance did not clearly delineate CMS officials' roles and responsibilities for conducting oversight of (1) state CAP submissions to ensure that they contained all of the required elements and adequately addressed errors identified in the PERM reviews and (2) states' progress in implementing CAP corrective actions. Although the nonpayment errors identified in PERM reviews did not result in improper payments, the underlying issues may result in improper payments in future years if not addressed. Also, complete information in state CAPs is necessary for CMS to analyze the progress and effectiveness of the CAPs. Further, clear accountability for continuous monitoring helps ensure that actions are taken to effectively reduce Medicaid improper payments. GAO is making four recommendations to help improve CMS's reporting of estimated Medicaid improper payments and its related corrective action process. The Department of Health and Human Services concurred with GAO's recommendations and cited a number of actions under way and planned.
A number of entities are involved in the global supply chain, including the following:

Importers: Bring cargo from a foreign source into a domestic market. Importers are responsible for submitting ISF data, but an importer may designate an authorized agent to file the ISF on its behalf.

Vessel carriers: Transport cargo from a foreign port to a U.S. port. For foreign cargo remaining on board (FROB), the carrier is considered the importer and is required to submit the ISF for the shipment.

Licensed customs brokers: Assist in clearing cargo through customs by preparing and filing proper entry forms, advising importers on duties to be paid, and arranging for delivery of imported goods to the destination. They also may act as the designated agent for importers in submitting their ISFs.

Shippers: Supply or own the commodities that are being shipped.

Non-vessel operating common carriers: Buy shipping space on a vessel, through a special arrangement with a vessel carrier, and resell the space to individual shippers.

Importers are responsible for submitting the ISF, and the required ISF data elements differ depending on the cargo's destination. For cargo bound for the United States as the final destination, the rule requires importers to submit an ISF-10 to CBP 24 hours prior to vessel loading. For cargo transiting the United States, but for which the United States is not the final destination, the rule requires importers to submit an ISF-5 to CBP prior to loading. See table 1 for further details on the ISF-10 and ISF-5 required data elements.

Carriers transporting containers are to submit the Additional Carrier Requirements, which include the following:

Vessel stow plan: No later than 48 hours after departure from the last foreign port, carriers are to submit vessel stow plans to CBP, to include the vessel operator, voyage number, the stow position of each container, hazardous material code (if applicable), and the port of discharge.
For a voyage of less than 48 hours (short haul), CBP requires that the stow plan be provided any time prior to arrival at the first U.S. port. See figure 1 for an example of a vessel stow plan.

Container status messages: Carriers create CSMs to monitor terminal container movements, such as the loading and discharging of vessels, as well as changes in the status of containers, such as whether they are empty or full. A carrier is to submit CSMs to CBP no later than 24 hours after the message is entered into the carrier's equipment tracking system.

According to the rule, ISF data are intended to improve CBP's ability to identify (target) high-risk shipments. The data elements are processed and provided to CBP's Automated Targeting System (ATS), which is a decision support system that compares cargo and conveyance information against intelligence and other law enforcement data. ATS consolidates data from various sources to create a single, comprehensive record for each U.S.-bound cargo shipment. Among other things, ATS uses a set of rules that assess different factors in the data to determine the risk of a shipment for particular threats, such as national security threats or illegal drug trafficking. For example, one set of rules within ATS, collectively referred to as the maritime national security weight set, is programmed to check for information or patterns that could be indicative of suspicious or terrorist activity. As we have previously reported, the effectiveness of CBP's security strategy depends on CBP's ability to use ATS and other tools to effectively target those shipments that pose the greatest security risks. CBP officials (targeters) use information in ATS to identify which shipments to examine, which may include a non-intrusive inspection (NII) scan or a physical inspection. The ATS risk score, however, is not the sole factor that determines whether a CBP targeter reviews the data for a shipment or whether the shipment is selected for a security examination.
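The weight-set concept described above can be sketched generically: each rule inspects shipment data and, if it fires, contributes its weight to an overall risk score. The rules, field names, weights, and scores below are invented for illustration and do not reflect actual ATS rules, data elements, or thresholds.

```python
# Toy sketch of rule-based risk scoring in the spirit of the ATS
# "weight sets" described above. Every rule, weight, and field name
# here is hypothetical; real ATS rules and scores are not public.

RULES = [
    # (description, predicate, weight) -- all invented for illustration
    ("first-time importer", lambda s: s.get("importer_history", 0) == 0, 30),
    ("high-risk commodity code", lambda s: s.get("commodity") in {"HYPOTHETICAL-1"}, 40),
    ("late ISF filing", lambda s: s.get("isf_filed_late", False), 20),
]

def risk_score(shipment):
    """Sum the weights of all rules whose predicate matches the shipment."""
    return sum(weight for _, predicate, weight in RULES if predicate(shipment))

# Hypothetical shipment: new importer that filed its ISF late.
shipment = {"importer_history": 0, "commodity": "OTHER", "isf_filed_late": True}
print(risk_score(shipment))  # 30 + 20 = 50 under these toy rules
```

Consistent with the report's description, such a score would be only a starting point: a targeter would still review the underlying data and conduct additional research before deciding whether to hold a shipment for examination.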
CBP targeters we spoke with told us that they use the ATS risk score as a starting point for the targeting process, but their decisions are ultimately also based on additional research. CBP targeters are assigned to ATUs located at or near selected domestic ports across the United States. Targeters at the ATUs are to review the information associated with shipments destined for ports within their respective regions to identify those shipments that may be at risk for containing terrorist weapons or other contraband. An ATU may be responsible for targeting shipments arriving at one port or multiple ports within its region. For example, targeters at the Houston ATU are also responsible for targeting shipments that are bound for ports in Freeport and Galveston. CBP targeters at ATUs can review data as soon as carriers and importers submit the required data (in accordance with the 24-hour rule and the ISF rule) and the data are available in ATS. Once a shipment is loaded onto a U.S.-bound vessel, CBP targeters are to continue to review shipment data in ATS because the data can be updated or amended while the shipment is in transit to the U.S. port, resulting in risk score changes. According to CBP policy, targeters at ATUs are required to review data in ATS for all medium-risk and high-risk shipments that are destined to arrive at their respective ports. For example, a targeter may review individual data elements, such as the name of the importer or other supply chain parties, for these shipments. A targeter may also review the weight set rules that detected potential threats and, therefore, contributed to the calculation of the risk score. ATU targeters are also required to hold high-risk shipments for examination unless they can mitigate the risk through additional research or analysis of available information. Targeters may conduct discretionary targeting by running queries of interest for national security purposes or for other efforts, such as counternarcotics. 
For example, targeters may independently create queries to identify items of interest, such as all shipments of a particular commodity or those coming from a particular country of origin. Targeters also have responsibility for enforcing the ISF rule, and ATUs have discretion in conducting enforcement activities based on the individual characteristics—such as volume of shipments or length of voyage for arriving cargo—of the ports that the ATUs oversee. Submission rates for ISF-10s have generally been high, and CBP is taking steps to increase the ISF-5 submission rate. In particular, submission rates for shipments requiring an ISF-10 increased from approximately 95 percent in 2012 to 99 percent in 2015 (see figure 2). According to CBP officials, from the ISF program's beginning in January 2009, the submission rates for ISF-10s generally rose as CBP gave importers time to adjust to the new requirements. From January 2012 through June 2013, however, the submission rates remained at approximately 96 percent, on average. CBP officials told us that they suspect ISF submission rates did not increase during that time period because some importers had become complacent given that CBP had not yet increased its enforcement actions. After CBP began taking greater enforcement actions, beginning in July 2013, the submission rates increased to approximately 98 percent by the end of 2013, and generally continued to rise to 99 percent by the end of 2015. Representatives from two of the three importers we interviewed, and an association representing customs brokers, told us that the biggest challenge in complying with the ISF rule is depending on third parties to provide the information for the required ISF data elements.
A representative for one importer told us his company stationed 23 representatives abroad to educate vendors on the ISF requirements and the penalties associated with filing late and added that he believes these actions have helped increase his company’s ISF submission rate. ISF-5 submission rates were lower than ISF-10 rates during the same time period, ranging from approximately 68 percent in 2012 to 80 percent in 2015 (see figure 3). According to CBP officials, ISF-5 submission rates were lower because, as we previously reported in September 2010, the ISF rule lacked clarity regarding the party responsible for submitting the ISF-5. Specifically, CBP determined that in some cases the rule designated a party as the ISF Importer even though that party had limited access to the ISF data. As a result, CBP determined that it would not be appropriate to enforce the ISF-5 requirement. In July 2016, CBP published a Notice of Proposed Rulemaking, which seeks to address the ISF-5 issue by expanding the definition of ISF Importer to ensure that the party that has the best access to the required information will be responsible for filing the ISF. According to the notice, CBP also proposes expanding the definition of the ISF Importer to include non-vessel operating common carriers for FROB shipments, because when a party uses a non-vessel operating common carrier to book space on a vessel, the vessel carrier frequently does not have access to the required ISF data elements. CBP is also proposing to expand the definition of ISF Importer for immediate exportation shipments, transportation and exportation shipments, and for shipments to be delivered to a foreign trade zone to include the goods’ owner, purchaser, consignee, or agent, such as a licensed customs broker. 
According to the Notice of Proposed Rulemaking, by broadening the definition to include these parties, the responsibility to file the ISF will be with the party causing the goods to arrive in the United States that will most likely have access to the required ISF information. CBP estimates it will publish the final rule in December 2017. We were not able to determine submission rates for the two additional carrier requirements—vessel stow plans and CSMs—for 2012 through 2015. CBP provided us data on vessels that arrived in the United States with vessel stow plans on file during this time period, but the data did not include vessels that arrived in the United States and did not submit vessel stow plans. As a result, we were not able to determine carriers' compliance with the requirement to submit vessel stow plans. CBP provided examples of daily reports it produced calculating the acceptance rate of vessel stow plans submitted, but it has not comprehensively calculated submission rates over time. According to CBP officials, carriers' overall compliance with stow plan submissions is likely nearly 100 percent given that targeters at ATUs follow up with carriers prior to vessel arrival if they have not yet submitted the vessel stow plan. Similar to vessel stow plans, CBP provided us data on the number of CSMs it receives, but is not able to produce data on the number of CSMs it should have received. Carriers generate CSMs in their individual data systems to capture movements and status changes, and CBP officials told us they do not have direct access to carriers' private data systems to know if a CSM has been created and is, therefore, required to be submitted. As a result, we were not able to determine carriers' compliance with the requirement to submit CSMs. CBP has processes for monitoring daily whether importers and carriers have submitted required ISFs and vessel stow plans, but not CSMs.
In particular, CBP headquarters officials told us they review daily reports on ISF-10 and ISF-5 submission rates at each U.S. port to monitor the overall level of compliance with the ISF requirement. For all shipments scheduled to arrive at U.S. ports in approximately 2 days, CBP calculates the percentage of shipments that have ISFs. For example, for shipments scheduled to arrive in the United States on September 20, 2015, CBP generated a report on September 18, 2015, that indicated that 21,114 shipments out of 21,593 shipments (about 98 percent) requiring ISF-10s had an ISF. Additionally, four of the five ATUs we visited conduct queries in ATS to identify shipments arriving in the near future without ISF-10s. Similar to ISFs, CBP generates daily reports on vessels scheduled to arrive in the United States without vessel stow plans on file. Also, all five ATUs we visited have a process to identify arriving vessels with missing stow plans and coordinate with the responsible carriers to obtain those stow plans prior to the vessels arriving at their first U.S. port. CBP officials stated that they are not able to comprehensively monitor CSM submissions because, as previously discussed, CBP does not have access to carriers' private data systems to know if a CSM has been created and if it was provided to CBP within 24 hours of being entered in the carrier's system. However, as we observed during our ATU visits, targeters can identify if CSMs were not sent to CBP based on their current knowledge of a container's location when reviewing other sources of information. CBP primarily uses two types of enforcement actions—ISF holds and liquidated damages claims (LDCs)—to enforce compliance with the ISF rule among importers and carriers. An ISF hold can prevent a shipment from leaving the U.S. port of arrival, and an LDC is similar to a monetary fine or penalty.
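The daily submission-rate report described above amounts to a simple ratio of shipments with an accepted ISF-10 to shipments scheduled to arrive; the sketch below illustrates that calculation using the September 18, 2015, figures from the text. The record structure and field names are illustrative only, not CBP's actual systems.

```python
# Illustrative sketch of the daily ISF-10 submission-rate calculation
# described above; field names and records are hypothetical.

def submission_rate(shipments: list[dict]) -> float:
    """Percentage of scheduled shipments requiring an ISF-10 that have one on file."""
    required = [s for s in shipments if s["requires_isf10"]]
    filed = sum(1 for s in required if s["isf10_on_file"])
    return 100.0 * filed / len(required) if required else 100.0

# The figures from the September 18, 2015, report cited in the text:
arrivals = ([{"requires_isf10": True, "isf10_on_file": True}] * 21114 +
            [{"requires_isf10": True, "isf10_on_file": False}] * (21593 - 21114))
print(round(submission_rate(arrivals), 1))  # 97.8, i.e., about 98 percent
```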
Upon implementation of the ISF rule in January 2009, CBP delayed enforcement for 1 year to give the trade community time to adjust to the rule's requirements. In January 2010, CBP extended the period of delayed enforcement while beginning to take some limited enforcement actions against noncompliant importers by placing their shipments on hold. In July 2013, CBP began full enforcement of the ISF rule by authorizing ATUs to issue LDCs. The use of ISF holds also increased at that time. Figure 4 shows key changes in CBP's enforcement of the ISF rule over time. ISF Holds: In June 2010, CBP authorized ATUs to hold all shipments with no ISF-10 on file. Depending on an ATU's individual enforcement policy, the shipment could remain on hold until an ISF is filed, be scanned by NII equipment, or be physically inspected. For example, two of the five ATUs that we visited do not remove an ISF hold from a noncompliant shipment until the ISF-10 is submitted. Another ATU that we visited sends shipments for physical inspection if an ISF-10 has not been submitted within 96 hours of a shipment's arrival. On the basis of our analysis of CBP data, from 2012 through 2015, ATUs placed approximately 181,000 shipments on ISF hold, representing about 20 percent of shipments arriving at U.S. ports without an accepted ISF-10. Figure 5 shows the number of shipments ATUs placed on hold from 2012 through 2015. Liquidated Damages Claims: In July 2013, CBP authorized ATUs to issue LDCs to noncompliant importers and carriers for failure to submit ISFs, vessel stow plans, or CSMs to CBP. The specific amount of an LDC depends on the type of violation. For example, late submission of an ISF-10 can result in a $5,000 LDC, while late filing of a vessel stow plan can result in a $50,000 LDC. From May 2014 through June 2016, before imposing an LDC on an importer, an ATU had to document three prior violations, give a warning to the importer for each violation, and obtain CBP headquarters' approval.
In June 2016, CBP authorized ATUs to issue LDCs to importers without documenting three prior violations or obtaining headquarters’ approval. LDCs for carriers still require headquarters’ approval but do not require an ATU to document three prior violations. From 2013 through 2015, ATUs issued 67 LDCs to 20 importers and 12 carriers (see figure 6). While CBP generally enforces the ISF rule requirements to submit an ISF-10 and vessel stow plan, it has not enforced the requirement that carriers submit CSMs. None of the targeters at the ATUs we visited had initiated any enforcement action (i.e., issued an LDC) against carriers for not submitting CSMs, and we found no instances of an LDC issued for CSM noncompliance in our analysis of CBP’s enforcement data. According to CBP policy, CBP’s enforcement strategy is designed to maximize importers’ and carriers’ compliance with the ISF rule, which requires carriers to submit CSMs to CBP no later than 24 hours after the CSM is entered into the carrier’s equipment tracking system. Targeters at four of the five ATUs we visited said that CSMs are useful when assessing the risk of arriving shipments because they provide a detailed history of containers’ movements. For example, targeters can see if a container was routed in an unusual way or transited a high-risk location. Officials at CBP headquarters told us that ATUs do not have enough resources to issue an LDC for each case of CSM noncompliance because of the very high volume of CSMs—as many as 30 million per month—that CBP receives. Officials at one ATU also told us they do not enforce the CSM requirement because CSMs are often out of date. Although it may not be feasible to determine every instance of CSM noncompliance, targeters may identify cases of noncompliance when reviewing CSMs for containers of interest as they target. 
For example, a targeter at one ATU reviewed a container that had arrived at the port from Guatemala in late April 2016, but the most recent CSM for the container was from early March 2016. Therefore, according to the targeter, CBP likely did not receive the most recent messages from the carrier. CBP could issue LDCs when targeters identify CSM noncompliance during the targeting process. If CBP enforced the CSM requirement when targeters identify noncompliance, carriers would have a greater incentive to submit all CSMs, thus providing CBP targeters with more comprehensive information that could help them better assess the risk of cargo shipments arriving at U.S. ports—the key goal of the ISF program. Using CBP data on ISF holds and ISF-10 submission rates, we analyzed how CBP's use of holds as an enforcement method was associated with ISF-10 submission rates during calendar years 2012 through 2015. Our analysis found that, nationally, the ISF-10 submission rate increased after July 9, 2013, when CBP began its period of full enforcement of the ISF rule and ATUs increased their use of ISF holds (see appendixes I and II). Nationally, the ISF-10 submission rate was about 1.7 percentage points higher on the 30th day after CBP began full enforcement, compared to the day before the policy change. Further, our analysis of CBP data found that ISF-10 submission rates varied across individual ports overseen by ATUs that primarily used LDCs or did not use any enforcement method. Submission rates at the two ports overseen by the ATU that, among the ATUs we visited, used the most LDCs and comparatively few ISF holds remained relatively consistent at about 95 percent before and after July 9, 2013. Additionally, our analysis showed that the ISF-10 submission rates at these ports were lower at various times from July 2013 through 2015 than the rates at the ports overseen by the other four ATUs we visited.
Similarly, the ISF-10 submission rate at the port overseen by an ATU we visited that generally did not take any enforcement actions against noncompliant importers was consistently lower, by approximately 2 to 15 percentage points, than the rates at the ports overseen by the other four ATUs we visited. Nevertheless, submission rates at this ATU increased after July 9, 2013, when CBP began full enforcement, similar to the patterns at the ports with the highest submission rates overseen by three of the ATUs we visited. This increase in submission rates after full enforcement began, without the ATU’s explicit use of holds, suggests that CBP’s broader enforcement policy may have had an implicit deterrent effect. CBP officials said CBP has not assessed the effects of its enforcement actions—ISF holds and LDCs—including how its enforcement strategy could be used to maximize importers’ and carriers’ compliance with the ISF rule. CBP officials told us that, nationally, the ISF submission rate is high—at around 99 percent—and that they credit the overall rise in submission rates since 2009 to CBP’s enforcement efforts. However, submission rates vary at individual ports overseen by ATUs that enforce the ISF requirement differently. Some ATUs use holds and others use a combination of holds and LDCs. ATUs also use different criteria for when they place a hold on a noncompliant shipment or issue an LDC. Some ATUs place holds on shipments without an ISF 24 hours before the vessel arrives at the U.S. port, while other ATUs place holds on shipments 48 or 72 hours before the vessel arrives at the port. Further, ATUs apply different consequences to holds, such as using the hold to take an image of a container’s contents or physically inspecting the contents of a shipment. According to CBP policy, the objective of CBP’s enforcement strategy is to maximize importers’ and carriers’ compliance with the ISF rule. 
However, officials said that CBP has not assessed whether its enforcement actions are helping achieve the agency’s objective of maximizing compliance, particularly among those ports with relatively low compliance rates. For example, officials said CBP has not conducted an evaluation to determine whether a particular enforcement action or consequence of that action is more effective than another. CBP officials said that compliance is already high, with an average national ISF-10 submission rate of about 99 percent. While the national submission rate is high, some ATUs oversee ports with relatively low submission rates. It is possible that submission rates might have been higher at individual ports if CBP had used different enforcement approaches. In a previous report, we reviewed various methods of evaluating programs and found that program evaluations may be needed to examine the extent to which programs are achieving their objectives. Specifically, outcome evaluations can be used to assess program processes to understand how outcomes are produced. We discussed with CBP officials different types of evaluations, such as case studies of individual ports, that would be feasible for it to conduct to evaluate ATUs’ different enforcement methods. An evaluation of the effectiveness of its enforcement actions could help inform CBP’s enforcement strategy and increase compliance at ports with relatively low ISF-10 submission rates. Without such an evaluation at the port level, CBP cannot be assured that its enforcement strategy is meeting the objective of maximizing compliance with ISF rule requirements. CBP officials told us that ISF rule data have improved CBP’s ability to assess the risk of cargo shipments, but evaluating the direct effects of ISF rule data on identifying high-risk shipments is difficult. However, we identified examples of additional information CBP could collect to better evaluate the program’s effectiveness. 
When assessing the risk of U.S.-bound cargo shipments, CBP relies, in part, on the use of ATS, as described earlier. In January 2011, CBP incorporated ISF data into ATS's maritime national security weight set, and since 2011 CBP staff have assessed the performance of the updated weight set against a performance target on a quarterly basis. The results of these assessments show that in 11 of the 12 quarters during calendar years 2013 through 2015, the maritime national security weight set performed better than a random inspection of shipments in identifying contraband. However, determining the direct effect of ISF data on the identification of high-risk shipments is not always possible because a shipment's risk score could be based on a variety of factors other than ISF data. As a result, it is difficult to know the full effect of ISF data alone in identifying shipments that ultimately contained contraband. According to CBP targeters we spoke with, for shipments that ATS identified as high-risk, having the ISF data early in the targeting process, such as names and addresses, and more specific descriptions of cargo than what a manifest provides, helps them better research shipments. Also, some targeters we spoke with have used the ISF data to conduct discretionary targeting and identify shipments for examination that were not already identified by ATS as high risk. According to CBP, vessel stow plans also help CBP assess shipment risk by allowing CBP to identify unmanifested containers—containers and their associated contents not listed on a vessel's manifest—that pose a security risk in that no information is known about their origin or contents. CBP prepares daily reports identifying unmanifested containers arriving in the United States. Further, according to CBP, CSMs help with shipment risk assessments by providing CBP with information about containers' movements and their status (i.e., empty or full) that could indicate heightened security risks.
While CBP officials told us it is difficult to evaluate the direct impact of ISF rule data in identifying high-risk shipments, collecting additional performance information could help CBP assess and demonstrate whether ISF rule data are contributing to the program’s goals. In our 2012 report addressing different types of evaluations for answering varied questions about program performance, we found that a good evaluation design should identify data sources and collection procedures to obtain relevant, credible information to determine how well a program is working. CBP, according to ISF program officials, has not evaluated the effectiveness of the program because it believes that compliance is already quite high, including a 99 percent submission rate for ISF-10s. Although submission rates can be helpful in determining the extent to which the required ISF data are being provided to CBP, it is important to also demonstrate how or whether the ISF rule data are actually achieving the broader program goal of improving CBP’s ability to assess cargo shipments’ risks. For example, tracking the number of unmanifested containers that ATUs discover as a result of reviewing vessel stow plans could better reflect one benefit of the program. Additionally, identifying instances in which ATUs discover or seize contraband as a result of targeters reviewing ISF rule data when conducting discretionary targeting would provide CBP with examples of how the data result in the identification of high-risk shipments. By identifying and collecting such additional information, CBP could better determine whether or how ISF rule data are improving its ability to assess cargo shipment risks and provide greater assurance that the ISF program, including the resources invested, is helping to achieve intended goals. 
Identifying and collecting additional performance information could also provide CBP with useful information for evaluating the effectiveness of the ISF program when it conducts its upcoming, required retrospective review. In accordance with the Regulatory Flexibility Act, CBP is required to evaluate the ISF program in 2018, as part of a 10-year retrospective review. We previously reported practices identified by federal agencies and nonfederal parties that could aid in facilitating useful retrospective reviews, including preplanning to identify data and analysis needed to conduct effective reviews. CBP officials told us they expect to begin planning this year for the 2018 review. Our analysis of ISF data submitted to CBP from 2012 through 2015 showed that some ISFs had missing or invalid country of origin codes—one of the 10 data elements required in an ISF-10. The number of missing and invalid codes is very small relative to the total number of ISF-10s accepted during this time period, but as one of the ISF data elements used to determine a shipment's risk score, it is essential that valid country of origin codes be fed into ATS. We discussed the results of our analysis with CBP officials and, according to CBP, in December 2016, CBP updated the validation rules used by its Automated Commercial Environment system so that the system will no longer accept an ISF unless it includes a valid, allowable country of origin code. We believe the actions that CBP has taken should resolve the invalid country of origin code problem we identified. By implementing the ISF rule, CBP sought to reduce vulnerabilities in supply chain security by requiring importers and carriers to submit advance data that would help CBP better assess the risk of cargo shipments prior to their arrival at U.S. ports.
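The validation change described above, in which the system rejects an ISF whose country of origin code is missing or not on the legend of allowable codes, reduces to a simple membership check. The sketch below is illustrative only: the code list is a tiny hypothetical subset, and the field name is an assumption, not CBP's actual schema.

```python
# Illustrative sketch of validating an ISF's country of origin code against a
# legend of allowable codes, as described above. ALLOWABLE_CODES here is a
# tiny hypothetical subset, not CBP's actual legend.

ALLOWABLE_CODES = {"CN", "DE", "GT", "JP", "MX", "US"}

def validate_isf(isf: dict) -> bool:
    """Accept the ISF only if its country of origin code is present and allowable."""
    return isf.get("country_of_origin") in ALLOWABLE_CODES

print(validate_isf({"country_of_origin": "GT"}))  # True
print(validate_isf({"country_of_origin": "ZZ"}))  # False: code not in the legend
print(validate_isf({}))                           # False: code missing entirely
```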
CBP has taken steps to monitor and enforce the submission of ISFs and vessel stow plans required by the ISF rule, and uses ISF rule data when assessing the risk of arriving cargo shipments. However, CBP could take actions to better enforce compliance and evaluate the effectiveness of the ISF program. For example, by enforcing the requirement that carriers provide CSMs when targeters identify noncompliance, CBP would have more accurate and timely information for its targeters to use in identifying high-risk shipments. The ISF program could also benefit from an evaluation of the effectiveness of ATUs' enforcement methods since determining and implementing the most effective enforcement strategy could increase compliance with the ISF rule at ports with relatively low submission rates. Further, collecting ISF program performance information would allow CBP to better evaluate whether and how effectively the ISF program is meeting its intended goal of improving the identification of high-risk cargo shipments. To enhance CBP's identification of high-risk cargo shipments and its enforcement of the ISF rule, we recommend that the Commissioner of CBP take the following two actions: enforce the ISF rule requirement that carriers provide CSMs to CBP when targeters identify CSM noncompliance; and evaluate the ISF enforcement strategies used by ATUs to assess whether particular enforcement methods could be applied to ports with relatively low submission rates. Further, we recommend that the Commissioner of CBP identify and collect additional performance information on the impact of the ISF rule data, such as the identification of shipments containing contraband, to better evaluate the effectiveness of the ISF program. We provided a draft of the sensitive version of this report to DHS for its review and comment. DHS provided technical comments, which have been incorporated into this report, as appropriate. DHS also provided written comments, which are reprinted in appendix III.
In its comments, DHS concurred with the report's three recommendations and described actions it has planned to address the recommendations by February 28, 2018. DHS concurred with the first recommendation and stated that CBP plans to develop a CSM enforcement policy and, once developed, plans to disseminate the updated enforcement guidance to ATUs. DHS concurred with the second recommendation and stated that CBP will discuss the ISF enforcement strategies used by ATUs during monthly conference calls and will work with ATUs overseeing ports with lower ISF submission rates to identify potential solutions to increase submission rates at those ports. DHS concurred with the third recommendation and stated that it will analyze ISF data from a targeting standpoint to evaluate program performance. Among other things, CBP plans to determine the number of times potential terrorism matches were made against ISF data that were not identified using manifest data. If implemented as planned, these actions should address the intent of the recommendations to improve CBP's enforcement and assessment of the ISF program. We will continue to monitor CBP's efforts in addressing these recommendations. If you or your staff have any questions about this report, please contact me at (202) 512-7141 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. This report addresses U.S. Customs and Border Protection's (CBP) implementation of the Importer Security Filing (ISF) and Additional Carrier Requirements (ISF rule).
More specifically, our objectives were to address: (1) importers’ and carriers’ compliance rates for ISF rule requirements, and the extent to which CBP monitors their compliance; (2) CBP’s actions to enforce the ISF rule and whether its enforcement actions have contributed to increased compliance among importers and carriers; and (3) whether the ISF program has improved CBP’s ability to identify high-risk cargo shipments prior to their arrival in the United States, and the extent to which data submitted under the program are accurate. To determine importers’ and carriers’ submission rates for ISF rule requirements—ISFs, vessel stow plans, and container status messages (CSM)—we obtained CBP data on importers’ and carriers’ compliance with the ISF rule. Specifically, we analyzed CBP’s ISF data to determine national submission rates for ISF-10s and ISF-5s, by month, from January 2012 through December 2015—the 4 most recent years for which data were available at the time of our review. To assess the reliability of CBP’s ISF data, we reviewed the data for obvious errors, such as duplicative or missing fields. We discussed with CBP officials how ISF data are processed and maintained. We also discussed with officials from the Advance Targeting Units (ATU) we visited the reliability of the ISF submission rates for their respective ports. We determined the data were sufficiently reliable to illustrate the national ISF submission rate and for the ports overseen by the ATUs we visited. However, we determined that the data were not sufficiently reliable for determining ISF submission rates at all individual ports because CBP’s data also included shipments associated with the wrong port or with a land port or airport as submitted by carriers to CBP. 
We also obtained CBP data on vessel stow plan submissions; however, we were not able to determine submission rates because CBP was not able to provide us data on vessel stow plans that were required, but ultimately not submitted to CBP. Further, we could not determine submission rates for CSMs because CBP could provide us data on the number of CSMs it received from carriers, but not those it did not receive because CBP does not have access to carriers’ private systems to know when CSMs have been created and should be provided to CBP. To determine the extent to which CBP monitors importers’ and carriers’ compliance with ISF rule requirements, we reviewed daily ISF and stow plan reports used by CBP officials to monitor compliance. We also interviewed CBP officials from the Office of Field Operations (OFO), including the Office of Cargo and Conveyance Security, National Targeting Center-Cargo (NTC-C) and selected ATUs. We selected five ATUs responsible for shipments arriving at eight U.S. ports to reflect ports with a range of ISF submission rates. We used ISF-10 submission rates rather than ISF-5 submission rates as our primary selection criterion because CBP was not enforcing ISF-5 compliance at the time of our review. We selected ATUs based on calendar year 2015 data because it represented the most recent year for which full year data were available at the time of our selection, and CBP officials located at the ATUs selected would likely be more able to provide insights on 2015 data than previous years’ data. Although the results from our visits to the five ATUs are not generalizable to all targeting units, the visits provided us insights regarding how and when ATU officials monitor compliance for the requirements of the ISF rule and the factors that may affect a port’s submission rates. 
We also interviewed a nongeneralizable sample of three importers, three vessel carriers, and three trade industry associations to understand their ability to comply with the ISF rule requirements. Specifically, we asked importers, carriers, and members of the trade industry about the steps they took to comply with the ISF requirements and the factors that may affect compliance with any of the requirements. We selected importers and carriers who had experienced varying levels of CBP enforcement. We selected trade industry associations based on recommendations from CBP and our prior work on cargo security (see below for more detail on our selection criteria). To determine the extent to which CBP has taken actions to enforce the ISF rule and assessed whether its enforcement actions have contributed to increased compliance, we compared CBP’s actions to enforce the ISF rule and assessments of its actions against CBP’s enforcement goals and criteria on conducting outcome evaluations. We reviewed relevant statutes and CBP policies, including CBP guidance to ATUs on enforcing the ISF rule. We spoke with CBP OFO officials from the Office of Cargo and Conveyance Security; NTC-C; and Office of Fines, Penalties, and Forfeitures to understand the steps CBP has taken to enforce the ISF rule and assess the effect of its enforcement actions. The five ATUs we visited are responsible for ports with varying ISF submission rates and were also selected because they used varying enforcement methods. Although the results from our visits to these five ATUs are not generalizable to all ATUs across the United States, the visits allowed us to understand how individual ATUs enforce the ISF rule given the discretion provided by the ISF program. We obtained CBP data on ISF holds and liquidated damages claims (LDCs), which are the two types of enforcement actions that ATUs primarily use to enforce compliance with the ISF rule. 
Specifically, we analyzed hold data to determine the number of holds used by ATUs from 2012 through 2015, the same time period we used to analyze submission rates. We analyzed CBP’s data on LDCs to determine the number of LDCs that ATUs issued for ISF rule noncompliance, as well as the monetary amounts that CBP assessed and collected. We analyzed LDC data from July 2013 through 2015 because CBP authorized ATUs to use LDCs beginning in July 2013, and 2015 was the last full calendar year for which data were available. To assess the reliability of CBP’s enforcement data, we reviewed the data for obvious errors, such as duplicative or missing fields; performed a physical case file review of several cases of LDCs at ATUs that we visited; and discussed with CBP officials the results of our reviews. We also discussed with CBP officials how the hold and LDC data are entered and maintained in the Cargo Enforcement Reporting and Tracking System and the Seized Assets and Case Tracking System, respectively. We found CBP’s data on ISF holds and LDCs to be sufficiently reliable for reporting the number of holds and LDCs used by ATU, and for selecting ATUs to visit. We analyzed the effectiveness of ISF holds for enforcement by developing a statistical model estimating the relationship between ISF holds and the rate at which importers submitted required ISF-10s. To develop the statistical model, we matched data on all shipments that required ISF-10s from calendar years 2012 through 2015 to data on whether importers submitted ISF-10s and whether ATUs placed ISF holds on shipments. This 4-year time period spanned the date when CBP increased enforcement of ISF-10 submissions through holds in July 2013, which allowed us to assess how ISF holds were associated with changes in ISF-10 submission rates. We analyzed the effectiveness of ISF holds at each of the five ATUs we visited to determine whether there were any differences in effectiveness at the ports overseen by those ATUs. 
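At its simplest, the pre/post analysis described above compares ISF-10 submission rates before and after full enforcement began on July 9, 2013. The sketch below illustrates that comparison on synthetic shipment records; it is a simplified illustration, not the statistical model actually used (which appendix II details and which accounts for more than this raw difference).

```python
# Simplified sketch of the pre/post comparison described above: the change in
# the ISF-10 submission rate after full enforcement began on July 9, 2013.
# The shipment records below are synthetic, for illustration only.

from datetime import date

ENFORCEMENT_DATE = date(2013, 7, 9)

def rate(shipments):
    """Submission rate, in percent, for a list of shipment records."""
    return 100.0 * sum(s["filed"] for s in shipments) / len(shipments)

def pre_post_difference(shipments):
    """Percentage-point change in the submission rate after full enforcement."""
    pre = [s for s in shipments if s["arrival"] < ENFORCEMENT_DATE]
    post = [s for s in shipments if s["arrival"] >= ENFORCEMENT_DATE]
    return rate(post) - rate(pre)

# Synthetic data: 96 percent filed before enforcement, 98 percent after.
shipments = ([{"arrival": date(2013, 6, 1), "filed": True}] * 96 +
             [{"arrival": date(2013, 6, 1), "filed": False}] * 4 +
             [{"arrival": date(2013, 8, 1), "filed": True}] * 98 +
             [{"arrival": date(2013, 8, 1), "filed": False}] * 2)
print(pre_post_difference(shipments))  # 2.0 (percentage points)
```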
Although we found that ISF data are not reliable for each port, we determined the data to be sufficiently reliable for our analysis of enforcement at the ports we visited after ATU officials validated their particular data. We could not analyze LDCs because CBP has issued too few LDCs for us to reliably assess their association with ISF submission rates. Appendix II provides technical details on the statistical methods we used. We also interviewed a nongeneralizable sample of three importers, three vessel carriers, and three industry associations to obtain insight on the trade community's views of CBP's enforcement of the ISF rule. We selected importers and carriers that had experienced ISF holds and LDCs during calendar years 2013 through 2015. Specifically, we selected two importers with a consistently high number of holds and one importer with a declining number of ISF holds. We selected three carriers, including (1) the carrier that received the highest number of LDCs among those carriers that received LDCs; (2) the carrier that paid the highest total monetary amount to CBP for LDCs; and (3) the carrier with the second-highest number of LDCs, which also paid the second-highest monetary amount to CBP. We selected trade industry associations that represent importers, exporters, non-vessel operating common carriers, and vessel carriers based on recommendations from CBP and our prior work on cargo security. To determine the extent to which the ISF program has improved CBP's ability to identify high-risk cargo shipments prior to their arrival in the United States, we reviewed available performance data. Specifically, we reviewed the results of CBP's quarterly performance assessments of ATS's maritime national security weight set with ISF data incorporated, for calendar years 2012 through 2015. We excluded the 2012 results because there were limited data to evaluate, resulting in greater uncertainty in the measurement of weight set performance for that year.
We were not able to determine the direct effect of ISF data on the identification of high-risk shipments because there are a variety of factors in addition to ISF data that can affect a shipment's risk score. We discussed with CBP officials their plans for review and assessment of the ISF program consistent with regulatory requirements that call for CBP to conduct a retrospective review of the rule. We also reviewed our prior work on the importance of pre-planning to identify data needed in advance of conducting a retrospective review. We interviewed CBP officials and targeters at the five ATUs we visited to obtain insight on how ISF rule data are used to help assess the risk of arriving cargo. To examine the extent to which the data submitted under the ISF program may be accurate, we analyzed the ISF data for calendar years 2012 through 2015 to assess the accuracy of country of origin data submitted to CBP. Specifically, we compared country of origin data contained in the ISF-10s to CBP's legend of legitimate country of origin codes. We discussed with CBP officials the ISF validation that occurs under its legacy Automated Commercial System and the validation changes incorporated into the newer Automated Commercial Environment, designed to prevent acceptance of ISFs with missing or erroneous country of origin codes. The performance audit upon which this report is based was conducted from November 2015 to May 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We subsequently worked with CBP from May 2017 to July 2017 to prepare this version of the original sensitive report for public release.
This public version was also prepared in accordance with these standards. We evaluated two recent changes to U.S. Customs and Border Protection’s (CBP) enforcement policies. The first change occurred on July 9, 2013, when CBP gave Advance Targeting Units (ATU), responsible for screening arriving shipments, the option to issue Liquidated Damages Claims (LDC), a type of fine, against shipments that did not comply with Importer Security Filing (ISF) requirements. The new policy also expanded ATUs’ ability to hold cargo shipments without proper ISF-10 submissions at ports of entry. The second change occurred on May 13, 2014, when CBP began issuing three warnings to noncompliant importers before issuing LDCs. In this appendix, we summarize our statistical analysis of CBP administrative data to estimate the association between CBP’s enforcement interventions and rates of ISF-10 submissions and cargo holds. Our target population included 36,137,951 bills of lading and their importers that required ISF-10 submissions from calendar year 2012 through 2015. (A bill of lading is an instrument that allows a carrier to transport merchandise from a shipper to a consignee.) We assigned each bill-importer to the enforcement policy period that applied upon arrival at the United States port of unlading. A disaggregated analysis at the bill-importer level was not feasible, because importer identification numbers were unavailable. The identification number was required to match bills to their importers’ ISF-10 submissions and cargo holds. As a substitute, we analyzed aggregate data by calculating aggregate ISF-10 submission and hold rates, among other statistics, by day for analysis of nationwide data (n = 1461) and by week for analysis of data from specific ATUs (n = 210). Appendix I describes the specific CBP databases we analyzed in more detail. We developed two types of interrupted time-series models of these data, using the “single case” and “comparison group” designs. 
In a single case design, time series data exist for one cross-sectional unit. In our analysis, the single case was the United States as a whole. This version of our analysis estimated how the submission and hold series would have changed with and without each enforcement intervention. In a comparison group design, the analysis is stratified across several groups that received different levels of the intervention, such as units that did not receive the treatment or kept status quo policies. A true comparison group design was not possible here, because CBP changed enforcement policies for all ATUs at the same times. However, ATUs have exercised discretion under the policy to apply different targeting methods. For example, one ATU primarily used LDCs instead of cargo holds, and another ATU used relatively few holds or LDCs. Accordingly, we conducted a version of the analysis that was stratified across ATUs with sufficient data. This allowed us to estimate how trends and associations varied across ATUs. Our single case analysis used binomial generalized linear models (GLMs) to reflect that the outcomes of interest are counts and proportions of ISF-10 submissions and holds from a fixed population of bill-importers. Binomial models ensure that predictions and confidence intervals remain within the unit interval. In addition, binomial models naturally accommodate the heteroscedasticity likely to exist in our data, caused by the varying number of bill-importers we used to estimate aggregate statistics at each time. Our models took the following general form, with results from the third and most complex version reported below:

Yt ~ Binomial(nt, πt), with Logit(πt) = β0 + β1T + β2P1(T) + β3P2(T) + β4P1(T)T + β5P2(T)T + γ′mt

Yt denotes the number of ISF-10s submitted in time period t, observed from a population of nt bill-importers requiring ISF-10s and having submission rate πt. T denotes time rescaled to elapsed units since the sample origin, 2012-01-01. P1(T) and P2(T) indicate whether the observation falls into the first or second policy intervention period, that is, when 2013-07-09 ≤ T < 2014-05-13 or T ≥ 2014-05-13, respectively. mt is a vector of 11 month and 6 day of week indicators (when aggregated daily) to allow for cyclical variation (absorbing April and Wednesday into β0). We used the model to estimate several quantities of interest for a Wednesday in April (i.e., at β0):

Logit-1(β0 + β1T + β2P1(T) + β3P2(T) + β4P1(T)T + β5P2(T)T): estimated ISF-10 submission probability πt.

Logit-1(β0 + β2 + (β1 + β4)T) - Logit-1(β0 + β1T): change in probability at time T from the counterfactual mean outcome in the absence of intervention 1.

Logit-1(β0 + β3 + (β1 + β5)T) - Logit-1(β0 + β1T): change in mean outcome at time T from the counterfactual mean outcome in the absence of interventions 1 and 2.

Logit-1(β0 + β3 + (β1 + β5)T) - Logit-1(β0 + β2 + (β1 + β4)T): change in mean outcome at time T from the counterfactual mean outcome in the absence of intervention 2 but in the presence of intervention 1.

We used Monte Carlo simulation methods to estimate the 99 percent confidence intervals of these quantities. Specifically, letting g(β, x) denote the functions of the parameter and covariate vectors above, we estimated confidence intervals as ĝp = F̂-1(g(β̃, x); p), where p = {.005, .995}, F̂-1 is a standard empirical quantile estimator (the inverse CDF of the sampling distribution), and β̃ is a set of 10,000 random draws from the estimated sampling distribution of β, based on the estimated covariance matrix of β. We also estimated models stratified by ATU to allow for different enforcement processes at different locations. These models are defined as for the single case, except that the intervention effect parameters are stratified across ATUs. That is, Ytj denotes the number of compliant bills in time period t for ATU j, j = {1, 2, …, J}, and Aj indicates the jth ATU (with A1 = 1 for all t as the reference category). We estimated the same quantities of interest as for the single case, but also estimated differences in these quantities between certain ATUs. We estimated confidence intervals using the same Monte Carlo simulation methods as above.
Due to the data reliability problems we discuss in Appendix I, we could reliably link bills, ports, and ATUs only for the ports overseen by the ATUs we visited. We put all other ports into a residual category. We performed several diagnostics to assess model fit and assumptions. We assessed model fit using the model deviance explained, which quantifies how well the covariates explain the variation in the outcome of interest. We assessed the independence of model residuals—a particular concern for time series data—using the Breusch-Godfrey test for autocorrelation and the associated estimate of serial correlation at one lag. Table 2 provides the results of these diagnostics for the most complex version of our models. All models fit the data well, explaining at least 90 percent of the deviance. Although we rejected the Breusch-Godfrey test null hypothesis of zero residual autocorrelation (p < 10-5), all models had residual serial correlations less than or equal to 0.23. To adjust for potentially biased variance estimates due to positive autocorrelation, we used a more conservative α = .01. Aggregate time-series analyses of CBP ISF-10 filing data show that the policy interventions on July 9, 2013, and May 13, 2014, are associated with significant (α = .01) increases in submission and hold rates. Table 3 shows submission and hold rate differences from combined analyses at 8 time period contrasts: (1) 30 and 180 days after Policy Intervention 1, each compared with 1 day prior to Policy Intervention 1; (2) 30, 180, and 540 days after Policy Intervention 2, each compared with 1 day prior to Policy Intervention 1; and (3) 30, 180, and 540 days after Policy Intervention 2, each compared with 180 days after Policy Intervention 1. We found significant increases in submission and hold rates after the two policy interventions, at α = .01 (see table 3).
The largest differences in submission rates were estimated when comparing post-intervention dates to 1 day prior to Policy Intervention 1 (increases of 2 to 3 percentage points), with smaller differences between the periods following Policy Interventions 1 and 2. Hold rates showed similar patterns, with larger changes after Policy Intervention 1 (0.6 to 1.2 percentage points). We estimated differences in submission and hold rates for each ATU and at 8 time periods, along with 99 percent confidence intervals. Key results included: We found both significant and nonsignificant decreases and increases in submission rates after Policy Intervention 2 (-1.3 to 3.4 percentage points). Similar patterns generally existed for hold rates. As a sensitivity analysis, we fit generalized additive models (GAM) to avoid specifying a linear model for trend ex ante. The general form of the GAMs built upon the models described above, such that π = Logit-1(Xβ + s(T)), where X is the vector of time-dependent covariates described in the parametric model above, β is the corresponding vector of coefficients, and s(T) is a smooth function of time to be estimated. The model fits and estimates for GLM and GAM analyses were comparable, suggesting that the GLM results above are robust to the linear trend specification. In addition to the contact named above, Christopher Conrad (Assistant Director), Carla Brown, Lisa Canini (Analyst-in-Charge), Ben Nelson, Ashley Rawson, and Natarajan Subramanian made key contributions to this report. Also contributing to this report were Michele Fejfar, Eric Hauswirth, Susan Hsu, Won Lee, Heidi Nielson, Jeff Tessin, and Wayne Turowski. Maritime Security: Progress and Challenges in Implementing Maritime Cargo Security Programs. GAO-16-790T. Washington, D.C.: July 7, 2016. Supply Chain Security: CBP Needs to Enhance Its Guidance and Oversight of High-Risk Maritime Cargo Shipments. GAO-15-294. Washington, D.C.: January 27, 2015.
Maritime Security: Progress and Challenges with Selected Port Security Programs. GAO-14-636T. Washington, D.C.: June 4, 2014. Supply Chain Security: CBP Needs to Conduct Regular Assessments of Its Cargo Targeting System. GAO-13-9. Washington, D.C.: October 25, 2012. Supply Chain Security: CBP Has Made Progress in Assisting the Trade Industry in Implementing the New Importer Security Filing Requirements, but Some Challenges Remain. GAO-10-841. Washington, D.C.: September 10, 2010.
Cargo shipments can present security concerns as terrorists could use cargo containers to transport a weapon of mass destruction or other contraband into the United States. In January 2009, CBP, within the Department of Homeland Security (DHS), implemented the ISF rule. The rule requires importers and vessel carriers to submit information, such as country of origin, to CBP before cargo is loaded onto U.S.-bound vessels. The information is intended to improve CBP's ability to identify high-risk shipments. GAO was asked to review the ISF program. This report addresses: (1) importers' and carriers' submission rates for ISF rule requirements, (2) CBP's actions to enforce the ISF rule and assess whether enforcement actions have increased compliance, and (3) the extent to which the ISF rule has improved CBP's ability to identify high-risk shipments. GAO, among other things, analyzed CBP's compliance and enforcement data for 2012 through 2015—the most recent data available at the time of GAO's review—and interviewed CBP officials and trade industry members. Through the Importer Security Filing (ISF) and Additional Carrier Requirements (the ISF rule), U.S. Customs and Border Protection (CBP) requires importers to submit ISFs and vessel carriers to submit vessel stow plans and container status messages (CSM). Submission rates for ISF-10s—required for cargo destined for the United States—increased from about 95 percent in 2012 to 99 percent in 2015. Submission rates for ISF-5s—required for cargo transiting but not destined for the United States—ranged from about 68 to 80 percent. To increase ISF-5 submission rates, CBP published a Notice of Proposed Rulemaking in July 2016 to clarify the party responsible for submitting the ISF-5. GAO could not determine submission rates for vessel stow plans, which depict the position of each cargo container on a vessel, because CBP calculates stow plan submission rates on a daily basis, but not comprehensively over time. 
CBP officials noted, though, that compliance overall is likely nearly 100 percent because Advance Targeting Units (ATU), responsible for identifying high-risk shipments, contact carriers if they have not received stow plans. GAO also could not determine submission rates for CSMs, which report container movements and status changes, because CBP does not have access to carriers' private data systems to know the number of CSMs it should receive. CBP targeters noted that they may become aware that CSMs have not been sent based on other information sources they review. CBP has taken actions to enforce ISF and stow plan submissions, but has not enforced CSM submissions or assessed the effects of its enforcement actions on compliance at the port level. ATUs enforce ISF and vessel stow plan compliance by using ISF holds, which prevent cargo from leaving ports, and issuing liquidated damages claims. CBP has not enforced CSM submissions because of the high volume it receives and lack of visibility into carriers' private data systems. However, when CBP targeters become aware that CSMs have not been received based on reviewing other information sources, taking enforcement actions could provide an incentive for carriers to submit all CSMs and help targeters better identify high-risk cargo. GAO's enforcement data analysis shows that ATUs used varying methods to enforce the ISF rule and that ports' ISF-10 submission rates varied. By assessing the effects of its enforcement strategies at the port level, CBP could better ensure it maximizes compliance with the rule. CBP officials stated that ISF rule data have improved their ability to identify high-risk cargo shipments, but CBP could collect additional performance information to better evaluate program effectiveness. Evaluating the direct impact of ISF rule data in assessing shipment risk is difficult; however, GAO identified examples of how CBP could better assess the ISF program's effectiveness. 
For example, CBP could track the number of containers not listed on a manifest—which could pose a security risk—it identifies through reviewing vessel stow plans. Collecting this type of additional performance information could help CBP better assess whether the ISF program is improving its ability to identify high-risk shipments. This is a public version of a sensitive report that GAO issued in May 2017. Information CBP deemed Law Enforcement Sensitive has been deleted. GAO recommends that CBP (1) enforce the CSM requirement when targeters identify carriers' noncompliance; (2) evaluate the effect of enforcement strategies on compliance at the port level; and (3) collect additional performance information to better evaluate the effectiveness of the ISF program. DHS concurred with the recommendations.
In prior reports, we have identified major risks associated with DOD's spare parts inventory management practices. In 1996, and then again in 1998, we reported that the Navy's logistics system often could not provide fleet customers with necessary parts in a timely manner, despite billions of dollars invested in inventory. In 2001, we found that chronic spare parts shortages had degraded combat readiness for selected Navy weapon platforms and had also contributed to problems in retaining skilled maintenance personnel. Navy item managers interviewed for the 2001 report indicated that spare parts shortages resulted from inaccurate spare parts requirements forecasts, as well as contracting problems with private companies and repair delays at military and privately owned facilities. Most recently, in our January 2003 report on major management challenges and program risks, we recommended that DOD take action to address key spare parts shortages as part of a long-range strategic vision and a departmentwide, coordinated approach for improving logistics management processes. In addition to the risk associated with ineffective spare parts management practices, DOD recently voiced concerns over the adverse impact spare parts shortages have on readiness of weapon systems. In its August 2002 report on its inventory management practices, DOD said that the models it uses to determine inventory purchases are generally biased towards the purchase of low-cost items with high demands, not necessarily the items that would improve readiness the most. The report recommended that the services improve their ability to make inventory purchase decisions based on weapon system readiness. Furthermore, the report recommended that the services' requests for funds to increase inventory investments be justified on the basis of the corresponding increase in weapon system readiness. The Navy provides the fleet with spare parts through a multitiered inventory system.
Retail inventory refers to spare parts that are stored shipside or planeside in accordance with standardized spare parts allowance lists. Retail level spare parts are funded by the Navy’s procurement and operations accounts. Funding for initial outfitting parts is provided by procurement appropriations, while funding for replenishment parts is provided by operations and maintenance appropriations. Wholesale inventory refers to spare parts the Navy buys to replenish retail inventory. Initially Navy program managers tasked with developing weapon systems purchase parts directly from vendors using money from the procurement accounts. However, once a weapon system is fully developed and integrated into the fleet, the Naval Supply Systems Command assumes full responsibility for supporting that system through funding provided by the Navy Working Capital Fund. At this point, fleet customers use funding from outfitting procurement and operations accounts to purchase parts from the Navy’s wholesale inventory. The wholesale system functions as a middleman by purchasing spare parts from vendors with Navy Working Capital Fund dollars, and then reselling these parts to fleet customers. In order to avoid inventory shortages, the wholesale system must accurately forecast demand for spare parts and factor in lead times for procurement and repair actions to mitigate delays in delivery of parts to the fleet. Furthermore, the wholesale system must maintain a cash balance in the Navy Working Capital Fund that approximates 7 to 10 days and, consequently, cannot stock more parts than it expects to resell to the fleet. Sponsor-owned inventory refers to items that program managers purchase with appropriated funds to develop, test, and sustain weapon systems. Program managers store sponsor-owned materials to support work conducted at various locations, including air and sea warfare centers. 
DOD guidance provides, in part, that when items are no longer needed, they may be returned to the wholesale supply system or reissued to other fleet customers. The Deputy Chief of Naval Operations for Fleet Readiness and Logistics is responsible for strategic planning of logistics functions and ensures that the logistics system supports the Navy's readiness objectives. The Naval Supply Systems Command develops inventory management policies, determines spare parts requirements, and formulates the Navy Working Capital Fund budget. Within the Naval Supply Systems Command, the Naval Inventory Control Point is assigned primary responsibility for material management tasks, such as computing requirements and providing procurement, distribution, disposal, and rebuild direction. The Naval Air Systems Command, the Naval Sea Systems Command, and the Space and Naval Warfare Systems Command, collectively referred to as the hardware systems commands, interact with the wholesale supply system to ensure that it procures sufficient quantities of spare parts to satisfy the fleet's allowance requirements. The Navy's servicewide strategic plans do not specifically address means to mitigate critical spare parts shortages. The Navy's fiscal year 2001 High Yield Logistics Transformation Plan focused on improving logistics overall, but did not state how the Navy expects to reduce spare parts shortages. Also, while a key subordinate plan developed by the Naval Supply Systems Command has a strategy to ensure that the availability of spare parts meets required performance levels, its objectives do not specifically focus on mitigating critical spare parts shortages. This subordinate plan does focus on improving supply availability and reducing customer wait time, but does not specifically address mitigation of spare parts shortages.
Although the Navy is developing a new strategy, the Sea Enterprise plan, it has not been published, and therefore we do not know whether it will address ways to mitigate critical spare parts shortages. In fiscal year 2001, the Navy published a servicewide strategic plan—the High Yield Logistics Transformation Plan—that identified initiatives undertaken by its major support commands to improve the service’s logistics overall and to address objectives listed in DOD’s Fiscal Year 2000 Logistics Strategic Plan. While the High Yield Plan contained attributes of an effective strategic plan consistent with the Government Performance and Results Act of 1993 (GPRA), such as long-term goals, objectives, and performance measures, it did not specifically address key objectives for mitigating critical spare parts shortages. The High Yield Plan identified nine major goals, six of which are linked to DOD’s fiscal year 2000 Logistics Strategic Plan, and three that are unique to the Navy. The plan served as a compendium of initiatives undertaken by Navy commands and program offices to improve overall logistics support processes. In total, the plan identified 80 individual initiatives; however, the plan did not contain information that highlighted specific efforts to mitigate spare parts shortages. Navy headquarters officials told us they stopped efforts to report to DOD on the status of the 80 initiatives after DOD published a new logistics strategic plan in June 2002, entitled the Future Logistics Enterprise, that contained several new transformation strategies. The Naval Supply Systems Command Strategic Plan has a strategy to ensure that the availability of spare parts meets required performance levels and includes numerous goals, objectives, and initiatives to improve supply availability. However, this strategy does not specifically focus on mitigating spare parts shortages, nor does it incorporate the objectives of the Navy’s High Yield Transformation Plan. 
In November 2001, the Naval Supply Systems Command updated its 1999 strategic plan to deliver combat capability through delivery of quality supplies and services on a timely basis. The plan identified 5 major goals, 16 implementation strategies, and 63 individual initiatives. Implementation status of each initiative is recorded in an automated tracking system and briefed to command leadership several times each year. Under its third goal—to achieve and demand the highest quality of service—one of the Command’s strategies is to ensure the availability of spare parts meets required performance levels, but its objectives do not specifically focus on mitigating critical spare parts shortages, nor does the strategy link directly to higher-level DOD and Navy strategic plans. Navy officials told us they expect to start updating the plan during the summer of 2003. Without a focus on mitigating spare parts shortages and linkage to the higher-level plans, the Navy may lack assurance that its overall strategic goals and objectives will be effectively addressed and that its key initiatives will systematically address spare parts shortages. In October 2002, the Navy embarked on a new servicewide strategic planning effort, referred to as the Sea Enterprise, that seeks to improve the efficiency and effectiveness of all aspects of the service’s business operations, including organizational alignments, refining logistics requirements, and reinvesting savings to purchase new weapon systems and enhance combat capability. As of March 2003, the Sea Enterprise plan had not been published, and the extent to which the new plan will address the mitigation of critical spare parts shortages is unclear. Navy documents indicate that officials were reviewing hundreds of ongoing and planned initiatives for improving business operations, and that they planned to select projects with the highest potential savings. 
The Navy expects to have preliminary project plans and savings estimates available for consideration in the fiscal year 2005 budget deliberations. Once key initiatives are identified for the Sea Enterprise plan, a board of directors will oversee development of implementation plans and monitor progress toward achieving anticipated savings. We reviewed six initiatives that the Navy has undertaken to improve the economy and efficiency of supply support. While some of these initiatives have improved the overall supply availability and reliability of some spare parts, we cannot measure their potential for mitigating critical parts shortages and their impact on weapon system readiness because they were not designed to specifically address this problem. The initiatives included projects to (1) obtain more cost effective and timely support from contractors, (2) improve the efficiency of inventory management practices, and (3) increase the reliability of parts provided to military customers. Performance based logistics contracts have generally improved supply support to the fleet, but the Navy does not assess the extent to which better supply availability mitigates critical spare parts shortages or enhances the fleet’s combat readiness. Through performance based logistics contracts, the Navy has outsourced a broad range of supply support activities that have traditionally been carried out by the Navy’s organic supply system, such as warehousing, repairing and distributing parts, and determining spare parts requirements. According to Navy and interim DOD guidance, the primary objective of performance based logistics is to improve supply support while maintaining or reducing costs. Under more extensive partnerships, contractors may redesign weapon system configurations to optimize system performance, and may also reengineer or replace spare parts to mitigate the effects of scarcity or obsolescence. 
In the most advanced partnerships, contractors provide technical and engineering support to fleet customers, perform weapon system overhauls, and guarantee timely delivery of quality spare parts to fleet customers. Our review of Navy aggregate and individual program statistics indicated that performance based logistics arrangements have generally improved supply support to the fleet. From January 2001 to July 2002, the Navy's quarterly supply availability averaged 79.6 percent through a combination of organic and contractor supply support. Without performance based logistics contracts factored into these data, quarterly supply availability averaged 71.5 percent. We judgmentally examined 10 of 118 active performance based logistics contracts and found that one contract had no specific vendor performance standards. In 7 of the 9 remaining contracts, we found that vendors either satisfied or exceeded supply support goals. Moreover, for select cases in which data were available for comparison with baseline data, we found that performance based logistics partnerships improved supply support. For instance, one vendor increased availability of parts for an aviation computer system from pre-contract levels of 61 percent to current levels of 100 percent, and filled all 489 outstanding backorders within 13 months after the contract was awarded. Similarly, another vendor increased overall supply availability for the ARC-210 radio assembly from pre-contract levels of 60 to 70 percent to a current average of 91 percent. Despite positive supply availability effects attributed to performance based logistics contracting, we could not measure the initiative's overall impact on spare parts shortages. These contracts vary widely in scope and, according to Navy policy, are intended to improve logistics support while maintaining or reducing costs.
Consequently, these contracts do not aim specifically to increase the availability of spare parts that experience chronic shortages, and are generally approved only if they can generate savings for the Navy’s wholesale supply system. While Navy officials stated that improved supply support is linked to enhanced equipment readiness, we could not determine whether performance based logistics contracts have mitigated the readiness effects of spare parts shortages. The Navy’s inability to quantify cost savings—or losses—generated by individual contracts impedes the service’s ability to prove the initiative is achieving its objective. Navy and interim DOD guidance specify that each performance based logistics contract is to improve supply support to the warfighter without increasing cost; however, the Navy does not track individual contract savings. Instead, Navy officials approximate aggregate savings attributable to performance based logistics contracting. Although the Navy reports that it has reduced estimated expenditures for spare parts and labor by approximately $100 million for the fiscal year 2000-2005 period, it does not have the information that its leadership and other decision makers are likely to need in order to determine whether individual contracts satisfy the initiative’s cost-saving objective. Under the Total Asset Visibility initiative, the Naval Supply Systems Command has established asset visibility over a large portion of the service’s spare parts inventories. However, changing completion milestone dates, difficulties in linking data contained in numerous nonstandard automated data systems, and concerns over the lack of top-level management emphasis—including effective business rules and incentives that encourage customers to share parts—have hindered the initiative’s timely and effective implementation.
Because of these limitations, the extent to which this initiative will help mitigate critical spare parts shortages and improve weapon system readiness is uncertain. The Supply Systems Command has recognized these difficulties and prepared a long-term plan to centrally manage supply, but the Navy has not yet approved the plan for implementation. The Total Asset Visibility initiative is intended to facilitate redistribution of materials between Navy customers by allowing Navy supply managers to fill critical orders from excess or unneeded stocks held by other Navy customers. DOD’s Material Management Regulation, issued in May 1998, requires the services to provide timely and accurate information on the location, movement, and status of all material assets. The regulation stipulates that wholesale-level inventory managers should have visibility of all in-storage materials, including assets held by military units, maintenance depots, and shipyards. Item managers may use this information to mitigate critical spare parts shortages by redistributing items from one customer’s storage facility to another customer with more urgent needs. In our October 1999 report, we stated that the Navy characterized its Total Asset Visibility program as a “mature” initiative that would be fully implemented by September 2002. To improve the potential for timely and effective implementation, that report also recommended that the Navy establish clearly defined goals, quantifiable performance measures, and implementation milestones to better assess the initiative’s impact on supply system effectiveness. However, the Navy has yet to establish such a plan. At the end of fiscal year 2002, Navy data indicated that the Navy had established asset visibility over 96 percent of the $42 billion inventory that the service had targeted for inclusion under the program.
In May 2003, a Navy official stated that this data collection did not target the full range of government-owned materials kept at naval shipyards, aviation repair depots, and commercial contractor facilities. Our work shows that while Navy supply managers currently have visibility over Navy-managed items held at naval retail storage facilities and most sponsor-owned inventories kept at naval warfare centers, access to unneeded materials held at these locations must be arranged on a case-by-case basis. For example, the Navy has implemented an inventory management visibility system for its retail-level spare parts inventories held aboard ship and at major shore stations. However, these assets are “owned” by the operating fleet commands, and in practice are not subject to redistribution outside the command. An official at the Naval Inventory Control Point (the activity responsible for managing wholesale-level inventories and processing customer requisitions) stated that while item managers have visibility over retail-level inventories held aboard ship and at shore stations controlled by the fleet operational commands, they rarely ask for a part, even though the retail-level inventories may have accumulated parts in excess of local needs. The use of the asset visibility system as a tool for mitigating spare parts shortages between Navy commands could benefit from the development of business rules and management incentives that encourage Navy customers to relinquish control and ownership of unneeded supplies. Progress toward achieving total asset visibility and accountability at some storage locations has been hampered by difficulties in linking data contained in numerous nonstandard information systems. For example, after a 5-year test, the Naval Sea Systems Command terminated efforts to establish centralized visibility and accountability over an estimated $4.3 billion in government-furnished materials provided to commercial shipbuilders.
The test was terminated for a variety of reasons, including the lack of common information systems that would allow the transfer of data between commands, the lack of coordinated management emphasis, and difficulties changing legacy contractual reporting requirements. Moreover, at the Naval Air Systems Command, officials stated that their subordinate activities currently record inventory data on four different management information systems. Recognizing current Navy supply system inefficiencies, the Naval Supply Systems Command has proposed a single worldwide inventory management system whereby a national inventory manager would determine requirements for all wholesale inventories, retail ashore, and afloat allowances. The national inventory manager would direct the distribution of materials and maintain day-to-day visibility and control of spare parts inventories regardless of location or funding source. The national inventory manager would also retain ownership of the material until the items were consumed, at which time the stock fund would receive a reimbursement to finance the cost of stock replenishment. At the time of our review, the Navy had not approved the plan. Naval Supply Systems Command representatives believe this concept would eliminate many of the redundancies and inefficiencies in the current inventory management framework. In addition, they said the effectiveness of the concept would depend on the full and timely implementation of a common information system shared by all Navy customers, regardless of location or their place in the command hierarchy. Navy officials are planning to replace many of their nonstandard information systems within the next 5 to 10 years. The Navy’s Logistics Engineering Change Proposal initiative has demonstrated potential to enhance equipment readiness by improving the quality of spare parts, and thus reducing the frequency of maintenance actions.
However, our work shows that the initiative’s impact may be limited by criteria that require rapid return on investment in spare parts engineering projects and discourage large investments in such projects. By reducing expenditures on low-quality items, this initiative has generated measurable savings for the Navy supply system, and could yield further savings if expanded to include more types of spare parts. The Navy undertook the Logistics Engineering Change Proposal initiative to systematically provide Navy customers with more reliable and less costly spare parts. This initiative’s primary objective is to make up-front investments in high-quality replacement parts as a means of avoiding higher long-term material and labor costs associated with frequent replacement of low-quality items. Through the engineering change proposal process, the Navy identifies items with high failure or turnover rates, and then conducts a logistics and engineering assessment to determine how the quality of these items could be improved. In some instances, parts are reengineered; in other cases, alternative parts are tested for reliability and system compatibility, and then installed to replace lower quality items. To ensure that engineering change proposals offer a cost-effective alternative to standard components, the Navy conducts a cost analysis for each project. To be approved, projects must be expected to realize a 2-to-1 return on investment over the first 5 years after the redesigned part is initially installed in the fleet. We reviewed 21 projects in which reengineered parts had been fully installed in operational equipment. All 13 projects for which comparative performance data were available demonstrated gains in reliability. These reliability improvements implicitly mitigate spare parts shortages and enhance fleet readiness by reducing the frequency of maintenance actions. 
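The approval criterion described above can be sketched as a simple screen (an illustrative sketch only; the function name and example figures are assumptions, not the Navy's actual cost-analysis model):

```python
def roi_screen(investment, projected_5yr_savings, threshold=2.0):
    """Return (roi, approved) for a proposed engineering change project.

    Under the criterion described in the report, a project is approved
    only if projected savings over the first 5 years after the redesigned
    part is installed are at least twice the up-front investment.
    """
    roi = projected_5yr_savings / investment
    return roi, roi >= threshold

# Hypothetical project: $10 million invested, $25 million in projected
# 5-year savings.
roi, approved = roi_screen(10.0, 25.0)
print(f"ROI {roi:.1f}:1 -> {'approved' if approved else 'rejected'}")
```

Under this reading of the criterion, a project projecting $15 million in savings on the same $10 million investment would be rejected, which is the dynamic the report describes for high-cost, high-reliability candidates.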
The Replacement Inertial Navigation Unit—a navigation component installed on P-3 aircraft—illustrates this point. According to Navy documents, the original item was no longer in production, and was costly to maintain due to high failure rates. The replacement model, however, boosted the part’s mean time between failure from 56 to 5,375 hours, and is expected to save the Navy approximately $69.4 million in spare parts expenditures over the lifetime of the project. While material quality improvements resulting from engineering change projects implicitly enhance fleet readiness, we believe that this initiative’s scope and overall impact are limited because of restrictive return on investment criteria. Navy officials told us several potential projects had been rejected in recent years due to insufficient projected return on investment. For example, officials said that a reengineered F-18 navigation component that offered superior reliability over the existing component was rejected because its predicted return on investment would fall substantially below that threshold. Moreover, they stated that the Navy considered the project’s anticipated first-year investment of approximately $155 million unaffordable. Figure 1 illustrates the changes in investment criteria and funding since the inception of the engineering change initiative. As shown, the return on investment expectation ranged from break even in 5 years to the current criterion, which requires a 2-to-1 return on investment over the first 5 years after the redesigned part is initially installed. In addition, the amount of available investment funding declined from more than $100 million in fiscal years 1997 and 1998 to a current total of about $40 million. Because of the long-term nature of these investments, they typically do not yield savings in the early years while initial costs are being incurred.
According to the Navy’s most recent assessment, 62 approved aviation projects yielded about $2 million in net savings from fiscal year 1997 through fiscal year 2002. These projects, along with 11 forthcoming ones, are expected to generate additional savings of approximately $785 million from fiscal year 2003 to fiscal year 2010. In addition, Navy officials noted that unmeasured savings may accrue through cost avoidance resulting from reduced maintenance, processing, and transportation of broken or defective items. Navy officials told us that the service is reviewing plans to facilitate project approval by relaxing current return on investment criteria. Management attention to the investment criteria could expand the number of eligible parts, help mitigate spare parts shortages, and increase the readiness return on investment. The Navy’s Serial Number Tracking initiative shows potential to improve supply support, as well as increase fleet readiness, by strengthening controls over in-transit items and facilitating weapons system maintenance. Furthermore, according to preliminary Navy estimates, the Serial Number Tracking initiative will likely generate savings that exceed the costs of program implementation. However, we could not assess its impact on spare parts shortages because the initiative will not be fully implemented until May 2004, and because the initiative’s performance metrics are not designed to measure its impact on spare parts shortages. The Naval Supply Systems Command undertook this initiative in response to the Navy’s Aviation Maintenance Supply Review, which recommended that specific actions be taken to reduce overall maintenance and supply costs, increase readiness, and make systemic improvements in support of naval aviation forces. Since 1990, we have regarded DOD inventory management as a high-risk area because of vulnerabilities to waste, fraud, abuse, and mismanagement. 
In 1999, we reported that the Navy was unable to account for over $3 billion in inventory that was in-transit within and between storage facilities, repair facilities, and end-users. A business case analysis commissioned by the Naval Supply Systems Command in support of the Serial Number Tracking initiative found that improper accounting of in-transit repair items generates considerable material losses, as well as additional labor costs associated with lost maintenance history data and reconciling records for lost or missing parts. The Navy’s Serial Number Tracking program has potential to enhance the efficiency of maintenance and repair processing in a number of ways. Once the program is fully implemented, parts transferred between Navy customers, storage facilities, and repair sites will be marked with bar codes, which maintenance and supply personnel will scan at every transfer point to record each item’s transit history. Navy customers will then be able to access this information by logging in to a centralized database. The Navy expects this process to minimize the risk of in-transit part loss, as well as the chance of maintenance record errors resulting from manual data entry. In addition to bar coding, the Serial Number Tracking initiative provides for select aviation components to be outfitted with computer chips, called contact memory buttons, that store critical maintenance history and warranty information. As parts circulate through the repair pipeline, maintenance personnel will be able to scan the memory buttons in order to identify what maintenance work has been previously executed, and then determine what additional maintenance actions should be taken. According to the Navy’s analysis, serial number tracking will streamline maintenance work by facilitating identification of maintenance problems and part defects, measurement of part reliability, and investigations of spare part engineering. 
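The transfer-point scanning and centralized transit history the report describes can be pictured with a minimal record type; the class name, serial number format, and locations below are hypothetical, not drawn from the Navy's system design:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TrackedPart:
    """Sketch of a serially tracked item: each scan at a transfer point
    appends a timestamped event to the item's transit history, which in
    the Navy's concept would live in a centralized database."""
    serial: str
    history: list = field(default_factory=list)

    def scan(self, location: str, action: str) -> None:
        stamp = datetime.now(timezone.utc).isoformat()
        self.history.append((stamp, location, action))

part = TrackedPart("ARC210-00123")  # hypothetical serial number
part.scan("shipboard supply", "turn-in")
part.scan("repair depot", "inducted for repair")
print(len(part.history), "transit events recorded for", part.serial)
```

Because every handoff is recorded automatically, an item's location and maintenance history can be reconstructed without the manual data entry that the report identifies as a source of record errors and in-transit losses.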
Moreover, the initiative could reduce time required to complete certain maintenance actions. The Navy has budgeted approximately $58 million over 5 years to implement Serial Number Tracking. This amount includes engineering research to determine which components are compatible with contact memory button technology, installation of contact memory buttons and barcodes, and outfitting maintenance facilities with scanning equipment. Despite these start-up costs, the Navy anticipates that this initiative will yield net savings of more than $193 million over 7 years, resulting primarily from reduced spare parts loss. The Naval Supply Systems Command and its Inventory Control Point staff are implementing a project to redesign and shorten the time required for unserviceable items to be returned to repair facilities. Navy officials told us they anticipate that the reengineered process will reduce the number of unfilled customer requisitions and create efficiencies in the scheduling and repairing of broken parts. At the time of our review, responsibility for overall project management was transitioning from the Naval Supply Systems Command to the Naval Inventory Control Point. Because there is no documented performance plan, the extent to which data will be available to document the initiative’s impact on equipment readiness and mitigation of critical spare parts shortages is unclear. Currently, Navy officials said, the typical unserviceable item is handled and processed 3 to 5 times during an average period of 35.8 days from initial turn-in by the fleet customer to receipt of the broken part at the designated repair activity. The Navy envisions a Web-based system that a sailor aboard ship can query to get immediate shipping and packaging instructions. This will reduce the number of shipping destinations and enable the Navy to reduce overall costs.
However, without a management plan that specifies performance goals and implementation milestones, the Navy cannot be assured that the initiative will be fully implemented and achieve intended results. The Navy’s use of the Readiness-Based Sparing initiative as a criterion for stocking parts aboard ships appears to have potential for improving critical spare parts availability and operational capability of selected weapon systems. However, according to DOD, because this model is not fully supported by current data collection processes, much of the analysis must be developed off-line. Currently, Navy officials stated that they have used readiness based sparing techniques in determining spare parts allowances in support of some older weapon systems and all new systems being provided to the fleet. The Naval Supply Systems Command is continuing to develop computer models that base allowances for weapon system component parts on readiness considerations. Under the traditional approach, allowances are largely based on historical failure rates of individual parts. The Navy’s new readiness-based models are geared to the operational readiness requirements of selected critical subsystems, and consider how random part failures might adversely affect the ability of the installed component to perform the overall mission. Officials explained that the traditional demand-based sparing model works well for mechanical-type parts, which tend to break down at regular intervals as a result of usage. However, experience has shown that newer electronic components have much less predictable failure patterns. To compensate for this, weapon system designers sometimes build in redundancies that enable equipment to continue working even after random part failures occur. 
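A minimal sketch of the kind of computation a readiness-based sparing model performs follows; it assumes Poisson part failures and independence across items, and all failure rates, pipeline times, and allowance quantities are illustrative values, not Navy data or the Navy's actual model:

```python
from math import exp, factorial

def fill_probability(failures_per_day, pipeline_days, spares):
    """P(demand during the resupply pipeline <= spares on hand),
    assuming part failures follow a Poisson process."""
    mean = failures_per_day * pipeline_days
    return sum(mean ** k * exp(-mean) / factorial(k)
               for k in range(spares + 1))

def system_availability(items):
    """Readiness-style figure of merit: probability that no item in the
    system is backordered (items treated as independent)."""
    availability = 1.0
    for failures_per_day, pipeline_days, spares in items:
        availability *= fill_probability(failures_per_day,
                                         pipeline_days, spares)
    return availability

# Illustrative 3-item subsystem: (failures/day, pipeline days, spares).
items = [(0.01, 30, 1), (0.02, 30, 2), (0.005, 30, 0)]
print(f"modeled operational availability: {system_availability(items):.2f}")
```

The contrast with demand-based sparing is visible here: allowances are chosen to maximize the system-level availability figure rather than to stock each part in proportion to its historical failure rate.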
For example, by using the readiness-based sparing process, Navy officials anticipate that the operational availability of the Close-In Weapons System will improve from 45 percent under the demand-based approach to 87 percent under the readiness-based allowance model, and that of the AEGIS system from 24 percent to 91 percent. The Navy has analyzed how additional wholesale supply funding would affect the availability of spare parts as well as equipment readiness rates, and has determined that an additional investment of $1.2 billion would be necessary to support readiness objectives established by the Chief of Naval Operations. However, the Navy did not ask for this funding as part of its fiscal year 2004 budget request, nor did its budget estimates link planned spending to individual weapon system readiness, as recommended by the Office of the Secretary of Defense in an August 2002 study. DOD has an 85 percent supply availability goal, which means that 85 percent of the requisitions sent to wholesale supply system managers can be immediately filled from on-hand inventories. Navy supply system models are focused on achieving this goal in the aggregate. However, the Navy’s overall wholesale supply system performance has fallen short of expectations in each of the last 3 fiscal years for both aviation- and ship-related repairable spare parts. Supply availability ranged between approximately 69 percent and 71 percent for aviation-related items, and between 79 percent and 84 percent for ship-related parts. Navy officials commented that they have had difficulty achieving the desired 85-percent goal for aviation parts due to a number of reasons, including increased demand stemming from aging weapon systems and accelerated operational requirements.
The Navy has estimated that an extra investment in the working capital fund of approximately $1.2 billion would increase aviation- and ship-related spare parts inventories to levels that support current readiness standards. According to a recent study conducted by the Naval Supply Systems Command, constraints in repair pipeline requirement models accounted for a 6 to 8 percent decrease in supply availability for aviation parts, which equated to an estimated 5 to 6 percent decline in fully mission capable rates for naval aircraft. This study concluded that a working capital fund investment of $225 million would remedy wholesale inventory deficiencies resulting from inaccurate requirements models, and that another $688.5 million would prevent further decline in supply availability of aviation spare parts resulting from constraints that prevent the working capital fund from procuring new inventory requirements driven by increased demand. Furthermore, the study calculated that an additional $300 million investment would be required to increase supply availability across all inventory segments to 85 percent. In its budget estimate submitted to Congress in February 2003, however, the Navy did not ask for additional investment in the working capital fund to meet the supply availability and aviation readiness rates described above. At present, it is unclear whether the Navy will choose to request funding for these requirements in later years. In its fiscal year 2004 budget exhibits, the Navy linked its planned working capital fund expenditures to aggregate spare parts availability, but not to mission capable supply rates or other readiness rates for individual weapon systems. The benefit of such a link was cited in an August 2002 study by the Office of the Secretary of Defense, which recommended that service requests for funds for spare parts inventories be linked to specific weapon system readiness.
The service did provide aggregate ship and aviation readiness information to the Office of the Secretary of Defense. However, Navy officials said that the service cannot directly link spare parts funding and readiness data by budget category until better information technology becomes available. Without information that links funding to readiness, the Navy’s budget package does not provide Congress the return on readiness investment information it may need to make resource decisions. Since 1990, we have repeatedly reported that DOD’s inventory management practices are high risk. In our 2003 High Risk Series Report we recommended that DOD take action to address key spare parts shortages as part of a long-range strategic vision and a departmentwide, coordinated approach to logistics management. However, our work shows that the Navy currently lacks a servicewide strategic logistics plan and supporting plans that include a specific focus on mitigating critical spare parts shortages. In addition, the Navy’s current key logistics initiatives to improve the efficiency of supply and inventory management practices do not include a specific focus on mitigating these shortages. Instead, these initiatives address many underlying issues, such as reducing customer wait time, increasing asset visibility, improving the management of items turned in for repair, and increasing the reliability of repair parts. Without a focus on mitigating spare parts shortages, the Navy lacks a coordinated approach, with attributes of an effective plan, such as goals, objectives, and performance measures, to systematically address the shortages and assess progress in mitigating them.
The ongoing development of the Sea Enterprise plan and imminent update of the Naval Supply Systems Command Strategic Plan provide an opportunity to include this focus and provide the coordination needed to ensure that the Navy’s key logistics initiatives we reviewed can achieve their maximum financial and readiness benefits. Lastly, without information that links spare parts funding to individual weapon system readiness and provides assurance that investments in spare parts are based on the greatest readiness returns, such as that recommended in the August 2002 Inventory Management Study, Congress and other decision makers cannot determine how best to prioritize and allocate future funding. We recommend that the Secretary of Defense direct the Secretary of the Navy to (1) develop, as a part of either the Navy Sea Enterprise strategy or the Naval Supply Systems Command Strategic Plan, a framework for mitigating critical spare parts shortages that includes long-term goals; measurable, outcome-related objectives; implementation goals; and performance measures, which will provide a basis for management to assess the extent to which ongoing and planned initiatives will contribute to mitigating these shortages; and (2) implement the Office of the Secretary of Defense’s recommendation to report, as part of budget requests, the impact of funding on individual weapon system readiness, with a specific milestone for completion. In written comments on a draft of this report, DOD generally concurred with the intent of both recommendations, but not the specific actions. DOD’s written comments are reprinted in their entirety in appendix I.
In concurring with the intent of our first recommendation, DOD expressed concern that because spare parts shortages are a symptom of higher-level problems, including the need for more reliable spare parts and more effective life cycle support processes, its management improvement plans must focus on improving the processes, rather than on the symptoms. According to DOD, the Naval Supply Systems Command’s current strategic plan is, and planned revisions will be, focused on improving the Navy’s overall supply support processes to ensure that its naval forces have sufficient support to achieve required readiness performance levels. Therefore, DOD does not agree that the Navy needs to modify the Naval Supply Systems Command Strategic Plan or include provisions in the evolving Sea Enterprise strategy that are specifically focused on spare parts shortages. DOD stated that the Navy’s process improvement initiatives are intended to reduce the need for spare parts through the use of more effective inventory management practices aboard ship, standardizing the use of readiness based sparing concepts on board ship and at shore facilities, and developing an effective total asset visibility plan. DOD believes that these efforts will improve the efficiency and effectiveness of the Navy’s supply system and inherently minimize any future shortages of critical spare parts. We disagree that these process improvements alone are sufficient to meet our recommendation. Our report recognizes that the Navy’s logistics plans focus on efforts to improve overall logistics support practices, and that, upon successful implementation, they will likely contribute to improved supply availability. Based on our report’s findings, however, we believe that the goals, objectives and milestones of the Naval Supply Systems Command’s strategic plans, or the higher-level Sea Enterprise plan, should include a focus on the mitigation of critical spare parts shortages.
Without such a focus the Navy’s efforts to address the problem of critical spare parts shortages are more likely to be duplicative or ineffective. Therefore, we believe implementation of our recommended actions is necessary to ensure improved equipment readiness for the Navy’s legacy and future weapon systems. In concurring with the intent of our second recommendation, DOD stated that the Navy is investing in information systems to help it link inventory investment decisions with weapon system readiness. DOD stated that the Navy will provide information to link weapon system readiness and inventory investments for its major weapon systems as information becomes available. Because the Financial Management Regulation already requires the Navy to submit this information as part of its annual budget submission, DOD stated that more specific direction from DOD is not necessary, and that current Navy actions satisfy the intent of our recommendation. We support the Navy’s actions, but remain concerned that the service has not specified milestones for developing information systems that link readiness and spare parts budget data. Providing this information in a timely manner will strengthen the Navy’s stewardship and accountability of requested funds, and will assist the Congress in making spare parts investment decisions that provide a good readiness return. We have therefore modified our second recommendation to include a provision that the Navy establish completion milestones for implementing the reporting requirement, as discussed above. To determine if the Navy’s strategic plans address spare parts shortages, we obtained and analyzed pertinent spare parts and logistics planning documents. We focused our analysis on whether these strategic plans addressed spare parts shortages and included the performance plan guidelines identified in the Government Performance and Results Act. 
We interviewed officials in the Office of the Deputy Chief of Naval Operations for Fleet Readiness and Logistics and in the Naval Supply Systems Command to clarify the content, status, and linkage of the various strategic plans. To determine the likelihood that key supply system initiatives will mitigate critical spare parts shortages and improve weapon system readiness, we obtained and analyzed service documentation on six of the initiatives that Navy officials believe are key to the future economy and efficiency of the service’s supply operations. We interviewed officials in the office of the Deputy Chief of Naval Operations, the Naval Supply Systems Command, the Naval Inventory Control Point, the Naval Air Systems Command, and the Naval Sea Systems Command. We obtained and analyzed Navy data pertaining to plans, objectives, performance goals, and implementation status and challenges for each of the six selected management initiatives. To determine the extent to which the Navy can identify the impact of additional investments in spare parts inventories, we interviewed officials and analyzed documents at the Naval Inventory Control Point. We also reviewed the Navy’s fiscal years 2004 and 2005 budget estimates provided to the Congress in February 2003, and considered DOD’s recommendations in its August 2002 Inventory Management Study. However, we did not independently validate or verify the accuracy of the Navy’s supply availability performance data or the analysis that estimated the increased funding needed to achieve the targeted supply system performance. We performed our review from August 2002 through March 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretary of Defense; the Secretary of the Navy; the Director, Office of Management and Budget; and other interested congressional committees and parties. We will also make copies available to others upon request. 
In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8365 or Richard Payne at (757) 552-8119 if you or your staff have any questions concerning this report. Key contributors to this report were Glenn Knoepfle, Paul Rades, Barry Shillito, George Surosky, and Susan Woodward.
Since 1990, GAO has identified DOD inventory management as high risk because of long-standing management weaknesses. In fiscal years 2001 and 2002, Congress provided the Navy with more than $8 billion in operations and maintenance funds to purchase spare parts in support of the service's operations. Nevertheless, spare parts availability has fallen short of the Navy's goals in recent years. GAO examined the extent to which Navy strategic plans address mitigation of critical spare parts shortages, the likelihood that key supply system improvement initiatives will help mitigate spare parts shortages and enhance readiness, and the Navy's ability to identify the impact on readiness of increased spare parts investments. The Navy's servicewide strategic plan does not specifically address means to mitigate critical spare parts shortages. Its 2001 plan contained strategic goals, objectives, and performance measures, but the service did not use it to systematically manage implementation of logistics reform initiatives. The Navy is developing a new logistics strategic plan, but this document has not yet been published. Consequently, the service presently lacks an effective top-level plan that integrates a specific focus on mitigating spare parts shortages into its logistics transformation initiatives. Without such a plan, the Navy lacks guidance necessary to ensure its logistics initiatives mitigate critical spare parts shortages. GAO examined six of the key initiatives that the Navy has undertaken to improve the economy and efficiency of its supply system. While some of these initiatives have increased availability of select spare parts, GAO cannot determine their potential to mitigate critical spare parts shortages because they were not designed specifically to remedy this problem. For example, the Performance Based Logistics initiative aims to improve supply support at equal or lower cost by outsourcing a broad range of services. 
Though the initiative has increased availability of certain items, GAO could not measure the extent to which Performance Based Logistics contracts have mitigated critical spare parts shortages. The Navy has determined that an additional investment of $1.2 billion would be necessary to achieve supply availability levels that support the service's readiness objectives. However, the Navy did not ask for this funding in its fiscal year 2004 budget request, nor did it report linkages between resource levels and readiness rates for individual weapon systems, as recommended by the Office of the Secretary of Defense in 2002. The Navy did provide aggregate readiness data to the Office of the Secretary of Defense, but officials stated that they lacked information technology necessary to link readiness rates by weapon system to budget categories. DOD has an 85 percent supply availability goal, which means that 85 percent of the requisitions sent to wholesale supply system managers can be immediately filled from on-hand inventories. Navy supply system models are focused on achieving this goal in the aggregate. However, the Navy's overall wholesale supply system performance has fallen short of expectations in each of the last 3 fiscal years for both aviation- and ship-related repairable spare parts. Supply availability ranged between approximately 69 percent and 71 percent for aviation-related items, and between 79 percent and 84 percent for ship-related parts.
Iraq is ethnically, religiously, and linguistically diverse. Ethnically, Arabs comprise about 75 percent of the population of Iraq, with Kurds comprising around 15 percent and other ethnic groups, such as Turkoman and Assyrians, comprising the remaining 10 percent. Religiously, Shi’a and Sunni Muslims make up 97 percent of the population of Iraq, with non-Muslim groups—such as Baha’i, Christians, Sabean Mandaeans, and Yazidis—comprising the remaining 3 percent of the population. Some communities may be an ethnic majority but a religious minority (such as Arab Christians), while other communities may be an ethnic minority but a religious majority (such as Shi’a Shabaks). For the purpose of this report, we refer to the following religious and ethnic communities as minority groups: Anglican, Armenian, Assyrian, Baptist, Chaldean, Coptic, Greek Orthodox, Latin Catholic, Presbyterian, Sabean Mandaean, Shabak, Syriac, Turkoman, and Yazidi. Since 2003, Iraq’s minority groups have experienced religiously and ethnically motivated intimidation, arbitrary detention, killings, abductions, and forced displacements, as well as attacks on holy sites and religious leaders. In August 2007, coordinated truck bombings killed some 400 Yazidis and wounded more than 700. In August 2009, a series of attacks in Ninewa province killed almost 100 and injured more than 400 from the Yazidi, Shabak, and Turkoman communities. In February 2008, a Chaldean archbishop was kidnapped and killed—the third senior Christian religious figure to be killed in the city of Mosul since 2006. A series of attacks against Christians occurred in 2010, including an attack in October on a Catholic church in Baghdad that left more than 50 dead and 60 wounded. As a result of such violence, a significant portion of minority groups has fled either to other parts of the country, becoming internally displaced persons, or to neighboring countries, becoming refugees. 
According to nongovernmental organizations, religious minorities make up a significant portion of those migrating from locations in southern Iraq to locations in northern Iraq, such as the Ninewa plain region. The International Organization for Migration reports that, in 2010, in the provinces of Dahuk, Erbil, and Ninewa, 49 percent, 24 percent, and 35 percent, respectively, of the internally displaced population were Christian. According to nongovernmental organizations, religious minority groups face increased marginalization and are less able to access public services or employment because of ethnic or religious prejudices. The United Nations reports that, between 2003 and 2005, 36 percent of the Iraqis seeking refugee status in Syria were Christian. In 2007, Iraq’s Ministry of Displacement and Migration estimated that nearly half of the minority communities had left the country. According to the U.S. Commission on International Religious Freedom, at least half of the Christians in Iraq have left the country since 2003. Further, the commission reports that since 2003 nearly 90 percent of the roughly 50,000-60,000 Sabean Mandaeans have either fled Iraq or been killed. Concern for Iraq’s minority groups led Congress to issue a series of directives beginning in June 2007 to provide assistance to these groups. These directives are as follows: 2008 directive: In December 2007, for fiscal year 2008, the House Committee on Appropriations directed that not less than $10 million of unobligated Economic Support Fund (ESF) account funds provided in prior fiscal years for Iraq should be used to assist religious minorities in the Ninewa plain region of Iraq. Further, the Committee directed that $2 million of such assistance should be provided for microfinance programs and $8 million for internally displaced families in the Ninewa plain region. 
2008 supplemental directive: In June 2008, the Explanatory Statement submitted by the Chairman of the Senate Committee on Appropriations explaining the fiscal year 2008 Supplemental Appropriations Act directed that up to $10 million of funds made available under various accounts, including the Migration and Refugee Assistance account, should be made available for programs to assist vulnerable Iraqi religious and ethnic minorities. Further, the Explanatory Statement directed that the Secretary of State should designate staff at the U.S. embassy in Baghdad to oversee and coordinate such assistance. 2010 directive: In December 2009, in the fiscal year 2010 Consolidated Appropriations Act, Congress directed that up to $10 million of ESF account funds should be made available to continue programs and activities to assist minority populations in Iraq, including religious groups in the Ninewa plain region. 2012 directive: In September 2011, the Senate Appropriations Committee report accompanying the fiscal year 2012 appropriations for the Department of State, Foreign Operations, and Related Programs directed the Secretary of State to submit a report detailing U.S. efforts to help ethno-religious minority communities in Iraq, including assistance to build an indigenous community police force and to support nongovernmental organizations in the Ninewa plain region. As of November 2011, USAID and State reported to Congress that they had provided about $40 million in assistance for minority groups in Iraq in response to these directives. According to the agencies, USAID provided $14.8 million for the 2008 directive; USAID and State provided $10.4 million for the 2008 supplemental directive; and State provided $16.5 million for the 2010 directive. In its report to Congress, in response to the 2008 directive, USAID officials identified projects that they believed were in support of minority groups from six existing programs that were designed broadly to assist all Iraqis. 
However, our analysis of documents found that USAID could not demonstrate how it met the provisions of the 2008 directive because of three weaknesses. First, USAID documents—specifically, the list of projects the agency submitted to Congress—linked only 26 percent of the $14.8 million in assistance directly to the Ninewa plain region. Second, USAID documents generally did not show whether the projects included minority groups among the beneficiaries of the assistance and, specifically, whether $8 million of assistance was provided for internally displaced families. Third, USAID officials and documents did not demonstrate that the agency used unobligated prior year ESF funds to initiate projects in response to the 2008 directive. According to USAID officials, USAID identified projects from six existing programs that were designed broadly to assist all Iraqis. These six existing programs were implemented countrywide; funded many types of activities; and had broad goals related to stabilizing communities and developing agriculture, the economy, and essential services. Accompanying its report to Congress on the 2008 directive, USAID provided a list of 155 projects totaling $14.8 million of assistance to minority groups. USAID could not provide information on how the agency compiled the list of projects. Table 1 provides a description of the six programs and the reported amount of assistance provided in support of Iraq’s minority groups for the 2008 directive. The $14.8 million in assistance that USAID reported in response to the 2008 directive represented about 1 percent of the $1.5 billion in assistance provided through these six programs. Our analysis of USAID documents found that USAID could not demonstrate that it met the provisions of the 2008 directive because of three weaknesses. 
First, although USAID reported that it provided $14.8 million in assistance to minority groups through existing programs to meet the 2008 directive, its documents could link only $3.82 million (26 percent) of that amount to the Ninewa plain region. The documents linked $1.67 million (11 percent) of the assistance to areas outside of the Ninewa plain region. USAID documents did not provide sufficient detail to determine the location of the remaining $9.35 million (63 percent). Second, USAID documents generally did not show whether the projects included minority groups among the beneficiaries of the assistance and whether $8 million was provided specifically for internally displaced families. According to USAID officials, the agency generally did not track its beneficiaries by religious affiliation. For $14.7 million of the $14.8 million in assistance, USAID documents did not provide sufficient detail for us to determine that Iraqi minority groups were among the beneficiaries of all of the projects. Only 1 of the 155 projects ($66,707 out of $14.8 million) provided sufficient detail in its documents for us to determine that the assistance was directed to internally displaced families; however, the location of that project was outside of the Ninewa plain region. While USAID documents listed $2 million in funding for a microfinance institution, USAID officials were unable to provide detail on whether all of these loans were disbursed in the Ninewa plain region. Third, USAID officials and documents did not demonstrate that the agency used unobligated prior year ESF funds to initiate projects in response to the 2008 directive. USAID could document that the agency used unobligated prior year funds for two of the six programs after the date of the 2008 directive. However, according to USAID officials, the agency did not use unobligated prior year funds for the remaining four programs. 
According to USAID and State documents, the agencies approved $26.9 million in assistance for minority groups, primarily through the QRF program. Specifically, both agencies approved assistance totaling $10.3 million in response to the 2008 supplemental directive, and State approved $16.5 million of assistance in response to the 2010 directive. For the 2008 supplemental directive, the agencies approved assistance in support of minority groups in four provinces. For the 2010 directive, State approved assistance in eight provinces. At least $4.8 million of this assistance was linked to the locations mentioned in the directive. USAID and State approved 36 projects in response to the 2008 supplemental directive and 90 projects in response to the 2010 directive. QRF projects utilized four funding mechanisms: micro-grants, micro-purchases, grants, and direct procurements. Micro-purchases and micro-grants were used for projects costing up to $25,000; grants and direct procurements were used for projects costing over $25,000. Projects included procuring hospital equipment, paving roads, and constructing water lines, among others, and fell into four major categories (see table 2 below). Reported projects varied in cost and scope, ranging from about $2,000 to $1.6 million. For example, in response to the 2008 supplemental directive, USAID reported that it initially approved $1.3 million to assist in the reconstruction of a village that suffered significant damage from a coordinated car-bomb attack. In response to the 2010 directive, State reported that it initially approved $458,000 for a project in a municipality that had an influx of internally displaced Christians. According to State officials, the final disbursed amount of assistance likely will be lower than the amount of assistance initially approved and reported to Congress because many projects cost less than the initial approved estimate. 
State officials told us that they completed reconciling project disbursed amounts for the QRF program in early March 2012. According to these officials, the final disbursed amount was about the same as the approved amount for the 2008 supplemental directive and $420,000 less than the approved amount for the 2010 directive. State met the provision of the 2008 supplemental directive to designate staff at the U.S. embassy in Baghdad to oversee and coordinate assistance to minority groups. In 2008, the U.S. embassy in Baghdad announced the appointment of a special coordinator for minority issues and has since appointed only senior staff to that position, which is evidence—according to State officials—of State’s prioritization of assistance in support of minority groups. The current special coordinator, who is an ambassador as well as the Assistant Chief of Mission for Assistance Transition in Iraq, told us that he conducts outreach to Iraq’s minority communities, including religious leaders and members of the Iraqi diaspora in the United States. In addition, he said that State organizes dialogues and meetings for Iraqi religious minority group leaders in an effort to improve connections and interactions among Christian minority communities in Iraq. Moreover, in January 2011, the U.S. embassy in Baghdad established a working group for minority issues to further coordinate interagency efforts and outreach to minority communities. This working group, led by the special coordinator, meets on a monthly basis and includes representatives from State, USAID, and the Departments of Justice and Defense. According to U.S. embassy officials, the special coordinator intends to continue to coordinate the U.S. embassy’s efforts in support of minority groups during fiscal year 2012. 
USAID and State could generally demonstrate how they met the 2008 supplemental and 2010 directives through their use of the QRF program, which served as the primary mechanism for the agencies to categorize, track, monitor, and report on minority directive projects, among other project types. Specifically, the agencies took the following five steps to provide assistance that supported minority groups through the QRF program: Made minority directive projects one of the goals of the QRF program. As directed by the U.S. embassy in Baghdad, the Office of Provincial Affairs made support of minority groups one of the thematic goals of the QRF program in 2008. Thus, USAID and State initiated new projects through the PRTs in support of this goal at that time. State established the QRF program in 2007 to enable PRTs in Iraq to support local entities through short-term projects to fill gaps that were not funded through existing programs. QRF projects under $25,000 were implemented by PRTs, and projects over $25,000 were implemented mostly by USAID or State’s implementing partner. Categorized projects. USAID and State officials categorized projects in the agencies’ respective QRF program databases by thematic goal. The agency officials categorized projects in support of minority groups as “Minority Directive” upon initiation, which allowed them to track these projects for reporting purposes. For the 14 projects that we spot-checked, the agencies were able to provide supporting documents from their databases that included information about the projects and showed that projects were categorized as “Minority Directive.” Further, as a result of categorizing projects, the agencies were able to produce lists that reported the amount of assistance approved in support of minority groups for each directive. These lists also showed that numerous minority groups were beneficiaries. Conducted outreach to identify potential beneficiaries. 
To inform potential beneficiaries of the availability of assistance through the QRF program, PRTs and the special coordinator for minority issues in Baghdad conducted informal outreach to community members, religious leaders, elected officials, and civil society groups. For example, the special coordinator met with minority group leaders to discuss funding needs for projects, such as promoting private investment opportunities. In addition, the agencies identified potential beneficiaries through existing U.S. military and USAID relationships with Iraqi officials and organizations. Conducted final site visits and prepared close-out reports. According to USAID and State officials, PRTs or the QRF program implementing partner conducted final site visits and prepared project close-out reports. We found that the implementing partner prepared close-out reports for all 14 projects that we spot-checked. In the close-out reports, the implementing partner reported on whether the grant objectives were met and whether the grantee met all of its responsibilities and reporting requirements, among others. In addition, the implementing partner received a final report from the grant recipient that included information on the project’s impact and on its beneficiaries. Because the security situation hindered the agencies’ ability to independently verify the implementing partner’s reports, both agencies relied on American and local PRT staff and, in some cases, the U.S. military to verify the implementing partner’s reports through photographs and site visits. However, according to USAID and State officials, PRTs did not always complete or document site visits for all projects. State officials said that site visits by U.S. government personnel could compromise the security of project sites and Iraqi recipients. Conducted third-party assessments. Both USAID and State conducted third-party assessments at the close of their respective components of the QRF program. 
Completed in 2010, the USAID evaluation concluded that the information reported by the implementing partner was valid and recipients received the equipment that was agreed upon in the grant agreement. As of February 2012, State had not finalized its third-party QRF program evaluation. During our fieldwork in Iraq, Iraqi recipients told us that assistance reached their communities. The QRF program—which served, among other things, as USAID’s and State’s primary mechanism to provide, categorize, track, monitor, and report on assistance to minority groups in Iraq from 2008 to 2011—ended in December 2011. Further, PRTs—which helped identify and monitor QRF projects—ceased operations during the drawdown of U.S. forces from Iraq during 2011. U.S. forces completely withdrew from Iraq in December 2011. According to USAID and State officials, the two agencies have continued to assist Iraq’s minority groups through the obligation of an additional $28 million in reprogrammed ESF funds from prior fiscal years. USAID officials told us they obligated $18 million through a program for microfinance loans to members of minority groups in September 2011. State officials told us that they obligated $10 million, in September 2011, to support one project outside of Baghdad. According to officials from both agencies, they have mechanisms in place to categorize, track, monitor, and report on assistance to minority groups. According to State officials, State intends to continue providing assistance for minority groups in Iraq in fiscal year 2012. However, the officials could not discuss State’s plans for providing assistance because, as of March 20, 2012, State had not yet determined its funding allocations for Iraq for fiscal year 2012. We provided drafts of this report to State and USAID. Both agencies submitted technical comments on the draft that were incorporated, as appropriate. State did not submit an agency comment letter in response to the draft. 
In its agency comment letter, USAID remarked that despite GAO’s findings, USAID met the needs of internally displaced persons and religious minorities to a greater extent than is presented in this report (see app. II). However, USAID did not provide additional documentation to support its statement. We continue to believe that USAID could not demonstrate how its reported assistance met the provisions of the 2008 directive. We are sending copies of this report to interested congressional committees, the Secretary of State, and the Administrator of USAID. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Our objectives were to examine the extent to which (1) the U.S. Agency for International Development (USAID) demonstrated that the assistance it reported to Congress met the 2008 directive and (2) USAID and the Department of State (State) demonstrated that the assistance they reported to Congress met the 2008 supplemental and 2010 directives. This report is a publicly releasable version of a prior GAO report, issued in May 2012, that State and USAID had designated Sensitive But Unclassified. To address the first objective, we reviewed the provisions of the 2008 directive and analyzed USAID’s report to Congress and a list of projects summarizing the reported amount of assistance provided in response to the 2008 directive. To determine (1) the amount of assistance that USAID provided in the Ninewa plain region and (2) if minority groups were among the beneficiaries, we analyzed the list of projects and project descriptions to identify locations where possible and beneficiaries where identified. 
We also reviewed program documents, including program evaluations and contracts. We interviewed USAID officials in Washington, D.C., and Iraq, as well as former USAID-Iraq program managers in Washington, D.C., and via teleconference in Cairo, Egypt. To address the second objective, we analyzed (1) the provisions of the 2008 supplemental and 2010 directives; (2) State’s report to Congress summarizing assistance in response to the 2008 supplemental and 2010 directives; and (3) USAID and State’s project lists. The project lists for the 2008 supplemental and the 2010 directives included information such as the project name, grant identification number, project description, project location, minority group served, and the initial approved estimate of each project’s cost. We asked State to provide us with project lists that included the recipient’s name. State officials told us that they could not provide us with this information due to security concerns. However, we determined that the project lists were sufficiently reliable for our purposes by interviewing agency officials in Washington, D.C., and reviewing the QRF database, in Iraq, that was used to create the lists. To address the second objective, we also (1) interviewed USAID and State officials in Washington, D.C., and Iraq; (2) conducted a spot-check of project documents, such as proposals and close-out reports; and (3) conducted fieldwork in Baghdad and Erbil, Iraq, in October 2011. For the spot-check, we judgmentally selected 14 of the 126 projects to include a cross-section of characteristics such as year (2008 or 2010), funding amount, and type of project (i.e., procurement, training, etc.). We also interviewed USAID and State officials in Washington, D.C., and Iraq (including former Provincial Reconstruction Team staff); USAID and State’s implementing partner; and Iraqi recipients of assistance, such as officials of religious and nongovernmental organizations. 
During our fieldwork, we met with 14 Iraqi recipients of assistance who received funding for 28 of the 126 projects and collectively represented about 30 percent of the $26.9 million in assistance provided in response to the 2008 supplemental and 2010 directives. However, their views are not generalizable to all recipients of this assistance. Due to security constraints, we were able to visit only one project site in Iraq, which is located in Baghdad. This site received one of the largest amounts of funds for the 2008 supplemental and 2010 directives. We conducted this performance audit from June 2011 to July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Key contributors to this report include Judith McCloskey (Assistant Director), Jenna Beveridge, Lisa McMillen, and Sushmita Srikanth. In addition, Debbie Chung, Martin De Alteriis, Etana Finkler, Mary Moutsos, and Michael Rohrback provided technical assistance.
Since 2003, minority groups in Iraq have experienced religiously and ethnically motivated attacks, killings, and forced displacements. Concern for Iraqi religious and ethnic minorities led various congressional committees and Congress as a whole to issue a series of directives to provide assistance to these groups. The 2008 directive directed that $10 million of unobligated ESF funds from prior years be provided to assist religious minorities in the Ninewa plain region of Iraq. The 2008 supplemental and 2010 directives directed that up to $10 million be provided to assist religious and ethnic minority groups in Iraq for each directive. USAID and State reported to Congress that they met the provisions of these three directives by providing $40 million in assistance to Iraqi minority groups. Congressional requesters asked GAO to examine the extent to which (1) USAID demonstrated that the assistance it reported to Congress met the 2008 directive and (2) USAID and State demonstrated that the assistance they reported to Congress met the 2008 supplemental and 2010 directives. To address these objectives, GAO analyzed documents and interviewed officials from State and USAID in Washington, D.C., and Iraq. This report is a public version of a Sensitive But Unclassified report issued in May 2012. GAO is not making recommendations. Both agencies provided technical comments on the draft that were incorporated, as appropriate. State did not submit an agency comment letter in response to the draft. In its agency comment letter, USAID stated that it met minority groups’ needs to a greater extent than is presented in the report. GAO continues to believe that USAID could not demonstrate how its reported assistance met the provisions of the 2008 directive. GAO found that the United States Agency for International Development (USAID) could not demonstrate how the projects that it reported to Congress met the provisions of the 2008 directive because of three weaknesses. 
First, USAID documents—specifically, the list of projects the agency submitted to Congress— linked only $3.8 million of the $14.8 million in assistance (26 percent) directly to the Ninewa plain region. Second, USAID documents generally did not show whether the projects included minority groups among the beneficiaries of the assistance and specifically whether $8 million of assistance was provided for internally displaced families. Third, USAID officials and documents did not demonstrate that the agency used unobligated prior year Economic Support Fund (ESF) funds to initiate projects in response to the 2008 directive. USAID and the Department of State (State) generally could demonstrate how they met the 2008 supplemental and 2010 directives. According to USAID and State documents, the agencies approved $26.9 million in assistance—primarily in essential services and humanitarian assistance—to meet the 2008 supplemental and 2010 directives’ provisions to spend up to $10 million for each directive to assist religious and ethnic minority groups in Iraq (see figure below). In addition, as directed by Congress, the U.S. embassy in Baghdad designated staff at the embassy to oversee and coordinate assistance to minority groups in 2008. Using the Quick Response Fund (QRF) program, USAID and State took five steps that generally demonstrated how they met the 2008 supplemental and 2010 directives. First, the U.S. embassy in Baghdad directed that support of minority groups be made one of the thematic goals of the QRF program in 2008. Second, USAID and State categorized projects in their respective QRF databases by thematic goal. Third, the U.S. embassy in Baghdad and its Provincial Reconstruction Teams (PRTs) conducted outreach to inform potential beneficiaries of the availability of assistance through the QRF program. Fourth, PRTs or the QRF implementing partner conducted final site visits and prepared project close-out reports. 
Fifth, both USAID and State conducted third-party assessments at the close of their respective components of the QRF program. The QRF program closed and the PRTs ceased their operations by the end of 2011, as planned. According to USAID and State officials, the two agencies continue to assist minority groups through the obligation of an additional $28 million in reprogrammed ESF funds from previous years.
Globally, aquaculture production has grown significantly over the past 50 years, from less than 1.1 million tons around 1950 to about 65.5 million tons in 2004. A majority of global aquaculture fish and shellfish are raised in a freshwater environment, and species raised in a marine environment make up about 36 percent of aquaculture production. Marine aquaculture is dominated by high-value fish, such as salmon. Many countries are producing marine fish, though a NOAA official indicated that most production is occurring in shallow, sheltered areas relatively close to shore. A few countries, such as Ireland, have expressed interest in or are developing policy frameworks to regulate offshore aquaculture in the open ocean. However, a NOAA official said that, to date, no countries have substantial offshore aquaculture industries with facilities sited in open-ocean environments. The United States’ aquaculture industry includes both onshore and nearshore operations and produces both fish, such as salmon and catfish, and shellfish, such as oysters. Onshore aquaculture facilities are primarily involved in raising freshwater species, such as catfish. Marine aquaculture facilities in the United States are generally located in waters close to shore and in sheltered conditions, and they most frequently raise oysters, mussels, clams, and salmon. The salmon aquaculture industry in the United States is concentrated in Maine and Washington, although the industry is relatively small compared with the global salmon aquaculture industry, accounting for less than 1 percent of the world’s production. During the last 10 years, four small-scale aquaculture facilities began nearshore open-ocean operations in Hawaii, Puerto Rico, and New Hampshire, in conditions similar to those found offshore. All four facilities grow fish species native to their regions, such as moi and kahala in Hawaii, cobia in Puerto Rico, and cod and halibut in New Hampshire. 
The New Hampshire project also grows mussels. These open-ocean facilities and similar facilities that may be established in an offshore environment require technology that differs from what is generally needed by nearshore facilities. For example, open-ocean facilities need stronger cages and anchors that can withstand the strong currents and storms that are prevalent offshore. Furthermore, offshore aquaculture will face challenges such as inclement weather, which may prevent offshore aquaculturists from accessing cages due to their location far from shore and could delay essential activities such as feeding. However, there are concerns that offshore aquaculture may have adverse environmental impacts. Specifically, excess nutrients or chemicals from fish food, medication, and fish waste may alter water quality and may also change the composition of the benthic community. Although the environmental impact of an offshore aquaculture industry is uncertain because of a lack of data specific to large-scale, offshore aquaculture operations, data from existing small-scale, open-ocean facilities in state waters provide some information about this kind of impact. Studies of one small-scale commercial facility in Hawaii show that some water quality changes occurred near the aquaculture cages, but that these changes were within the allowable limits of the facility’s National Pollutant Discharge Elimination System (NPDES) permit. Also, the data from the site indicated a slight change in the benthic community, but researchers noted that it returned to its original composition after the cages were not used for 6 months. Studies of other open-ocean sites in state or territorial waters found little to no impact on water quality or the benthic community. Multiple federal agencies, including NOAA, the Corps, EPA, and USDA, have regulatory authorities relevant to various aspects of offshore aquaculture operations (see table 1). 
In addition to the responsibilities described in table 1, NOAA’s Aquaculture Program coordinates the agency’s aquaculture research activities and conducts outreach and industry development efforts, such as sponsoring the 2007 National Marine Aquaculture Summit. Similarly, USDA chairs the interagency Joint Subcommittee on Aquaculture, which, among other things, is creating a federal plan for managing aquatic animal health and has convened a science and technology task force to update the federal strategic plan for aquaculture research. In addition to agency-specific responsibilities and authorities, all federal agencies are required to comply with the National Environmental Policy Act (NEPA). Under NEPA, agencies evaluate the likely environmental effects of projects that could significantly affect the environment. For example, permits for aquaculture facilities or oil platforms might necessitate such a review. An agency may also elect to prepare a programmatic environmental impact statement (PEIS). A PEIS could be prepared either to help develop regulations for an industry by evaluating its potential for environmental, social, and economic impacts or to evaluate proposed actions sharing geographic and programmatic similarities after regulations have been established, such as siting a number of aquaculture facilities in the same general location that plan to raise the same species. If an offshore aquaculture industry develops, a variety of individuals and organizations will have a stake in how the industry is regulated and how it affects the environment. Specifically, federal agencies would be stakeholders because they would regulate the offshore aquaculture industry, or guide and fund public research on offshore aquaculture. Coastal states would be stakeholders because an offshore aquaculture industry could potentially have impacts on natural resources in their state waters and provide economic benefits to coastal communities. 
The commercial fishing industry would be a stakeholder both because it may have to share ocean space with aquaculturists and because the offshore aquaculture industry could affect the environment that supports wild fish populations. The aquaculture industry would be a stakeholder because it is interested in developing offshore facilities. Environmental groups would be stakeholders because they are interested in protecting marine resources, and the offshore aquaculture industry could affect those resources. Finally, researchers would be stakeholders because they are technical experts and want to ensure proper application of scientific knowledge. Over the last 5 years, four key studies have been conducted with stakeholder input that examined, among other things, potential regulatory frameworks for offshore aquaculture. These four key studies are as follows: The Marine Aquaculture Task Force study was developed by a group of scientists, legal scholars, aquaculturists, and policy experts who sought to gather information about aquaculture and its positive and negative effects. The Marine Aquaculture Task Force’s approach to gathering such information included meeting with aquaculturists, marine scientists, fishermen, public officials, and others in regional meetings in the states of Alaska, Florida, Hawaii, Massachusetts, and Washington. The University of Delaware study was prepared by an interdisciplinary team with backgrounds in marine policy, law, industry, state government, environmental protection, and marine science. This study made recommendations for developing a comprehensive regulatory framework for sustainable offshore aquaculture in the United States based on information from literature reviews and consultations with stakeholders through national and regional workshops throughout the United States. 
The Pew Oceans Commission study was developed by a bipartisan, independent group to identify policies and practices necessary to restore and protect living marine resources in U.S. waters and the ocean and coastal habitats on which they depend. The Pew Commission brought together a diverse group of American leaders from the worlds of science, fishing, conservation, government, education, business, and philanthropy. The Pew Commission conducted a national dialogue on ocean issues by convening a series of 15 regional meetings, public hearings, and workshops to listen to those who live and work along the coasts. The U.S. Commission on Ocean Policy study, which was required by the Oceans Act of 2000, established findings and developed recommendations for a coordinated and comprehensive national ocean policy. The U.S. Commission had 16 members drawn from diverse backgrounds, including individuals nominated by the leadership in the United States Senate and House of Representatives. The U.S. Commission held 16 public meetings around the country and conducted 18 regional site visits, receiving testimony from hundreds of people. The study includes detailed recommendations for reform of oceans policy. A wide array of issues within four key areas—program administration, permitting and site selection, environmental management, and research—are important to consider when developing an offshore aquaculture program for the United States. Specifically, identifying a lead federal agency, as well as the roles and responsibilities of other federal agencies and states, is key to the administration of an offshore aquaculture program. In addition, permits or leases are important to establish the terms and conditions for offshore aquaculture operations. Site selection is also an important component of regulating offshore aquaculture. 
Moreover, reviewing environmental impacts of, and monitoring environmental conditions at, offshore aquaculture facilities are key to identifying the scope and nature of potential environmental issues that may require mitigation. Finally, it is important that a regulatory framework include research to address gaps in current knowledge on a variety of issues related to offshore aquaculture. Stakeholders whom we contacted generally agreed on how to address some specific issues within each of the four key areas but differed on many other issues. Aquaculture stakeholders that we contacted and key studies that we reviewed identified specific roles and responsibilities for federal agencies, states, and regional fishery management councils. Specifically, most stakeholders and all four studies we reviewed agreed that NOAA should be the lead federal agency for offshore aquaculture and emphasized that coordination with other federal agencies will be important. Moreover, the majority of stakeholders we contacted said NOAA should be the lead agency for research on offshore aquaculture, although stakeholders were evenly divided about whether NOAA or USDA should be responsible for promoting or supporting the offshore aquaculture industry. In addition, stakeholders and three of the key studies we reviewed recommended that states be involved in the development and implementation of a regulatory framework for offshore aquaculture. Stakeholders told us that states should have the ability to opt out of the offshore aquaculture program, but that those states that have chosen to participate should not have the ability to veto individual offshore aquaculture facility proposals. Finally, stakeholders generally supported regional fishery management councils having the opportunity to comment on individual offshore aquaculture facility proposals but did not support councils having other authorities, such as veto authority, over individual proposals. 
Most stakeholders that we contacted and the four key studies that we reviewed agreed that NOAA should be the lead federal agency for offshore aquaculture, both to manage a new permitting or leasing program for aquaculture in federal waters and to coordinate federal responsibilities for offshore aquaculture. About half of the stakeholders said they supported NOAA as the lead offshore aquaculture agency because of its experience managing ocean resources. One study, conducted by the University of Delaware, also stated that NOAA was the best choice for a lead agency because of its extensive expertise and knowledge of marine science and policy. However, a few stakeholders we spoke with who did not agree that NOAA should be the lead agency said that other agencies, such as USDA or the Corps, would be better equipped to serve as the lead agency. Two of the stakeholders who supported USDA explained that since aquaculture is ultimately an agricultural activity, USDA would be best able to effectively regulate the industry and coordinate with other agencies. One stakeholder, who supported the Corps as the lead agency, said that since the Corps is currently the de facto lead federal agency for aquaculture permitting in state waters, the Corps should also assume that role for offshore aquaculture in federal waters. Most stakeholders, and the University of Delaware study, stated that it was important for NOAA to develop formal agreements, such as regulations or memorandums of understanding, with other federal agencies to define the responsibilities, authorities, and procedures for regulating offshore aquaculture. Some stakeholders also suggested that close coordination with agencies will allow NOAA to draw on each agency’s expertise when developing regulations or making permitting decisions. 
For instance, one stakeholder said that EPA has expertise in protecting marine water quality in state waters, and the offshore aquaculture program could draw on that experience to protect water quality in federal waters. Another stakeholder suggested that since aquaculture is a food production business, close coordination with USDA could draw on USDA’s experience in developing food production industries. The administration’s 2007 legislative proposal for offshore aquaculture requires that the Department of Commerce consult with other federal agencies, as appropriate, while developing regulations for an offshore aquaculture program. Despite strong support for NOAA as the lead agency for offshore aquaculture, stakeholders were about evenly divided on whether those responsibilities should be assigned to a new NOAA office or an existing NOAA office. One stakeholder who supported creating a new office in NOAA said that existing offices currently focus on the conservation of marine resources and that aquaculture is a fundamentally different enterprise meriting a separate office that can focus on developing the aquaculture industry. The studies conducted by the University of Delaware and the U.S. Commission on Ocean Policy also suggested that a new office be created to manage the offshore aquaculture program. Of the stakeholders who said that an existing office should manage the offshore aquaculture program, a few mentioned that this would keep NOAA small and streamlined. A majority of stakeholders also said that NOAA should be responsible for managing federal research related to offshore aquaculture, including funding marine aquaculture research and the development of offshore aquaculture technologies. A few stakeholders emphasized that NOAA should coordinate on both research and technology development with other agencies, particularly USDA. 
Stakeholders who did not support NOAA as the lead agency for technology development generally supported USDA or said that the federal government should not support technology development at all. One stakeholder supported USDA because he said it has a superior record in developing aquaculture technology for both freshwater and marine aquaculture. Another stakeholder emphasized that he did not support government funding for offshore aquaculture technology development because funding should come from the aquaculture industry, particularly for any technologies needed to comply with environmental regulations. Stakeholders were also about evenly divided on whether NOAA or USDA should be responsible for promoting and supporting the offshore aquaculture industry, though a few stakeholders did not think this was a role for the federal government. One stakeholder who said that NOAA should promote the offshore aquaculture industry suggested that NOAA should restructure its mission to support the production of sustainable seafood from wild fisheries as well as from offshore aquaculture. Another stakeholder said that USDA is the logical choice to promote and support the offshore aquaculture industry because it has experience marketing agricultural products. In contrast, a few stakeholders said that promotion or support of the offshore aquaculture industry is not a role for the federal government. One stakeholder objected to government promotion of offshore aquaculture because it amounts to the government promoting one industry over another, for instance, promoting offshore aquaculture at the expense of other types of aquaculture, such as nearshore shellfish aquaculture. Finally, stakeholders expressed concern over having one agency, such as NOAA, be responsible for both regulating and promoting the offshore aquaculture industry because of the potential conflict of interest between those two responsibilities. 
One stakeholder suggested that NOAA regulate the industry and develop offshore aquaculture technologies and that USDA focus on promoting offshore aquaculture. In this context, at the state level, Maine, Hawaii, and Washington have each separated their regulatory and promotion agencies. Despite Hawaii’s and Maine’s separation of these responsibilities, officials from both states said that agencies have the ability to balance these competing responsibilities. In fact, one state official in Hawaii stated that keeping promotion and regulatory responsibilities together can allow officials to share expertise, thereby increasing efficiency and resulting in cost savings. A NOAA official said that NOAA’s mission is to enable marine aquaculture, with appropriate environmental safeguards, and that the agency has consistently balanced its missions of enabling and regulating other industries. Three of the key studies we reviewed recommended that states be involved in the development and implementation of a regulatory framework for offshore aquaculture. For instance, the U.S. Commission on Ocean Policy recommended that any proposed federal permitting and leasing program be coordinated with aquaculture-related regulations developed at the state level to provide regulatory consistency to the industry and manage potential environmental impacts that cross jurisdictional lines, such as the spread of disease. The administration’s 2007 legislative proposal for offshore aquaculture requires coordination with coastal states during the process of establishing regulations for offshore aquaculture. In addition, a majority of stakeholders supported a policy that would allow states to opt out of the offshore aquaculture program. If a state chose to opt out, it would be refusing to allow any offshore aquaculture to take place in the federal waters adjacent to its state waters. 
Of those who supported an opt-out provision, a majority said that states should be able to opt out of fish aquaculture anywhere in the 200 miles of federal waters directly offshore from their state waters. A few stakeholders stated that the opt-out provision should apply only within a certain distance from shore—ranging from 5 to 12 miles. The administration’s 2007 legislative proposal for offshore aquaculture includes a provision that would allow a state to opt out of offshore aquaculture within 12 miles of its coast. NOAA officials explained that the agency’s decision to limit the opt-out provision to 12 miles was a policy decision that balanced the need to give states a reasonable buffer zone and the difficulty of identifying boundaries between states out to 200 miles in the exclusive economic zone. For example, while it is relatively clear where the boundaries of Alaska’s state line would be when extended out to 200 miles, state boundaries on the New England coast overlap extensively, even relatively close to shore. Stakeholders who supported providing the states the ability to opt out did so for various reasons. A few stakeholders said they supported an opt-out provision because offshore aquaculture could still affect a state’s natural resources. For example, escaped fish could travel into state waters and spawn, potentially interbreeding with wild fish populations in state waters, which could reduce the ability of wild fish to survive. Three stakeholders said that this provision is necessary for political reasons—that without the ability for states to opt out, it would be difficult to garner enough support to enact offshore aquaculture legislation. Stakeholders who opposed the state opt-out provision also listed various reasons. 
A few stakeholders argued that states should not make decisions about the use of federal resources, and one stakeholder said that allowing states to opt out is contrary to a nationally stated goal of increasing domestic seafood production. Other stakeholders proposed more flexible opt-out policies. For instance, one stakeholder supported a policy that would allow states to selectively opt out of particular locations, rather than opting out of offshore aquaculture entirely. In addition, a few stakeholders mentioned using an “opt-in” policy, in which states would need to declare their support for offshore aquaculture before any facilities could be located in the waters adjacent to their coasts. Regardless of how the opt-out provision is applied, the majority of stakeholders agreed that states that participate in the offshore aquaculture program should not have the ability to veto individual offshore aquaculture projects. One stakeholder was concerned that, if states were allowed to veto individual offshore aquaculture projects, this would prevent offshore aquaculture development, since few businesses would be interested in investing time and money in obtaining federal approvals if a state could ultimately veto a federal decision. A few stakeholders who opposed veto authority for states explained that, since offshore aquaculture would be in waters under federal jurisdiction, states should not be allowed to overrule federal decisions. Stakeholders who supported giving states veto authority said that offshore aquaculture could affect states’ natural resources. For instance, disease could spread from fish in offshore facilities to fish in state waters, requiring state and federal regulators to coordinate closely to manage the disease. A few stakeholders, including NOAA, said that states could use the Coastal Zone Management Act—rather than veto authority—to challenge offshore aquaculture proposals. 
For instance, a state could determine that a proposed offshore aquaculture facility was inconsistent with the state’s coastal zone management plan. According to NOAA officials, a state could only make this determination if the proposed offshore aquaculture facility would clearly violate provisions of the state’s coastal zone management plan. In addition, one stakeholder was concerned that states would not be assured of preventing proposals they objected to, since the Secretary of Commerce has the authority to override states’ objections under certain circumstances. Finally, although the majority of stakeholders did not support veto authority for states participating in the program, most stakeholders said that states should have the opportunity to provide input regarding proposed offshore aquaculture facilities, such as comments on potential environmental impacts or proposed facility locations. Three of the key studies we reviewed also recommended that states have the opportunity to comment on proposed facilities. In particular, the Marine Aquaculture Task Force study said that federal agencies should use states’ comments on proposed facilities to ensure that permits issued for offshore aquaculture are integrated with regional marine planning efforts and do not undermine the effectiveness of ongoing state conservation measures. In its response to our questionnaire, NOAA agreed that adjacent states should have an opportunity to provide comments regarding proposed projects. Finally, stakeholders generally agreed on how regional fishery management councils should be involved in regulating offshore aquaculture. For instance, most stakeholders indicated that councils should have the opportunity to provide comments on proposed offshore aquaculture projects in their regions. Some stakeholders, including NOAA, emphasized that councils should comment on proposed projects to ensure that they will not adversely impact wild fisheries or fish habitat managed by the councils. 
The University of Delaware and Marine Aquaculture Task Force studies also supported allowing councils to review or comment on offshore aquaculture projects. Representatives from five of the six councils that we spoke with wanted the opportunity to comment on proposed offshore aquaculture projects. Most stakeholders also agreed that councils should not have veto authority for proposed projects within their regions. Some stakeholders did not support a veto for councils because they believed the councils are dominated by wild fishery interests and might veto projects simply to avoid any potential competition in their markets. In contrast, representatives from two councils wanted more direct authorities, such as the ability to approve or deny proposed offshore aquaculture projects. For example, a representative from the Western Pacific council said that councils should have this additional authority because councils are best positioned to address region-specific issues that may not be considered in a nationwide top-down permitting process. Most stakeholders also agreed that offshore aquaculture should not be subject to some of the regulations that are currently used to manage wild fisheries under fishery management plans, including restrictions on season of harvest, size of the fish that may be harvested, and the method that may be used to harvest fish. Because offshore aquaculture is considered fishing under the Magnuson-Stevens Fishery Conservation and Management Act, the councils could impose these types of restrictions on offshore aquaculture operations. According to NOAA, many offshore aquaculture tasks, such as stocking cages outside of fishing season and harvesting small fish, would be illegal under current regulations for species managed under fishery management plans. Therefore, the administration’s 2007 legislative proposal for offshore aquaculture would exempt offshore aquaculture facilities from fishing restrictions under current law. 
The University of Delaware study reached a similar conclusion, stating that offshore aquaculture facilities should be exempt from restrictions that apply to wild fisheries. About half of the stakeholders who agreed with this approach told us that offshore aquaculture is a completely different enterprise from fishing and does not result in an increase or decrease of the wild stocks managed by councils. One stakeholder suggested that subjecting offshore aquaculture facilities to catch restrictions for wild fisheries is like limiting poultry production to duck hunting season. Representatives from five of the six councils we interviewed also supported exempting offshore aquaculture facilities from catch restrictions placed on wild fisheries. However, a representative from the South Atlantic council was concerned that it is too soon to enact such an exemption since any escapes from offshore aquaculture facilities could impact wild fisheries. Permits or leases are important to establish the terms and conditions for offshore aquaculture operations, including authorizing aquaculture activities and providing the legal right to occupy an area of the ocean. In addition, developing a process to select appropriate sites was identified as an important component of planning for offshore aquaculture facilities, and most stakeholders supported a variety of approaches to approve aquaculture facility locations. Several existing federal permits—such as EPA’s NPDES permit for water quality and the Corps’ section 10 permit for structures in navigable waters—can regulate specific offshore aquaculture activities, such as the release of pollutants into, or the installation of structures in, U.S. waters. 
In addition, according to the University of Delaware study and stakeholders we talked to, offshore aquaculturists will need a legal right—through a permit or lease—to occupy a given area of the ocean. Some stakeholders identified this legal right as important for financing offshore aquaculture operations because it would have market value and, therefore, could be sold, or used as collateral on a loan to allow aquaculturists to secure funding for their projects. According to NOAA officials, however, permits are more appropriate than leases for aquaculture operations beyond the territorial sea, which extends 12 miles from the shore. Specifically, NOAA officials stated that, under customary international law, it is well established that the United States has exclusive rights to regulate economic activities, such as fishing and aquaculture, in the U.S. Exclusive Economic Zone, which generally extends from 3 to 200 miles from shore. While this jurisdiction and authority do not include any proprietary rights for waters or submerged lands beyond the territorial sea, NOAA officials stated that other types of permits issued by NOAA have provided the security of tenure—the right to occupy an area of the ocean—necessary for obtaining financing, or selling the permits. However, when questioned on the most appropriate vehicles for authorizing an offshore aquaculture program, the majority of stakeholders told us that an offshore aquaculture program should include both permits and leases. Some stakeholders articulated distinct and important benefits for both permits and leases. For example, a few stakeholders said permits should have shorter time frames to ensure compliance with regulations and best management practices while leases should grant a long-term right to occupy a given area of the ocean to encourage investment. 
One stakeholder said that investors may be less receptive to permits as a mechanism for assigning the legal right to occupy an area of the ocean because they perceive permits to grant fewer legal rights. However, others stated that either a permit or lease could be used to secure legal rights and, thereby, encourage financial investment. For instance, two stakeholders said that whether one identifies a document as a permit or a lease is unimportant as long as the document provides legal rights to the area. Stakeholders also expressed a range of opinions on the specific types of permits or leases that should be issued. Most stakeholders supported issuing both commercial and research permits or leases. For example, one stakeholder stressed the importance of research permits or leases for further developing a commercially viable offshore aquaculture industry. In addition, many stakeholders supported issuing emergency permits or leases that allow facility relocation in the case of natural events such as hurricanes or red tides, but NOAA did not support this approach. A NOAA official told us that emergency permits or leases are not necessary because offshore aquaculture facilities would be difficult to move and, therefore, aquaculturists would be unlikely to take advantage of such a permit or lease. NOAA officials emphasized, however, that there are other ways, besides emergency permits or leases, of addressing emergencies, such as modifying the terms of an existing permit to allow facilities to relocate. In addition, stakeholders expressed differing opinions about whether to allow short-term permits or leases that would let an aquaculturist test the feasibility of a proposed offshore aquaculture facility. For example, one stakeholder questioned the utility of short-term permits or leases because the costs associated with offshore aquaculture make it impractical to operate facilities for a short period of time. 
Two others were concerned that either emergency or short-term permits or leases could be used to circumvent permitting requirements associated with longer-term commercial permits or leases. Stakeholders’ opinions also varied on the appropriate length for commercial permits or leases, with some stakeholders supporting time frames of approximately 20 years and others supporting shorter terms such as 10 years. Some stakeholders stressed the need for longer permits or leases to allow time for the operation to become profitable. The states we visited have taken varying approaches on this issue. For example, while Maine issues 10-year leases to facilities in nearshore state waters, a state official recognized that an offshore facility would require a larger investment and, therefore, need a longer-term permit or lease to recoup initial investments. Hawaii issued 20-year leases to its two existing nearshore open-ocean aquaculture facilities. Conversely, a state official from Washington supported shorter permit or lease lengths because offshore aquaculture is new and, therefore, the full impacts on the environment are unknown. Similarly, a few stakeholders we spoke with either did not support longer terms, out of concern that permits or leases would be difficult to revoke midterm in cases of environmental damage, or stressed that if permits had longer terms, regulators should be able to revoke them early if such damage were to occur. The administration’s 2007 legislative proposal for offshore aquaculture would authorize permits for 20-year terms and includes language allowing the suspension or revocation of a permit. Regardless of their opinions on permit or lease terms, the majority of stakeholders supported public involvement during the permitting or leasing process. Most stakeholders indicated that the public should have the opportunity to both comment for the record and present evidence at public hearings associated with permitting or leasing decisions. 
Some stakeholders noted that because facilities will be located in public waters, a permitting or leasing process requires transparency and public input. However, a few stakeholders who supported public participation also expressed concern that some public comments and hearing testimony could be misinformed or unnecessarily stall the decision-making process. Based on their experience with this issue, state regulators and others that we spoke to in Hawaii and Maine also supported public involvement. For example, a key regulator, researchers, and aquaculturists involved with existing aquaculture facilities in Hawaii’s state waters identified public involvement as key to a successful and transparent permitting process. In Hawaii, the main permitting process for authorizing aquaculture operations requires public hearings before approval. Both aquaculturists and researchers in Hawaii said that the public involvement process ultimately decreases opposition to proposals because applicants can modify their plans in response to public comments or alleviate public concerns by providing more comprehensive information about the proposal. For example, one aquaculturist adjusted the site and specifications of his operation in response to requests made during a public hearing. Similarly, state regulators in Maine stressed the importance of public involvement in their state’s permitting and leasing approval process. Maine requires a public scoping meeting before an aquaculturist may submit an aquaculture application. Officials have found this early dialogue between the aquaculturists and the public useful in resolving concerns while the details of the proposed facility are still under development. Developing a process to approve aquaculture facility locations is an important component of regulating offshore aquaculture, according to federal regulators, environmentalists, and researchers. 
For instance, NOAA officials in Hawaii emphasized that siting aquaculture facilities away from areas known to have high concentrations of marine mammals could reduce the likelihood that aquaculture facilities would adversely affect these animals. In Maine, some environmental groups also advocated siting aquaculture facilities outside known fish migration corridors to reduce the interactions between aquaculture-raised and wild fish, thereby reducing the likelihood that disease will be passed from aquaculture-raised to wild populations. Although the majority of stakeholders we contacted supported a variety of approaches that federal regulatory agencies could use to approve aquaculture facility sites, there was a lack of consensus on any one approach. These approaches include (1) determining whether a site is appropriate on a case-by-case basis, (2) prepermitting locations by approving sites independently of and prior to submitting individual facility applications, (3) zoning ocean areas to identify both appropriate areas for offshore aquaculture and prohibited areas, and (4) developing aquaculture parks containing multiple facilities in areas that are unlikely to result in conflicts between aquaculture facilities and other ocean uses and have optimum access to land-based aquaculture services. Those stakeholders who supported using a case-by-case site selection strategy agreed that regulators should assess the appropriateness of a specific site. One stakeholder who supported the approach stated that aquaculturists are most likely to know which locations best fit their planned operations and type of species and, therefore, should be the ones to propose aquaculture facility site locations. Two other stakeholders noted that this approach is advantageous during the early stages of offshore aquaculture development because it requires only knowledge about proposed facility sites rather than a wide variety of potential sites. 
However, a few stakeholders also criticized the case-by-case approach, saying that it could create additional costs for applicants or lengthen the permitting process. In addition, according to a few stakeholders, this approach would create a less standardized process for approving facilities than other approaches would. Another stakeholder expressed concern that the case-by-case approach would not allow regulators to collectively assess the cumulative impacts of several sites located near one another because they would be assessed individually. Currently at the state level, Hawaii, Maine, and Washington all use the case-by-case approach for approving sites within their state waters. For example, in Hawaii, regulators consider the impacts of a proposed site on marine mammals and ocean users, such as native Hawaiian fishermen, among other things, when deciding whether to approve a facility site. Those stakeholders who supported a prepermitting site selection strategy agreed that regulators should assess the suitability of a location for aquaculture before, and independently of, individual aquaculture applications. In this context, the University of Delaware study describes prepermitting as the process of establishing appropriate areas for offshore aquaculture by conducting environmental assessments of potential sites; creating a master plan for siting in the area; determining which aquaculture techniques and projects are appropriate for that area; creating a general permit authorizing use of the area, approved by other regulatory agencies; and, ultimately, issuing individual permits for occupying the area. A few stakeholders told us that prepermitting would make site approval more predictable and consistent, and another said that it would allow for cumulative environmental review of multiple projects. 
However, certain stakeholders who supported a prepermitting approach noted that establishing such a system would be time-consuming and, therefore, not feasible in the short term. A few stakeholders were opposed to using a prepermitting approach to site selection. Two of these stakeholders questioned the appropriateness of making regulatory agencies responsible for selecting facility locations, stating that this approach may not identify the most viable sites. Furthermore, a stakeholder who did support prepermitting still noted that permit holders may unreasonably expect a prepermitted location to produce high yields and blame regulators if this does not occur. Those stakeholders who supported a zoning approach to site selection agreed that regulators should use a process in which government agencies would designate allowable uses—both aquaculture-related and others—for various ocean areas. However, stakeholders expressed many of the same concerns about a zoning approach as they did about a prepermitting approach. For example, a few stakeholders were wary of allowing regulatory agencies to select sites that may ultimately be unsuccessful. Among these stakeholders was a state regulator in Florida, a state that initially created aquaculture zones in its waters but later shifted to a case-by-case site-selection approach because that approach allowed regulators to better identify appropriate sites for specific aquaculture operations. While a few stakeholders considered aspects of zoning and prepermitting approaches to be similar, others distinguished zoning as being a more far-reaching approach than prepermitting. Similarly, a few stakeholders supported zoning as a method to systematically manage the ocean ecosystem and identify appropriate sites. Alternatively, two stakeholders expressed concerns about the technical feasibility of zoning the ocean because the process would be too time-consuming due to the extensive information needed about appropriate uses for broad areas of the ocean. 
In addition, Hawaii state officials responsible for developing Hawaii’s aquaculture industry expressed concerns about zoning. They said that the extensive work necessary for zoning federal waters would unnecessarily delay offshore aquaculture development. Stakeholders we contacted were less supportive of establishing aquaculture parks than of the other approaches to site selection. According to the University of Delaware study, aquaculture parks could be designed to provide adequate space for aquaculture operations in an area environmentally suited to the operations, with minimal user conflicts and access to land and coastal services. Aquaculture parks could be managed by a private-sector entity, a government agency, or a public-private partnership. As with the prepermitting approach, a few proponents of aquaculture parks said the approach would make the permitting process more predictable, while another stakeholder was concerned that it would involve regulators too heavily in the site selection process. In addition, stakeholders identified issues unique to aquaculture parks. One stakeholder said that parks could allow greater business efficiencies by consolidating necessary aquaculture infrastructure and supplies like dock facilities and fuel into one area, but others were concerned that offshore aquaculture facilities would be located too close to one another. They asserted that concentrating offshore aquaculture facilities within the confines of aquaculture parks would not be in the best interest of aquaculturists and could also lead to increased environmental impacts. Most stakeholders we contacted supported an environmental review of the potential impacts of offshore aquaculture facilities before any facilities are sited, which can help agencies approve facilities in areas less likely to suffer ecological harm. 
In addition, stakeholders generally supported monitoring environmental conditions at offshore aquaculture facilities once they begin operations. Most stakeholders supported an adaptive approach to monitoring that would alter monitoring requirements over time to focus on the measures demonstrated to be the most appropriate for tracking changes to the environment. Stakeholders also generally supported conducting regular inspections of offshore aquaculture facilities. However, stakeholders did not always agree on how to mitigate the potential environmental impacts of escaped aquaculture-raised fish, including restrictions on the types of fish that could be raised in offshore cages, whether fish should be marked or tagged, and whether facilities should be required to develop plans outlining how they would respond to fish escapes. Most stakeholders we contacted generally supported an environmental review prior to offshore aquaculture facilities’ beginning operations to ensure that these facilities are established in areas less likely to suffer ecological harm. For instance, a majority of stakeholders recognized the value of reviewing the potential environmental impacts of offshore aquaculture over a broad ocean area before any aquaculture facilities are sited—which involves preparing a PEIS. But these stakeholders also articulated different views on the goal of a PEIS for offshore aquaculture. While some stakeholders emphasized that a PEIS should examine the potential environmental impacts of an offshore aquaculture industry, other stakeholders noted that a PEIS would be most useful if it reduced the need for facility-specific environmental reviews. While the administration’s 2007 legislative proposal requires NOAA to conduct a PEIS, it does not specify exactly what the PEIS should include. In this context, in 2006, California enacted a law to allow fish aquaculture facilities in state marine waters, which requires the state to conduct a review similar to a PEIS. 
The law requires the review to consider, at a minimum, 10 factors, such as appropriate areas for siting aquaculture facilities; the effects of aquaculture on ocean and coastal habitats, marine ecosystems, and commercial and recreational fishing; and the potential environmental impacts of escaped fish, medications, and the use of fish meal and fish oil. A few stakeholders said that it is not important for the federal government to conduct a PEIS for offshore aquaculture. Two of these stakeholders stated that a PEIS would require a significant amount of data and would take a very long time, unnecessarily delaying the development of offshore aquaculture. While a few stakeholders considered the broad level of review in a PEIS to be sufficient, about half of the stakeholders we contacted suggested that a facility-specific environmental review, conducted in accordance with NEPA, should also be required. About half of the stakeholders who supported the facility-specific review said that such reviews could examine site-specific or facility-specific issues that cannot be addressed in a broader PEIS. In its response to our questionnaire, NOAA indicated that a facility-specific review is very important and stated that the complexity of this type of review should reflect the risk level of the project. For instance, a review of a project that uses technologies, species, and sites that are well understood could draw on existing documentation, while a proposal for a project that uses a new species or untested technology may require a more in-depth review. Of the few stakeholders who supported only the PEIS, two said that if the PEIS were done correctly, a facility-specific review should not be necessary. One stakeholder mentioned that requiring a facility-specific review for each proposed offshore aquaculture facility would be expensive for aquaculturists and would be a barrier to offshore aquaculture development. 
With regard to the states’ approaches for addressing environmental reviews, we found that Maine and Hawaii both require facility-specific environmental reviews for proposed aquaculture facilities in their state waters. Maine requires that applicants collect environmental baseline data on sediment characteristics; the benthic community; water quality; and existing uses of the site, such as commercial fishing and recreational boating. Once an application is submitted, the state also conducts a site review, which can include conducting video surveys of the area and gathering water quality information. Hawaii requires a similar level of detail from its applicants through an environmental assessment process. Aquaculture industry representatives and state regulators in Hawaii both told us that they supported Hawaii’s process. Most stakeholders also stated that considering the potential cumulative impacts of aquaculture facilities is important when evaluating offshore aquaculture proposals. Two stakeholders suggested that cumulative impacts be considered as part of the PEIS process. The University of Delaware and Marine Aquaculture Task Force studies both recommended that agencies consider cumulative impacts of offshore aquaculture facilities during environmental reviews. The administration’s 2007 legislative proposal includes language requiring that a permitting process address the potential cumulative impacts of offshore aquaculture on marine ecosystems, human health and safety, other ocean uses, and coastal communities. In addition, many stakeholders offered suggestions for mitigating cumulative impacts, including siting facilities far enough apart that their operations will be less likely to affect one another, combining multiple kinds of aquaculture—such as fish and shellfish—to take advantage of shellfish’s ability to remove nutrients from the water column, and limiting the number of fish within a given cage or area. 
An industry representative also pointed out that it is in the best interest of aquaculturists to locate their facilities far from one another to avoid being affected by potential water quality or disease problems from neighboring facilities. Stakeholders generally supported monitoring a variety of potential environmental impacts of offshore aquaculture facilities once they have been approved and are operating, though they varied on the types of monitoring they supported for fish and shellfish aquaculture facilities. While most stakeholders said it is important to monitor both fish and shellfish aquaculture facilities for impacts on the benthic community and disease outbreaks, stakeholders said it is more important to monitor fish aquaculture facilities than shellfish aquaculture facilities for chemical levels in the water. In addition, some stakeholders mentioned that monitoring fish aquaculture facilities for escapes will be very important. Maine and Washington have developed monitoring programs for their nearshore aquaculture facilities, which provide examples of how the federal government could implement the types of monitoring recommended by stakeholders for offshore aquaculture facilities. Specifically, we found that these states have developed monitoring programs—although they vary significantly between states—to address benthic community, disease, and chemical impacts for nearshore fish aquaculture facilities. For example, Maine’s general NPDES permit for salmon aquaculture facilities requires multiple kinds of benthic community monitoring, including color video or photographic evaluations of the ocean floor under and around each net pen twice per year and a detailed analysis of samples of benthic community organisms at least once every 5 years. 
In contrast, Washington requires video evaluations under net pen facilities twice every 5 years but requires detailed analysis of samples of benthic community organisms only if routine video evaluation results show that the facility samples exceed the permit requirements. Maine and Washington also both have regulations to control disease outbreaks in fish aquaculture facilities. Both states require that an aquaculturist whose fish test positive for certain diseases notify the state within 48 hours. Maine and Washington can require a number of mitigation measures—depending on the severity of the outbreak and the potential for the disease to impact other aquaculture-raised or wild fish—including requiring that the infected fish be quarantined, removed, or destroyed. Finally, if aquaculturists use medications to treat disease, Maine requires them to monitor the concentration of those medications in benthic sediments. Washington requires aquaculturists to monitor for antibiotics in benthic sediments if antibiotic use could pose a threat to human health or the environment. Although monitoring was identified as important by stakeholders, state regulators in Hawaii identified some challenges to monitoring the nearshore, open-ocean aquaculture facilities in Hawaii state waters. Specifically, Hawaii state regulators said they do not have the data to determine whether medications used to treat fish for disease could affect the marine environment. These officials suggested that EPA could help the states evaluate these impacts by developing standardized laboratory tests that could detect medications in the marine environment, as well as by developing protocols for monitoring such medications. Another monitoring challenge, according to aquaculturists in Hawaii, is that some types of monitoring, such as collecting sediment samples beneath the cages for benthic community analysis, are very difficult to conduct in open-ocean conditions. 
Diving for these samples in deep water is dangerous and, as a result, aquaculturists find it difficult to obtain insurance coverage for deep water diving. In addition to supporting specific types of environmental monitoring for fish and shellfish facilities, most stakeholders also supported using an adaptive monitoring approach that would allow regulators to change monitoring requirements over time to focus only on the types of monitoring demonstrated to be the most appropriate for tracking changes to the environment. Some stakeholders said that an adaptive monitoring approach would provide regulators the flexibility to respond to new information on environmental risks and change monitoring requirements accordingly. Others mentioned that, since offshore aquaculture is a new industry, it is difficult to predict the impacts and the monitoring measures needed beforehand, and so the flexibility of adaptive monitoring would be appropriate. The University of Delaware study also recommended that monitoring requirements and regulations be flexible and adaptive to allow regulators to modify these requirements as warranted by changes in environmental conditions. Officials in Maine also supported adaptive monitoring and suggested that regulators need flexibility to adjust monitoring requirements to ensure that resources are focused on monitoring the most important measures. Finally, most stakeholders wanted federal agencies to require inspections for the security of structures and equipment at the aquaculture site, as well as for compliance with the terms and conditions of permits, among other things. The University of Delaware study stated that regulators should conduct both announced and unannounced inspections. For instance, announced inspections could be conducted to oversee chemical treatments of fish or obtain water samples from the cages. Unannounced inspections could be useful if the permitting agency suspects that the operator is not meeting permitting conditions. 
Stakeholders had varied opinions about other policies related to offshore aquaculture that could be used to mitigate the potential environmental impact of escaped aquaculture-raised fish, including restricting the types of fish that could be raised in offshore cages, requiring fish to be marked or tagged, and requiring facilities to develop plans outlining how they would respond to fish escapes. Specifically, a majority of the stakeholders supported a policy that would limit offshore aquaculture to species native to the region in which the facility is located. The administration’s 2007 legislative proposal includes language to require that offshore aquaculture facilities raise only species that are native to the aquaculture facility’s geographic region unless a scientific analysis shows that the harm to the marine environment is negligible or can be mitigated. Maine currently uses a similar approach: a proposal to raise nonnative species that have never been cultured in the state must be presented at a public hearing in addition to undergoing the regular environmental review process. By contrast, an official in California told us that the state prohibits aquaculturists from raising nonnative species. About half of the stakeholders we spoke to also supported a policy that would prohibit raising genetically modified species offshore. The administration’s 2007 legislative proposal includes language to require that offshore aquaculture facilities not raise genetically modified species unless a scientific analysis shows that the harm to the marine environment is negligible or can be mitigated. One stakeholder said he opposed a prohibition on genetically modified species because it could reduce the competitiveness of U.S. industry by preventing U.S. companies from raising species that may become economically important. Stakeholders also had varied views on a policy that would require aquaculturists to mark or tag their fish to distinguish them from wild fish. 
The majority of stakeholders we spoke with supported this policy, often citing the need to hold aquaculture producers accountable for fish escapes. In addition, a few stakeholders said that marking or tagging fish would also allow researchers to gather additional information about the impacts that escaped fish have on wild populations. Three of the six regional fishery management council representatives we spoke with said that marking or tagging aquaculture-raised fish was a good idea. The council representatives were generally concerned with how aquaculture-raised fish would complicate their efforts to enforce wild fisheries regulations. For instance, council representatives said that if aquaculture-raised fish are indistinguishable from wild fish, then this increases the potential for illegally caught wild fish to be passed off as aquaculture-raised fish, undermining wild fisheries enforcement. One NOAA official and a representative of the Gulf of Mexico council, however, suggested that a tracking system with a paper trail to follow aquaculture-raised fish from offshore cages to the marketplace could alleviate some of the concerns raised by stakeholders. Most stakeholders who opposed marking or tagging of aquaculture-raised fish did so because they said that this practice is expensive. A NOAA official opposed requiring marking or tagging for each offshore aquaculture facility, but noted that if there is a scientific basis for it because of a high risk of environmental harm from escapes from a particular aquaculture facility, the agency would support marking or tagging for that facility. States have developed marking requirements for fish raised in nearshore aquaculture facilities that provide examples of how the federal government could implement marking requirements for fish raised offshore. Maine and Washington currently require aquaculture-raised salmon in their marine waters to be marked so as to be distinguishable from wild populations. 
For instance, one environmentalist in Maine explained that wild Atlantic salmon—an endangered species—are highly adapted to their environments, including the particular river in which they were hatched. As a result, interbreeding with aquaculture-raised salmon could change the genetics of the wild population and reduce the ability of wild Atlantic salmon to survive. In Washington, the marking requirement stems from a desire to identify aquaculture-raised Atlantic salmon found spawning in state rivers. British Columbia also has an Atlantic salmon aquaculture industry. Marking aquaculture-raised fish from Washington can clarify whether fish are escaping from U.S. aquaculture facilities or from Canadian ones. Aquaculturists raising fish in Hawaii’s open-ocean state waters told us that the state does not require them to mark or tag their fish. Most stakeholders also supported requiring aquaculturists to develop plans to address fish escapes from their proposed offshore aquaculture facilities. NOAA indicated that requiring aquaculturists to submit escape response plans is very important. The administration’s 2007 legislative proposal states that environmental requirements must include safeguards to prevent fish escapes that may cause significant environmental harm. Most stakeholders also agreed that aquaculturists should be required to develop emergency response plans in the event that aquaculture operations need to be temporarily relocated. The University of Delaware study also supported the development of such plans, which it suggested could help aquaculturists relocate their facilities in an emergency, such as if a red tide or large storm system threatened the aquaculture-raised fish. Most stakeholders also supported a requirement that aquaculturists provide a financial guarantee, such as a bond, letter of credit, insurance policy, or trust fund, to cover the cost of removing abandoned aquaculture facilities. 
For example, two stakeholders supported this policy because, in the event that the aquaculturist goes bankrupt, the guarantee prevents the government from having to pay to remove the facility. Both Maine and Hawaii use a similar approach for aquaculture in their state waters by requiring companies to obtain bonds for removing aquaculture facilities when aquaculture operations cease. In its response to our questionnaire, NOAA indicated that it supports requiring this type of financial guarantee. Stakeholder views varied, however, about whether a similar financial guarantee should be required to remediate environmental damage caused by an offshore aquaculture operation, with about half of the stakeholders supporting such a requirement as a necessary and logical accountability provision. A few stakeholders stated that without a financial guarantee, any damage caused by a facility would require public funds for remediation. Other stakeholders objected to requiring a financial guarantee for remediating environmental damage. Some stakeholders cited a variety of concerns with bonds for environmental remediation, such as (1) difficulty proving that the environmental damage was caused by a particular facility, (2) difficulty quantifying the damage, and (3) that the cost of providing such a guarantee, particularly if there are no numerical limits on the total environmental damages that could be claimed, might hinder offshore aquaculture industry development. One NOAA official said that requiring a financial guarantee for mitigation of the benthic habitat in the immediate vicinity of the aquaculture site is practical but did not agree with a requirement for mitigation of all other environmental damage. 
To address the issue of financial guarantees to cover environmental damage from aquaculture facilities in state regulated waters, California recently enacted a marine aquaculture law, which includes a provision requiring a financial guarantee from companies to cover environmental damage, but specifies that the extent of environmental damage and related costs will be determined by the state Fish and Game Commission. An environmentalist involved in the negotiations surrounding the law explained that identifying a specific entity—the state Fish and Game Commission—to determine the extent of environmental damage was a compromise acceptable to both the aquaculture industry and environmental groups. Specifically, he said that environmentalists supported the compromise because it holds aquaculture facilities accountable for environmental damage, while industry supported it because it is confident that the Fish and Game Commission will deal with environmental damage issues fairly. About half of the stakeholders that we contacted said that they would support a similar provision at the federal level. Two stakeholders suggested that NOAA could make determinations about the extent of environmental damage at the federal level since it has experience assessing impacts on the marine environment. One stakeholder who did not support a federal government system similar to California’s feared that the criteria for identifying environmental damage could change from year to year, thereby increasing the risk of investing in offshore aquaculture. It is also important for a regulatory framework to include federally funded research to address gaps in current knowledge on a variety of issues related to offshore aquaculture. 
Stakeholders identified four research areas as particularly appropriate for federal funding: the development of alternative fish feeds; the development of best management practices; the investigation of how escaped aquaculture-raised fish might impact wild fish populations; and the development of hatchery technologies to breed and grow fish, while effectively managing disease. In addition, while NOAA and USDA fund research on marine aquaculture through, for instance, competitive grants, some researchers said that these grants are funded over time periods that are too short to accommodate certain types of research. Stakeholders we contacted and the four key studies we reviewed generally agreed that the federal government should fund aquaculture research to address gaps in current knowledge. The four research areas stakeholders identified as particularly appropriate for federal funding are as follows:
Most stakeholders supported research to help develop alternative fish feeds, citing reasons such as protecting the wild species currently used as a source of fish meal and fish oil from overfishing and helping to lower industry costs. For example, a NOAA official noted that the demand for fish feed has increased in recent years, leading to a steep rise in the price of aquaculture fish feeds. Because of this price increase, industry representatives and researchers are interested in developing alternative feeds that cost less.
Most stakeholders also supported federal research that would help develop best management practices. For example, one stakeholder said that best management practices are very important because they identify accepted practices for aquaculturists to follow and provide a method for agencies to judge whether aquaculture facilities are operating appropriately.
Most stakeholders supported federally funded research investigating how escaped aquaculture-raised fish might impact wild fish populations.
One stakeholder supported this research because existing research on escapes does not focus on the species likely to be raised offshore.
Many stakeholders also supported federal research that would help develop hatchery technologies to breed and grow the fish that ultimately populate offshore cages, while effectively managing disease. Aquaculturists have identified the hatchery stage of aquaculture as particularly difficult because hatchery fish are susceptible to disease, young fish need specially formulated feeds, and breeding fish is complex.
While stakeholders generally identified these areas as priorities, a few stakeholders also emphasized that federal funding should focus on research that helps regulate the aquaculture industry or mitigate environmental impacts. Research into how escaped aquaculture-raised fish might impact wild fish populations is an example of this type of research. Other stakeholders, as well as the U.S. Ocean Commission study, suggested that federal research should also assist aquaculture industry development. For instance, one stakeholder suggested that the top issue for government funding should be determining which species will be commercially viable for offshore aquaculture. The same stakeholder noted that developing a species for aquaculture is difficult for the private sector because it is very expensive and could take 10 to 30 years. NOAA and USDA currently support research on marine aquaculture through, for example, competitive grants. NOAA's major competitive grant program for marine aquaculture is the National Marine Aquaculture Initiative, which funded approximately $4.6 million in projects related to marine species during the 2006 grant cycle. NOAA also manages funding for a number of offshore aquaculture-related projects, such as the open-ocean aquaculture demonstration project off the coast of New Hampshire.
Similarly, USDA's Cooperative State Research, Education, and Extension Service funds external aquaculture research through such vehicles as competitive grant programs, land grant institutions, and regional aquaculture centers. In addition, USDA's Agricultural Research Service conducts research at its federal science centers and laboratories. Several researchers, including some whom we interviewed during our site visits, identified potential limitations of the current federal aquaculture research programs. Specifically, they said that many of the available competitive grants are funded over time periods that are too short and at funding levels too low to accommodate certain types of research. For example, researchers in Hawaii said that the development of healthy breeding fish to supply offshore aquaculture operations can require years of intensive breeding efforts, but that it is difficult to obtain consistent research funding over this longer time period. Both USDA and NOAA officials acknowledged that demonstration projects and other lengthy research projects may be difficult to complete within current competitive grant time frames. However, they noted that appropriations for their programs dictate the current length of these grants. USDA officials identified some programs that could be used for long-term research, including competitive grants from the agency's regional aquaculture centers or the agency's Agricultural Research Service internal research projects. The regional aquaculture centers set their own priorities and funding allocations, which allows centers to focus on long-term offshore aquaculture research if they so choose. For instance, the regional center in Hawaii has supported research that applies to offshore aquaculture, but none of the other centers currently support research specifically related to offshore aquaculture.
A USDA official also suggested that the Agricultural Research Service could support long-term projects if such projects are identified as priorities in future 5-year plans for aquaculture research. The Agricultural Research Service uses feedback from aquaculturists and regulatory agencies, among others, to identify priorities and develop 5-year plans for aquaculture research. Agricultural Research Service officials indicated that the current 5-year plan directs about one-third of the agency’s aquaculture funding to research related to marine species. An effective federal regulatory framework for U.S. offshore aquaculture will be critical to facilitating the development of an economically sustainable industry, while at the same time protecting the health of marine ecosystems. As the Congress considers providing a cohesive legislative framework for regulating an offshore aquaculture industry, we believe it will need to consider a number of important issues. A key first step in developing a U.S. regulatory framework could be designating a lead federal agency that has the appropriate expertise and can effectively collaborate and coordinate with other federal agencies. In addition, setting up clear legislative and regulatory guidance on where offshore aquaculture facilities can be located and how they can be operated could help ensure that these facilities have the least amount of impact on the ocean environment. Moreover, a regulatory framework could also include a process for reviewing the potential environmental impacts of proposed offshore aquaculture facilities, monitoring the environmental impacts of these facilities once they are operational, and quickly identifying and mitigating environmental problems when they occur. 
Inclusion of an adaptive management approach by which the monitoring process can be modified over time could be useful not only to ensure that the most effective approaches are being used to protect the environment but also to help reduce costs to the industry. In addition, a transparent regulatory process that gives states and the public opportunities to comment on specific offshore aquaculture projects could help allay some of the concerns about the potential environmental impacts of offshore aquaculture. Finally, because the offshore aquaculture industry is in its infancy, much remains unknown, and many technical challenges remain, such as determining the best species to raise offshore and the most effective offshore aquaculture practices. In this context, there may be a role for the federal government in funding the research needed to help answer these questions and facilitate the development of an ecologically sound offshore aquaculture industry. We provided a draft of this report to the Departments of Agriculture, the Army, and Commerce, as well as to the Environmental Protection Agency, for review and comment. We received written comments from the Department of Commerce, EPA, and USDA. Overall, the Department of Commerce's NOAA stated that the report accurately presented information regarding the opportunities and challenges for offshore aquaculture and will contribute to the discussion of environmentally responsible and sustainable offshore aquaculture. NOAA also commented on many issues discussed in our draft report, expressing three areas of concern. NOAA listed several issues it thought were not adequately addressed in the report, including the role aquaculture can play in the development of a safe, sustainable, domestic seafood supply. These issues were outside our scope, which focused on identifying key elements of a federal regulatory framework for offshore aquaculture.
NOAA said that by indicating that the environmental impacts of an offshore aquaculture industry are uncertain due to a lack of data specific to such facilities, we were diminishing the importance of the findings from environmental monitoring of the small-scale open ocean aquaculture operations in state waters. We do not agree. Our report acknowledges that the results of environmental monitoring at small-scale open ocean facilities have found modest impacts. However, as larger facilities begin operating, their impacts could become more pronounced. Given that such facilities do not yet exist, it is too early to know what their impacts will be. NOAA said that our report did not adequately discuss offshore shellfish aquaculture. We believe that it did. Most of the policy issues raised in the report apply equally to shellfish and fish aquaculture. In those cases where the issues differ for shellfish and fish aquaculture, we discussed them separately. NOAA also provided technical comments, which we have incorporated in the report as appropriate. NOAA’s comments and our detailed responses are presented in appendix III. EPA provided clarifying language regarding their expertise in regulating water quality related to offshore aquaculture, which we incorporated as appropriate. EPA’s comments are presented in appendix IV. The Department of Agriculture provided two comments on the report. First, USDA mentioned two issues that it did not think were adequately addressed in the report. USDA said that a mechanism for a coordinated federal-wide research framework exists through the Joint Subcommittee on Aquaculture. Our report acknowledges that USDA chairs the interagency Joint Subcommittee on Aquaculture and that the Subcommittee is currently working to update the federal strategic plan for aquaculture research. USDA also said that it has a wide diversity of aquaculture research that is not limited or directed by whether the fish will be raised in fresh, brackish, or salt water. 
Characterizing all of USDA’s aquaculture-related research activities was not within the scope of our report. Rather, our report is focused on offshore marine aquaculture. As such, we reported what stakeholders told us regarding research related to offshore marine aquaculture. Second, USDA explained that it did not feel that it was appropriate to respond to our questionnaire on offshore aquaculture because it asked for individual opinions related to policy matters. USDA’s comments and our detailed responses are presented in appendix V. The Department of the Army did not have any comments on the report. We are sending copies of this report to the Secretaries of the Army, Agriculture, and Commerce; the Administrator of the EPA; appropriate congressional committees; and other interested parties. We also will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report or need additional information, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and of Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI. The objective of this report was to identify key issues that should be addressed in the development of an effective regulatory framework for U.S. offshore aquaculture. 
To address this objective, we reviewed key academic and government-sponsored studies that analyzed proposed regulatory frameworks for offshore aquaculture in federal waters; reviewed existing federal laws that include provisions that are applicable to offshore aquaculture, as well as federal agencies’ regulations, policies, and guidance for marine aquaculture; reviewed laws, regulations, policies, and guidance for marine aquaculture in selected states; visited aquaculture facilities in selected states; and administered questionnaires to, and conducted follow-up structured interviews with, a variety of aquaculture stakeholders. We identified studies on offshore aquaculture regulations by conducting a literature search of online databases for studies and reports from government agencies, nonprofit organizations, industry associations, and academia. We also obtained references from aquaculture experts and agency officials at the National Oceanic and Atmospheric Administration (NOAA), the U.S. Department of Agriculture (USDA), and the U.S. Environmental Protection Agency (EPA). After reviewing various studies, we identified four key studies that examined offshore aquaculture and made recommendations to improve the regulatory framework for offshore aquaculture. These key studies—by the Marine Aquaculture Task Force, the University of Delaware, the Pew Oceans Commission, and the U.S. Commission on Ocean Policy—brought together ocean policy stakeholders to examine, among other things, potential regulatory frameworks for offshore aquaculture. These studies of offshore aquaculture regulations were each developed in the last 5 years with stakeholder input and discuss a variety of issues related to marine aquaculture. Throughout the report, we cite those studies that reached similar conclusions or made similar recommendations on particular policy issues. If a study is not cited for a particular policy issue, it is because the study did not address that issue. 
To identify existing federal laws that include provisions that are applicable to offshore aquaculture, as well as federal agencies' regulations, policies, and guidance for marine aquaculture, we interviewed officials from NOAA's National Marine Fisheries Service, NOAA's National Ocean Service, the U.S. Army Corps of Engineers, the EPA, the Department of the Interior's Fish and Wildlife Service and Minerals Management Service, and USDA's Animal and Plant Health Inspection Service. We also reviewed a wide variety of laws to identify federal agencies' responsibilities and authorities for offshore aquaculture. The laws we reviewed included the Marine Mammal Protection Act, the Endangered Species Act, the Magnuson-Stevens Fishery Conservation and Management Act, the Coastal Zone Management Act, the Rivers and Harbors Act, the National Environmental Policy Act, the National Aquaculture Act of 1980, and the Clean Water Act. We identified relevant state laws, regulations, policies, and guidance for marine aquaculture by interviewing state regulators, environmentalists, representatives of the commercial fishing industry, and representatives of the aquaculture industry in California, Florida, Hawaii, Maine, Texas, and Washington. We selected these states because they currently regulate, or are in the process of developing regulatory frameworks for, aquaculture operations in state waters, and because they represent different geographic areas of the United States. Additionally, we met with state and federal regulators in Hawaii, Maine, and Washington—the states with active nearshore fish aquaculture industries—to discuss state regulatory systems and visited aquaculture facilities in Hawaii and Maine. Based on issues identified in the four key studies, and in our interviews with federal and state officials, we developed a questionnaire on the elements of a regulatory framework for offshore aquaculture.
Prior to distributing the questionnaire, we conducted pretests with stakeholders who were similar to those we intended to survey and modified some questions in response to those results. The final questionnaire covered a range of topics, including which federal agencies should be responsible for various program administration activities, such as program management and agency coordination; how a potential permitting or leasing program should be structured, including to what extent various stakeholders should be involved in the process; opinions on the types of environmental review and monitoring that should be required as part of a regulatory framework; and what should be the priority areas for potentially federally funded aquaculture research. In addition to developing the questionnaire, we identified key aquaculture stakeholders to respond to the questionnaire. We selected these stakeholders because of their expertise in aquaculture at the national, state, or local level; to provide representation across academia, government, industry, and the nonprofit sector; and to provide broad geographic representation throughout the United States. To ensure that our initial list of stakeholders satisfied these criteria, we asked two noted aquaculture experts to review our selections. Each expert submitted three additional names for our consideration, two of which were the same individuals; otherwise, both experts agreed that our list satisfied our criteria. The two individuals recommended by both experts were then included as stakeholders. See appendix II for a list of the stakeholders who responded to our questionnaire. We distributed the questionnaire to 28 stakeholders electronically, asking them to fill it out and return it to GAO. We received 25 responses. Three federal agencies with responsibilities relating to offshore aquaculture—the Department of the Interior, the USDA, and the EPA—did not provide official or complete written responses to the questionnaire.
However, we met with officials from these agencies to discuss their responsibilities related to aquaculture. After reviewing the questionnaire responses we received, we conducted follow-up structured interviews with each stakeholder to clarify some responses and to obtain additional details on stakeholders' responses to some open-ended questions. To identify trends in responses, we analyzed the results of the questionnaire by summarizing responses and producing descriptive statistics using Microsoft Access. In addition, we qualitatively analyzed open-ended responses from the questionnaire and responses from follow-up interviews to provide additional insight into stakeholder views on key issues that should be addressed in the development of a regulatory framework for offshore aquaculture. For purposes of characterizing the results from our questionnaire and follow-up interviews of our 25 stakeholders, we identified specific meanings for the words we used to quantify the results, as follows:
"a few" means at least three, and up to five, stakeholders;
"some" means between 6 and 11 stakeholders;
"about half" means 12 to 14 stakeholders;
"a majority" of stakeholders and "many" stakeholders both mean 15 to 19 stakeholders; and
"most" means 20 stakeholders or more.
We conducted this performance audit from April 2007 to May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. The following stakeholders responded to our questionnaire and participated in follow-up interviews regarding administrative and environmental issues that should be addressed in the development of an effective regulatory framework for U.S.
offshore aquaculture:
Sue Aspelund, Special Assistant to the Commissioner, Alaska Department of Fish and Game;
Brian E. Baird, Assistant Secretary, Ocean and Coastal Policy, California
Sebastian M. Belle, Executive Director, Maine Aquaculture Association;
John Connelly, President, National Fisheries Institute;
Cora Crome, Fisheries Policy Advisor, Office of the Governor, State of
Bill Dewey, Manager of Public Affairs, Taylor Shellfish Company;
Robin Downey, Executive Director, Pacific Coast Shellfish Growers
Kathleen Drew, Executive Policy Advisor, Office of Washington Governor
Tim Eichenberg, Former Director, Pacific Regional Office, Ocean
John Forster, Ph.D., President and Aquaculture Consultant, Forster
Rebecca Goldburg, Ph.D., Senior Scientist, Environmental Defense Fund;
Samantha D. Horn Olsen, Aquaculture Policy Coordinator, Maine Department of Marine Resources;
Dr. Richard Langan, Director, Atlantic Marine Aquaculture Center and Open Ocean Aquaculture Program, University of New Hampshire;
George H. Leonard, Ph.D., Aquaculture Director, Ocean Conservancy;
John R. MacMillan, Ph.D., President, National Aquaculture Association;
Dr. Larry McKinney, Director of Coastal Fisheries, Texas Parks and
Rosamond Naylor, William Wrigley Senior Fellow and Director, Program on Food Security and the Environment, Stanford University;
J.E. Jack Rensel, Ph.D., Principal Scientist, Rensel Associates Aquatic
Dr. Michael Rubino, Manager, NOAA Aquaculture Program, National Oceanic and Atmospheric Administration;
Mitchell Shapson, LL.M., Policy and Legal Analyst, The Institute for
Neil Anthony Sims, Co-founder and President, Kona Blue Water Farms, LLC, and Founding Boardmember, Ocean Stewards Institute;
Chip Smith, Office of the Assistant Secretary of the Army (Civil Works), Assistant for Environment, Tribal and Regulatory Affairs;
Linda L. Smith, Senior Policy Advisor, Office of the Governor, State of
Albert G.J. Tacon, Ph.D., Technical Director, Aquatic Farms Ltd.;
Paula Terrel, Commercial Fisherman & Fish Farming Issues Coordinator, Alaska Marine Conservation Council;
Jose Villalon, Director, Aquaculture Program, World Wildlife Fund; and
Sherman Wilhelm, Director, Division of Aquaculture, Florida Department of Agriculture and Consumer Services.
The following are GAO's comments on the Department of Commerce's letter dated April 25, 2008.
1. The issues identified by NOAA are outside the scope of our review, which was to identify key elements of a federal regulatory framework for offshore aquaculture.
2. We believe our statements regarding the lack of data on the environmental impacts from large-scale commercial offshore aquaculture operations are appropriate. As NOAA points out, these large-scale operations do not yet exist. On page 9 of the report, we stated that environmental monitoring at the existing small-scale research and commercial open-ocean aquaculture operations in Hawaii, New Hampshire, and Puerto Rico has found modest environmental impacts. However, as facilities begin to scale up, their impacts on the marine environment could become more pronounced. Given the lack of such large facilities to date, it is too early to know what the environmental impacts of large-scale commercial offshore aquaculture facilities will be.
3. We believe that the report adequately discusses offshore shellfish aquaculture within the context of offshore aquaculture. Most of the policy issues raised in the report apply equally to shellfish and fish aquaculture. For instance, the need for clear federal leadership, a sound permitting system, and additional research all apply equally to shellfish and fish. In cases where the issues differ for shellfish and fish aquaculture—such as for environmental monitoring protocols—we discussed shellfish aquaculture separately from fish aquaculture.
4.
We are aware of the efforts of the Gulf of Mexico Fishery Management Council to develop a generic amendment to its fishery management plans to establish an offshore aquaculture program in the Gulf of Mexico. While we discuss the roles and responsibilities of fishery management councils on pages 19 and 20, we did not discuss this regional initiative in our report because it was outside our scope of identifying key elements of a federal regulatory framework for offshore aquaculture.
The following are GAO's comments on the Department of Agriculture's letter dated May 1, 2008.
1. We believe the Joint Subcommittee on Aquaculture was adequately addressed in the report. Specifically, we mentioned on page 11 that USDA chairs the interagency Joint Subcommittee on Aquaculture and that the Subcommittee is currently working to update the federal strategic plan for aquaculture research. In addition, characterizing all of USDA's aquaculture-related research activities was not within the scope of our report. Rather, our report is focused on offshore marine aquaculture. As such, we reported what stakeholders told us regarding research related to offshore marine aquaculture.
In addition to the individual named above, Stephen D. Secrist, Assistant Director; Leo G. Acosta; Nancy Crothers; Kathleen Gilhooly; Janice M. Poling; Katherine Raheb; Jerry Sandau; Julie E. Silvers; Barbara Steel-Lowney; Shana Wallace; and Monica L. Wolford made significant contributions to this report.
U.S. aquaculture, the raising of fish and shellfish in captivity, has generally been confined to nearshore coastal waters or other water bodies, such as ponds, that fall under state regulation. Recently, there has been increased interest in expanding aquaculture to offshore waters, which would involve raising fish and shellfish in the open ocean and would consequently bring these operations under federal regulation. While the offshore expansion has the potential to increase U.S. aquaculture production, no comprehensive legislative or regulatory framework to manage such an expansion exists. Instead, multiple federal agencies have authority to regulate different aspects of offshore aquaculture under a variety of existing laws that were not designed for this purpose. In this context, GAO was asked to identify key issues that should be addressed in the development of an effective regulatory framework for U.S. offshore aquaculture. In conducting its assessment, GAO administered a questionnaire to a wide variety of key aquaculture stakeholders; analyzed laws, regulations, and key studies; and visited states that regulate nearshore aquaculture industries. Although GAO is not making any recommendations, this review emphasizes the need to carefully consider a wide array of key issues as a regulatory framework for offshore aquaculture is developed. Agencies that provided official comments generally agreed with the report. In developing a regulatory framework for offshore aquaculture, it is important to consider a wide array of issues, which can be grouped into four main areas. (1) Program administration: Addressing the administration of an offshore program at the federal level is an important aspect of a regulatory framework. Stakeholders that GAO contacted and key studies that GAO reviewed identified specific roles and responsibilities for federal agencies, states, and regional fishery management councils.
Most stakeholders and the studies agreed that the National Oceanic and Atmospheric Administration (NOAA) should be the lead federal agency and emphasized that coordination with other federal agencies will also be important. In addition, stakeholders and some of the studies recommended that the states play an important role in the development and implementation of an offshore aquaculture program. (2) Permitting and site selection: It will also be important to establish a regulatory process that clearly identifies where aquaculture facilities can be located and for how long. For example, many stakeholders stated that offshore facilities will need the legal right, through a permit or lease, to occupy an area of the ocean. However, stakeholders varied on the specific terms of the permits or leases, including their duration. Some stakeholders said that longer permits could make it easier for investors to recoup their investments, while others said that shorter ones could facilitate closer scrutiny of environmental impacts. This variability is also reflected in the approaches taken by states that regulate aquaculture in their waters. One state issues 20-year leases while another issues shorter leases. Stakeholders supported various approaches for siting offshore facilities, such as case-by-case site evaluations and prepermitting some locations. (3) Environmental management: A process to assess and mitigate the environmental impacts of offshore operations is another important aspect of a regulatory framework. For example, many stakeholders told GAO of the value of reviewing the potential cumulative environmental impacts of offshore operations over a broad ocean area before any facilities are sited. About half of them said that a facility-by-facility environmental review should also be required. Two states currently require facility-level reviews for operations in state waters. 
In addition, stakeholders, key studies, and state regulators generally supported an adaptive monitoring approach to ensure flexibility in monitoring changing environmental conditions. Other important areas to address include policies to mitigate the potential impacts of escaped fish and to remediate environmental damage. (4) Research: Finally, a regulatory framework needs to include a federal research component to help fill current gaps in knowledge about offshore aquaculture. For example, stakeholders supported federally funded research on developing (1) alternative fish feeds, (2) best management practices to minimize environmental impacts, (3) data on how escaped aquaculture fish might impact wild fisheries, and (4) strategies to breed and raise fish while effectively managing disease. A few researchers said that the current process of funding research for aquaculture is not adequate because the research grants are funded over periods that are too short to accommodate certain types of research, such as hatchery research and offshore demonstration projects.
The Rail Passenger Service Act of 1970 created Amtrak as the nation's intercity passenger railroad. Prior to Amtrak's creation, intercity passenger service was provided by a number of individual railroads, which had lost money, especially after World War II. The act, as amended, gave Amtrak a number of goals, including providing modern, efficient intercity passenger rail service; giving Americans an alternative to automobiles and airplanes to meet their transportation needs; and minimizing federal operating subsidies. As of June 1999, Amtrak provided intercity passenger service along 42 routes serving most states. Like all major national intercity rail services in the world, Amtrak receives substantial government support. From 1971 through June 1999, the federal government provided Amtrak with nearly $23 billion in financial assistance. However, in December 1994, at the direction of the administration, Amtrak established the goal of eliminating its need for federal operating subsidies, that is, achieving operational self-sufficiency, by fiscal year 2002. In addition, the Amtrak Reform and Accountability Act of 1997 authorized appropriations for Amtrak's operating and capital expenses through fiscal year 2002 but prohibited Amtrak from using federal funds for operating expenses, except for an amount equal to excess Railroad Retirement Tax Act payments after 2002. In fiscal year 2002, Amtrak expects to spend only $185 million (its estimated payments to the railroad retirement system in excess of the retirement benefits for Amtrak employees) of federal funding for expenses other than capital projects. To meet the goal of operating self-sufficiency and respond to continually growing losses and a widening gap between operating deficits and federal operating subsidies, Amtrak developed a series of strategic business plans.
By following these plans, Amtrak has attempted to increase revenues and control costs through such actions as expanding mail and express service, adjusting routes and service frequency, and reorganizing into strategic business units. Its Board of Directors approved Amtrak’s most recent strategic business plan in October 1998. Historically, Amtrak received separate federal appropriations for operating expenses and capital improvements. For fiscal year 1999, Amtrak received a single capital appropriation of $609 million instead of separate appropriations for operating and capital assistance. However, the conference report accompanying the appropriation provided that Amtrak could use appropriated funds for the maintenance of equipment (an operating expense) in addition to traditional capital investments. The Congress also provided Amtrak with financial assistance through the Taxpayer Relief Act of 1997. This act made a total of about $2.2 billion available to Amtrak in fiscal years 1998 and 1999 to acquire capital improvements and pay for the maintenance of existing equipment, among other things. The Amtrak Reform and Accountability Act of 1997 made certain reforms to Amtrak’s operations. Among other things, the act (1) eliminated existing statutory and contractual labor protection arrangements as of May 31, 1998, and required negotiations over new arrangements; (2) repealed the statutory ban on contracting out work when it would result in employee layoffs and made contracting out part of the collective bargaining process (except for food and beverage service, for which contracting out was already allowed); and (3) placed a $200 million cap on the aggregate amount that Amtrak and others must pay rail passengers for all claims (including claims for punitive damages) arising from a single accident or incident. 
The act also established an independent council—the Amtrak Reform Council—to evaluate Amtrak’s performance and make recommendations for cost containment, productivity improvements, and financial reforms. If at any time more than 2 years after the enactment of the act and the implementation of a financial plan for operating within authorized funding levels, the Council finds that Amtrak is not meeting its financial goals or that Amtrak will require federal operating funds after December 2002, then the Council is to submit to the Congress, within 90 days, an action plan for a restructured national intercity passenger rail system. In addition, if the above events occur, Amtrak is required to develop and submit an action plan for its liquidation. The act also eliminated the requirement that Amtrak issue preferred stock to the Department of Transportation in the value of federal appropriations received. As a result, beginning with its fiscal year-end 1998 audited financial statements, Amtrak, following guidance from its external auditors, recorded a significant amount of federal financial assistance as revenues instead of preferred shareholder equity. In addition, a significant amount of the federal funds made available by the Taxpayer Relief Act was also recorded as revenues. One effect of this situation is that Amtrak’s fiscal year 1998 financial statements are not comparable to previous financial reports unless certain adjustments are made. In this report, we present net loss and working capital amounts that exclude the amount of federal assistance that Amtrak’s audited financial statements include as revenues or current assets in 1998. These adjustments allow us to better compare Amtrak’s net loss and working capital positions over time. Amtrak had made some progress in reducing its net losses in recent years—from about $833 million in fiscal year 1994 to $762 million in fiscal year 1997. 
However, Amtrak’s net loss (adjusted to exclude $577 million of federal funds that its audited financial statements count as revenues) increased to $930 million in fiscal year 1998. (See fig. 1.) This amount is the largest net loss in the last 10 years. One of the reasons for the increase is that the 1998 figure includes retroactive payments attributable to labor negotiations concluded by the end of 1998. But, even when the roughly $106 million of such labor payments are not included in the net loss, the net loss is still $824 million, $62 million more than in fiscal year 1997. Amtrak officials stated that the increase in net loss is also due to an increase in capital investment that resulted in increased depreciation expenses. Specifically, depreciation expenses, a noncash item, increased by $52 million in 1998—from $242 million in fiscal year 1997 to $294 million in fiscal year 1998. Amtrak expects that depreciation expenses will grow by another $66 million in fiscal year 1999 and by additional amounts in subsequent years as it makes additional capital investments. While an increase in depreciation—which reflects the amount of capital equipment that is consumed and must be replaced in future years—increases net loss, Amtrak points out that investments resulting in increased depreciation expenses are expected to have positive effects in the future, such as increasing revenues, reducing costs, and eliminating the need for federal operating support. In October 1998, Amtrak estimated that the net loss for fiscal year 1999 will be $930 million. However, through April 1999, Amtrak’s net loss for the current fiscal year is $11.4 million less than expected. Another measure of Amtrak’s overall financial condition is its working capital (current assets less current liabilities). Working capital measures a corporation’s ability to pay its bills when due. 
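The net-loss adjustments described above are simple arithmetic; a quick sketch, using the report’s figures in millions of dollars (the variable names are ours, not Amtrak’s):

```python
# FY1998 net-loss figures from the report, in millions of dollars.
adjusted_loss_1998 = 930   # net loss excluding $577M of federal funds counted as revenues
retroactive_labor = 106    # retroactive labor payments booked entirely in FY1998
loss_1997 = 762            # FY1997 net loss

loss_without_labor = adjusted_loss_1998 - retroactive_labor
print(loss_without_labor)              # 824
print(loss_without_labor - loss_1997)  # 62 (million worse than FY1997)
```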
Amtrak’s working capital deficit (adjusted to exclude $647 million in short-term investments and related interest resulting from unspent Taxpayer Relief Act funds) at the end of fiscal year 1998 was about $400 million. This amount is $100 million worse than the $300 million working capital deficit Amtrak recorded at the end of fiscal year 1997 and is the worst such deficit in at least the last 10 years. Figure 2 shows the degree to which working capital balances have fallen over the past 4 years. Amtrak continues to need to borrow money to pay its current-year operating expenses, including those for payroll, fuel, ticket stock, and food. At the end of fiscal year 1997, Amtrak had outstanding borrowing of $75 million to meet its operating expenses. At the end of fiscal year 1998, the amount of outstanding borrowing needed to meet operating expenses had fallen to $50 million. This $50 million in year-end borrowing was half of what Amtrak had estimated at the beginning of the fiscal year. However, at the end of fiscal year 1999, Amtrak estimates, it will need to have $100 million in borrowing to meet its operating expenses. Additionally, Amtrak plans to have short-term borrowing of $100 million outstanding at the end of fiscal year 2000. To help its cash flow, Amtrak is seeking legislation specifically authorizing it to use its fiscal year 2000 capital appropriation to pay for a wider variety of maintenance expenses than in fiscal year 1999. This would be similar to the flexibility allowed recipients of federal transit financial assistance. For fiscal year 2000, Amtrak is requesting the authority to use its capital appropriation for maintenance-of-way expenses (e.g., costs for maintaining tracks) in addition to maintenance-of-equipment expenses, as permitted in fiscal year 1999. Without this authority, Amtrak has stated that it will not be able to use existing cash to cover $50 million of its operating expenses in fiscal year 2000. 
As of June 1999, Amtrak had not developed a way to meet its financial obligations if the Congress does not allow this flexibility. Amtrak’s October 1998 strategic business plan does not anticipate that the corporation will use any federal subsidies for operating expenses (other than for excess railroad retirement expenses) in fiscal year 2002—1 year earlier than requested by the administration and specified in the Amtrak Reform and Accountability Act of 1997. However, considerable uncertainty exists about whether Amtrak will be able to achieve its targets for revenues and expenses for several key business plan actions, and Amtrak historically has not met its financial goals for increasing revenues and reducing expenses. Amtrak’s efforts are pointed toward achieving operating self-sufficiency by fiscal year 2002. To do this, Amtrak’s strategic business plan focuses on reducing what it calls its “budget gap,” which Amtrak defines as the corporation’s net loss less capital-related expenses, including depreciation of its physical plant (such as locomotives, cars, and stations), other noncash expenses, and expenses from its program to progressively overhaul railcars (i.e., to conduct a limited overhaul of cars each year rather than a single comprehensive overhaul every several years). In essence, the budget gap represents expenses not funded by its revenues or its capital program. According to Amtrak, its budget gap fell by $18 million in fiscal year 1998—from $512 million in fiscal year 1997 to $494 million in fiscal year 1998 after an adjustment for the cost of retroactive labor payments is made. (See fig. 3.) 
Even though Amtrak’s audited financial statements allocated the full $106 million amount of the retroactive payments for recently negotiated labor agreements to fiscal year 1998 expenses, Amtrak officials, in calculating the budget gap, allocated the amounts over the years for which those payments actually accrued ($35 million in fiscal year 1996 and in fiscal year 1997 and $36 million in fiscal year 1998). Amtrak officials told us that they believe that such an allocation is a more appropriate methodology for presenting its financial situation. The result of this allocation improves Amtrak’s fiscal year 1998 budget gap by $70 million. Amtrak’s October 1998 strategic business plan estimates that the budget gap will be reduced by another $10 million in fiscal year 1999. However, even with these improvements in Amtrak’s budget gap, Amtrak must still reduce its losses substantially if it is to become operationally self-sufficient by the end of fiscal year 2002. In the next 4 fiscal years, Amtrak must reduce its budget gap by $309 million, from $494 million to an amount equivalent to excess railroad retirement payments (estimated at $185 million in fiscal year 2002). This needed improvement by 2002 is about 5 times the $60 million improvement Amtrak was able to achieve in the previous 4 fiscal years, 1995 through 1998. Another issue in Amtrak’s calculation of the budget gap is the treatment of progressive overhaul expenses. Amtrak does not include these expenses in its calculation of the budget gap even though they are considered to be operating expenses under generally accepted accounting principles. As described, the Amtrak Reform and Accountability Act of 1997 prohibits Amtrak from using federal funds for operating expenses, except for an amount equal to excess Railroad Retirement Tax Act payments after 2002. 
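The scale of the budget-gap challenge described above follows directly from the figures cited; a minimal check, in millions of dollars (variable names are ours):

```python
gap_fy1998 = 494        # budget gap after the retroactive-labor adjustment
target_fy2002 = 185     # estimated excess railroad retirement payments in FY2002
prior_improvement = 60  # budget-gap improvement achieved over FY1995-1998

needed_cut = gap_fy1998 - target_fy2002
print(needed_cut)                          # 309
# Roughly 5 times the improvement of the previous 4 fiscal years:
print(needed_cut / prior_improvement > 5)  # True
```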
According to Amtrak officials, while generally accepted accounting principles require the recording of such spending as operating expenses, Amtrak funds progressive overhauls through its capital program and therefore believes that the costs for them should be counted as capital costs. If progressive overhauls are included in the calculation of the budget gap, the gap increases by $12 million in fiscal year 1998—from $549 million in fiscal year 1997 to $561 million in fiscal year 1998—and in fiscal year 1999 will be $560 million. Under its October 1998 strategic business plan, Amtrak plans to reach financial health by emphasizing business growth, that is, primarily by increasing revenues. Amtrak expects significant revenue increases from implementing new high-speed rail service between Boston and Washington, D.C., and expanding its express service (delivery of higher-value, time-sensitive goods). Amtrak also plans to increase its revenues and control costs by developing a market-based intercity route network that aligns its passenger service more closely with customer demand (adding trains to certain routes or starting new service where appropriate, for instance). Amtrak does not plan to eliminate any routes or services in fiscal year 1999 but has not made any long-term decisions about routes. (In 1997, 39 of Amtrak’s 40 routes were unprofitable when train, route, and system costs are included.) In addition, by developing and implementing service standards (such as improving service to passengers), Amtrak expects to increase ridership (and revenues) through higher-quality and more consistent service. Finally, Amtrak plans to contain costs primarily by reducing the costs of electric power in the Northeast Corridor and enhancing productivity in a number of ways throughout its system. Amtrak estimates that its business plan initiatives will result in net financial improvements of $1.6 billion for fiscal years 1999-2002. (See table 1.) 
In particular, it expects to begin obtaining most revenue increases and cost savings beginning in fiscal year 2000. For example, over the period covered by the plan, Amtrak expects that its initiative for express service will generate a cumulative net impact of about $60 million. Of this $60 million, Amtrak expects to obtain about $56 million between fiscal years 2000 and 2002. Amtrak also estimates that its new high-speed rail service, which will begin in fiscal year 2000, will have a $408 million net impact during the period. Over one-third ($631 million) of the total net impact of $1.6 billion is expected to occur in fiscal year 2002, the last year of the plan. Table 1 also shows that the expected financial impact from six key initiatives will account for nearly 60 percent of the expected net impact—$917 million. The remaining benefits come from hundreds of individual actions outlined in Amtrak’s business plan. Overall, Amtrak projects that if it achieves the financial benefits associated with these initiatives, including 100 percent of the $631 million in financial improvements it projects for fiscal year 2002, it will gradually reduce its reliance on federal operating assistance, and achieve operating self-sufficiency in 2002. All plans are subject to uncertainty and Amtrak’s estimates for six key initiatives reflect this uncertainty. First, Amtrak plans to align its service to better meet customer demand, referred to as implementing a market-based network. Amtrak expects to generate $105 million in net impact over the period by such actions as serving currently unserved markets that have good demand potential. According to Amtrak officials, for the most part this estimate was based on senior officials’ judgment of changes in revenues and expenses resulting from analysis of the potential for partnerships with states and local governments in certain transportation corridors. 
However, Amtrak did not supply us with any information on how it derived the $105 million amount. Second, Amtrak expects to generate another $105 million in net impact by implementing a variety of service standards designed to ensure a consistent, high-quality product. These service standards will be focused on encouraging employees to provide consistent, high-quality service; improving customer-to-staff ratios; addressing customers’ complaints and resolving them as quickly as possible; and instituting a service guarantee program (such as providing a transportation credit) if service does not meet established standards. Overall, Amtrak expects that these efforts will increase revenues by generating additional ridership and reduce operating costs by lowering employees’ absenteeism. However, the service standards had not been defined at the time the $105 million estimate was made. Instead, Amtrak officials told us that the $105 million estimate was based on extensive analysis completed by senior management, including benchmarking against corporations that had implemented similar types of programs, such as the United States Postal Service, Ritz Carlton, Sears, and Continental Airlines. Amtrak then estimated that it could have a net impact of $59 million per year from (1) a reduction in occasions in which customers will not ride Amtrak again as the result of poor, inconsistent service ($10 million per year); (2) fare increases justified by higher-quality, more consistent service ($20 million per year); (3) increases in employees’ productivity ($23 million per year); and (4) reductions in absenteeism ($6 million per year). An Amtrak official told us that Amtrak chose to be conservative, estimating $105 million in savings over the life of the 4-year plan rather than using the full $59 million per year. 
Third, Amtrak’s plan contains a broad category of undefined actions referred to as “undefined initiatives” and “planned management actions to be developed.” These categories represent $210 million in net impact for which Amtrak had not identified specific initiatives or developed any plan of action at the time the plan was approved. The amounts were placeholders to balance the yearly budgets. According to Amtrak officials, these initiatives represent the gap that Amtrak must fill even if it successfully implements all of its other business plan actions. Amtrak intends to achieve this net impact primarily through cost savings that it will identify on an ongoing basis. By June 1999, Amtrak officials had identified actions representing a net impact of about $49 million, reducing the dollar amount of actions yet to be defined to about $161 million. Fourth, Amtrak’s plan estimates $408 million in net impact from implementing high-speed rail service in the Northeast Corridor. This estimate was based on an extensive ridership forecast. However, in November 1998 the Department of Transportation’s Office of Inspector General questioned $192 million of the gross revenue projections for fiscal years 1999 through 2002. In particular, the Inspector General’s review indicated that Amtrak was too optimistic regarding the system’s ability to generate ridership in the early years of the forecast. While Amtrak disagreed with the Inspector General’s assessment of the expected gains in ridership in the early years, this type of disagreement highlights the inherent uncertainty in estimating revenues from high-speed rail service. Fifth, Amtrak estimates that its express service will result in a net impact of $60 million, which Amtrak officials stated was based on their assessment of the market potential for this service. Amtrak has made some initial steps in this area. 
For example, it has entered into a partnership with the United Parcel Service and four other carriers to provide time-sensitive express service generating an estimated $2.9 million in annual revenues (less than 1 percent of Amtrak’s estimate of revenues from express service over the period covered by the business plan). However, Amtrak is new to this area and does not yet have a track record on which to base its projections. In addition, Amtrak does not yet have long-term contracts to support much of the projected financial benefit. Furthermore, much of the expected benefit depends on Amtrak’s expanding its fleet of express equipment through acquisition, leasing, or other arrangements, most of which still need the approval of Amtrak’s Board of Directors. Thus, while it is possible that Amtrak may achieve its net revenue goal, many important actions remain to be taken. Finally, Amtrak plans to have net savings of $29 million from buying electric power in the Northeast Corridor at wholesale rates. Currently, Amtrak buys electricity at retail rates for its own use and for resale to commuter railroads owned by state and local governments. Its estimated cost savings were based on negotiations with a utility under which Amtrak would purchase power at a wholesale price. However, the Federal Energy Regulatory Commission denied a request to treat Amtrak as a government entity that would be exempt from the Federal Power Act’s restrictions on wholesale power purchases. Consequently, Amtrak now plans to seek enactment of legislation that would designate the railroad as a power wholesaler. Amtrak’s estimate of savings is contingent upon obtaining this legislation by September 30, 2000. In the meantime, Amtrak officials stated that Amtrak will help cut its electricity costs by using a competitive bid process allowed under deregulation, including “retail choice” programs in Pennsylvania, New York, Massachusetts, Rhode Island, and Connecticut for electric power purchases. 
However, this approach will not achieve Amtrak’s estimated $29 million in savings. Amtrak has been unable to achieve its planned budget gap in any of the last 4 years. Specifically, from fiscal year 1995 through fiscal year 1998, Amtrak’s budget gap was, in total, $285 million more than planned, as shown in table 2. This result occurred primarily because Amtrak’s expenses were significantly higher than planned. During the 4-year period, Amtrak’s revenues were $34 million less than planned, and expenses were $251 million more than planned. As a result, Amtrak’s actual budget gap was higher than it expected. However, the table also shows that the difference between the planned budget gap and the actual budget gap has been decreasing since fiscal year 1996. Moreover, through April of the current fiscal year, the budget gap is about $10 million less than what Amtrak had estimated for the first 7 months of fiscal year 1999. The table shows that Amtrak’s revenues exceeded planned amounts for 2 of the 4 years. Fiscal year 1998 revenues were significantly lower than planned (by $186 million), primarily because of lower than expected express service business. In contrast, expenses were greater than planned in fiscal years 1995 through 1997 but much lower than planned (by $144 million) in fiscal year 1998. The better than planned results were primarily due to lower than expected train operation costs, such as lower than expected fuel costs. If Amtrak experiences difficulties in controlling expenses over the next 4 years, it will have to generate significantly more revenues than planned in order to achieve operating self-sufficiency. Current and planned annual federal funding and reforms contained in the Amtrak Reform and Accountability Act of 1997 are likely to have little short-term impact on improving Amtrak’s overall financial condition. 
In the short term, continued annual federal funding will help Amtrak cover a significant portion of its operating expenses for maintenance and help meet its cash flow needs. However, in the long term, using these funds for maintenance expenses will limit the use of funding for capital investments that would help Amtrak reduce its costs and increase its revenues in the future. Finally, although the act allowed Amtrak greater flexibility in its business operations, these reforms are not likely to provide immediate financial benefits. According to its October 1998 strategic business plan, Amtrak ultimately plans to use $559 million (about 92 percent) of its $609 million fiscal year 1999 capital appropriation to pay for the maintenance of equipment—a use specifically referred to in the conference report accompanying the appropriation. Most of the remaining $50 million will be used to pay principal on its capital debt. Amtrak plans to continue using a large portion of the appropriations that it expects to receive from fiscal year 2000 through fiscal year 2002 for maintenance expenses (including progressive overhauls)—in total, about $1 billion. This $1 billion represents nearly two-thirds of the $1.6 billion Amtrak expects to receive through annual federal capital appropriations. The short-term benefits of using substantial portions of its capital appropriations for maintenance carry long-term consequences. Capital investments play a critical role in supporting Amtrak’s business plan and ultimately in building and maintaining Amtrak’s viability. However, as we reported last year, Amtrak had a $500 million shortfall between its estimated capital needs and available funding. Using federal funds for maintenance will limit the funds available for needed capital investments that would help Amtrak reduce its costs and increase its revenues in the future. 
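The proportions cited above can be verified quickly, in millions of dollars (a sketch with our own variable names, not Amtrak’s budget categories):

```python
fy1999_appropriation = 609       # single capital appropriation for FY1999
maintenance_fy1999 = 559         # planned use for maintenance of equipment
maintenance_fy2000_02 = 1000     # planned maintenance use, FY2000 through FY2002
appropriations_fy2000_02 = 1600  # expected capital appropriations, FY2000-2002

print(round(maintenance_fy1999 / fy1999_appropriation, 2))  # 0.92, "about 92 percent"
print(maintenance_fy2000_02 / appropriations_fy2000_02)     # 0.625, "nearly two-thirds"
```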
By using its federal appropriations to cover maintenance expenses, Amtrak may widen this gap between its stated capital needs and expected available funds. However, in fiscal year 1999, Amtrak plans to use $758 million of the $2.2 billion it received through the Taxpayer Relief Act of 1997 for capital improvements in addition to the $558 million its Board of Directors approved for capital investments in fiscal year 1998. Amtrak does not yet have a capital plan detailing how it will invest the remainder of these funds. Amtrak has pledged to ultimately use all of the $2.2 billion for high-return capital initiatives and for certain mandatory and tactical projects. In the short term, Amtrak plans to temporarily use a significant portion of these funds for certain authorized expenses for equipment maintenance because (under an agreement with the administration) the railroad will not draw down all of its fiscal year 1999 federal capital appropriation in the year in which the funds are appropriated. Amtrak expects that as its revenues increase as a result of its strategic business plan initiatives, it will repay the borrowed Taxpayer Relief Act funds. Finally, after 2002, questions about whether Amtrak is truly operationally self-sufficient would arise if Amtrak’s capital appropriations are made available and used for maintenance expenses, which are operating expenses. On the other hand, if Amtrak is not permitted to continue to use appropriated funds for maintenance, then it would have to look for additional ways to increase revenues and reduce expenses. The Amtrak Reform and Accountability Act of 1997 was intended to improve Amtrak’s financial condition by making reforms to Amtrak’s operations so that the railroad could better control and manage its costs. 
Among the act’s reforms aimed at improving Amtrak’s financial condition were provisions that (1) eliminated, as of May 31, 1998, existing statutory and contractual labor protection arrangements, which provided up to 6 years of compensation for employees who lost their jobs because of the discontinuance of service on a route or other covered actions, and required negotiation over new arrangements; (2) repealed the statutory ban on contracting out work that would result in employee layoffs (except for food and beverage service, which could already be contracted out), incorporated the ban into existing collective bargaining agreements, and made contracting out subject to negotiation by November 1999; and (3) placed a $200 million cap on the aggregate amount that Amtrak and others must pay rail passengers for all claims (including claims for punitive damages) arising from a single accident or incident. As we reported in 1998, the reforms contained in the act may have little, if any, immediate effect on Amtrak’s financial performance for several reasons. First, regarding labor protection arrangements, after 10 negotiating sessions, Amtrak and its unions agreed to submit the matter to binding arbitration. As of early June 1999, the panel of arbitrators had not reached a decision. Second, Amtrak officials do not expect to address contracting out work unrelated to food and beverage service before November 1, 1999. The officials believe the repeal of the ban may provide long-term flexibility, including flexibility in union negotiations and in controlling costs, but at this time cannot predict what changes may result from these negotiations and what the effect on costs may be. Finally, Amtrak believes the limit of $200 million per accident for rail passenger liability claims may have a limited financial effect because this cap is significantly higher than amounts Amtrak has historically paid on such claims. 
This reform may not result in measurable financial savings as much as in additional flexibility in negotiating with labor unions and in addressing the freight railroads’ concerns over such issues as liability payments. The act also made other changes that have the potential for a significant impact on Amtrak’s future. For example, it established an independent council—the Amtrak Reform Council—to evaluate Amtrak’s performance and make recommendations for cost containment, productivity improvements, and financial reforms. If, at any time more than 2 years after the enactment of the act and implementation of a financial plan for operating within authorized funding levels, the Council finds that Amtrak is not meeting its financial goals or that it will require operating funds after December 2002, then the Council is to submit to the Congress, within 90 days, an action plan for a restructured national intercity passenger rail system. In addition, if the above events occur, Amtrak is required to develop and submit an action plan for its liquidation. Amtrak has focused its strategic business plan on the near-term goal of becoming operationally self-sufficient by 2002—a goal established by the administration and the Congress. Amtrak’s plan is ambitious; and, to its credit, it is currently somewhat ahead of the plan’s financial goals. Yet the overwhelming bulk of the expected financial benefits of the plan are still to come—with most to be achieved in the final year of the plan. Our concerns are two-fold. First, several aspects of the plan are subject to considerable uncertainty, including, but not limited to, identifying over $160 million in productivity and other improvements during the remaining 3 years of the plan. Second, Amtrak has a history of not meeting its financial goals, and the current 4-year plan anticipates achieving about 5 times as much in financial improvements as Amtrak was able to achieve through its business plans over the previous 4 years. 
We recognize that all plans by their very nature are subject to uncertainty. However, given the uncertainties in the current plan, Amtrak’s history of missing financial goals, and the magnitude of the savings still to be achieved, it is difficult to be confident that Amtrak will become operationally self-sufficient within the next 3 years. The stakes are high: The Congress gave Amtrak until the end of fiscal year 2002 to reach operational self-sufficiency and required that plans for restructuring and liquidating Amtrak be prepared if the railroad does not meet this goal. We provided Amtrak and the Federal Railroad Administration within the Department of Transportation copies of a draft of this report for their review and comment. We met with Amtrak officials, including the Vice-President for Government and Public Affairs and the Controller. In general, Amtrak believed that the draft report contained inappropriate analyses and mischaracterized how Amtrak derived selected expected financial benefits in its strategic business plan. Amtrak believes that the preferred measure of progress toward achieving operating self-sufficiency is not net loss but rather its “budget gap,” an Amtrak financial measure that excludes expenses funded from its capital program. Amtrak apparently misunderstood the purpose of our work. As stated in the draft report, the objective of this portion of our work was to assess Amtrak’s financial performance in 1998. The work was not limited to assessing progress in meeting its goal of operational self-sufficiency. Consequently, a discussion of financial performance that is limited to Amtrak’s budget gap would be inappropriate and incomplete. We have clarified the objective and the discussion of this topic in the report. Amtrak also disagreed with our inclusion of expenses for progressive overhauls in our discussion of Amtrak’s progress in achieving operational self-sufficiency. 
Amtrak stated that while generally accepted accounting principles require Amtrak to record such spending as operating expenses, it funds progressive overhauls through its capital program and therefore believes that they should be counted as capital costs. As a result, in Amtrak’s view, the costs of progressive overhauls would be excluded from the calculation of Amtrak’s progress toward achieving operational self-sufficiency by 2002. As discussed in our report, generally accepted accounting principles consider progressive overhaul expenses to be operating expenses. As a result, we have not revised how these costs are categorized. We have added to this report Amtrak’s rationale for excluding progressive overhaul expenses from its budget gap and show the impact of both including and excluding them. Amtrak stated that we did not recognize that the higher net loss in fiscal year 1998 was partially the result of higher depreciation expenses resulting from investments and that these investments will have positive impacts for ridership and revenues in the future. We agree and have included information regarding the impact that Amtrak’s capital investments have had on its operating expenses and net loss. We have also added a discussion of the important role that these investments will have on Amtrak’s ability to increase revenues in the future. Amtrak officials stated that our analysis of actual versus planned financial results for fiscal year 1998 was inappropriate because we used Amtrak’s original strategic business plan issued in September 1997 rather than its revised plan issued in March 1998. They stated that the revised March plan is a better benchmark to judge its fiscal year performance because it reflects business changes resulting from the enactment of the Amtrak Reform and Accountability Act and the Taxpayer Relief Act in 1997, as well as other factors, such as significant management changes. We disagree. 
We believe that the most appropriate benchmark for evaluating yearly performance is the plan approved at the beginning of the fiscal year. Revising a plan 6 months into a fiscal year significantly reduces the uncertainty inherent in preparing an initial estimate of performance. In addition, the March 1998 plan was an exception—Amtrak typically produces a plan in September or October of each year. Finally, while we agree that the enactment of the two laws and Amtrak’s change in leadership were significant events for Amtrak, the primary financial revisions contained in the March 1998 plan were reductions in revenues associated with Amtrak’s mail and express service initiatives. These reductions were primarily due to revised assumptions about the market for express service, rather than a direct result of the above mentioned events. Amtrak also objected to our characterization of how it derived estimates for the expected financial benefits associated with the initiatives to (1) implement service standards and (2) align its route network to meet customer demand. Amtrak stated that the estimates were based on extensive analyses completed by senior management officials and included benchmarking with other service providers. We believe that the characterization in our draft report was wholly consistent with the information that we obtained from top financial officials and others within Amtrak. In commenting on our draft report, Amtrak officials supplied us with a rationale for how they derived the estimate for financial benefits associated with implementing service standards. We have added this material to our report. The officials did not supply any additional information on how they derived the estimate for the expected financial benefits associated with aligning Amtrak’s route network to meet customer demand. 
Based on the additional information received, we revised our report to characterize how Amtrak developed its expected financial benefits as using “professional judgment” rather than making “best guesses.” Finally, Amtrak officials offered a number of technical and clarifying comments that we incorporated throughout the report, where appropriate. In commenting on our draft report, the Department of Transportation stated that when the goal of achieving operational self-sufficiency was established, the administration understood that meeting the goal would not be easy. (See app. I.) It believes that Amtrak’s strategic business plan provides a credible path for achieving operational self-sufficiency. The Department also stated that it believes Amtrak is moving in the right direction and is currently ahead of its financial targets identified in the corporation’s strategic business plan. It stated that our report should recognize Amtrak’s increased investment in traditional capital projects. As discussed above, we have added this information to our report. The Department also commented that the Taxpayer Relief Act of 1997 authorizes Amtrak to use Taxpayer Relief Act funds for some maintenance activities. Although the draft report provided to the Department included this fact, we have added to our report a further reference to this allowed use of Taxpayer Relief Act funds. To determine the status of Amtrak’s financial condition, we reviewed its fiscal year 1998 annual report, October 1998 strategic business plan, and fiscal year 2000 legislative report and federal grant request. We also interviewed Amtrak’s Chief Financial Officer and other financial systems officials. To obtain a historical perspective on Amtrak’s financial condition, we also reviewed Amtrak’s annual reports for fiscal years 1994 through 1997. 
To provide information on Amtrak’s current strategic plan for obtaining operating self-sufficiency, we reviewed its current and previous strategic business plans and the Department of Transportation’s Office of Inspector General’s Summary Report on the Independent Assessment of Amtrak’s Financial Needs Through Fiscal Year 2002. We also discussed the current strategic business plan with a variety of Amtrak officials, including officials in its Intercity and Northeast Corridor strategic business units and Amtrak’s Chief Financial Officer. We did not independently verify the accuracy of Amtrak’s financial data in its current strategic business plan. Finally, to provide information on the extent to which federal funding and recently enacted legislative reforms will help Amtrak resolve its financial problems, we first reviewed the Amtrak Reform and Accountability Act of 1997, the Taxpayer Relief Act of 1997, and Amtrak’s fiscal year 1999 appropriation. We then discussed the likely impact of these acts with Amtrak officials. We also reviewed Amtrak’s proposed capital plan and interviewed Amtrak officials about its contents. We conducted our review from January 1999 through June 1999 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will send copies of this report to interested congressional committees; George D. Warrington, the President and Chief Executive Officer of Amtrak; the Honorable Rodney E. Slater, the Secretary of Transportation; the Honorable Jolene M. Molitoris, the Administrator of the Federal Railroad Administration; the Honorable Jacob J. Lew, the Director of the Office of Management and Budget; and Gil Carmichael, the Chairman of the Amtrak Reform Council. We will also make copies available to others on request. 
If you or your staff have any questions about this report, please call me at (202) 512-3650. Key contributors to this report were Ruthann Balciunas, Catherine Colwell, David Lichtenfeld, and James Ratzenberger.

The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

Orders by mail:
U.S. General Accounting Office
P.O. Box 37050
Washington, DC 20013

Orders in person:
Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000, by using fax number (202) 512-6061, or by TDD (202) 512-2537. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO followed up on its report on Amtrak's financial performance, focusing on: (1) Amtrak's overall financial performance in fiscal year (FY) 1998; (2) the prospects for Amtrak to meet its financial goals for operating self-sufficiency outlined in its most recent strategic business plan; and (3) the extent to which current and anticipated federal funding and recently enacted legislative reforms aimed at helping Amtrak better control its costs are likely to help improve its financial condition. GAO noted that: (1) Amtrak's overall losses increased in FY 1998 after several years of improvement; (2) in FY 1998, Amtrak's net loss was $930 million, the largest loss in the last 10 years; (3) by comparison, Amtrak's net loss in FY 1997 was $762 million; (4) Amtrak has made progress in reducing its reliance on federal operating support; (5) however, between now and 2002, it needs to achieve about 5 times as much in financial improvements as it has been able to achieve over the past 4 years to reach operational self-sufficiency; (6) Amtrak's strategic business plan, approved by its Board of Directors in October 1998, estimates that ongoing and planned initiatives will result in a cumulative net impact of $1.6 billion from FY 1999 through FY 2002, primarily through increases in revenues as a result of taking business plan actions; (7) however, uncertainty surrounds Amtrak's ability to achieve this net impact and to reach operational self-sufficiency by FY 2002; (8) furthermore, Amtrak's expectations to increase revenues through other initiatives are based on critical assumptions that have yet to be tested in the marketplace; (9) current and anticipated annual federal funding and recently enacted reforms aimed at helping Amtrak better control its costs will likely have little short-term impact on improving its overall financial condition; (10) Amtrak plans to use nearly $1 billion of the $1.6 billion it expects to receive in federal 
capital appropriations over the next 3 fiscal years for maintenance rather than capital improvements; (11) while maintenance is important for preserving assets and Amtrak's FY 1999 capital appropriation could be used for equipment maintenance, Amtrak's plans to continue to use capital appropriations in this way means it will forgo or delay capital investment projects that could increase future revenues and reduce future costs; (12) however, Amtrak's Board of Directors has approved plans for $1.3 billion of capital improvements from the $2.2 billion made available to it through the Taxpayer Relief Act of 1997; (13) in addition, while the Amtrak Reform and Accountability Act of 1997 provided Amtrak greater flexibility in its business operations, the reforms provide few financial benefits in the short term; and (14) GAO found this condition continues to exist largely because Amtrak and its unions have not completed negotiations over labor protection arrangements and reforms for contracting out work.
Historically, federal outlays and receipts generally have been reported on a cash basis. That is, receipts are recorded when received and outlays are recorded when paid without regard to the period in which the taxes and fees were assessed or the costs resulting in the outlay were incurred. This has an advantage in that the deficit (or surplus) closely approximates the cash borrowing needs (or cash in excess of immediate needs) of the government. However, over the years analysts and researchers have raised concerns that the current cash- and obligation-based budget does not adequately reflect the cost of some programs—such as federal credit or insurance—in which the government makes a commitment now to incur a cost, but some or most of the cash flows come much later. This means that for some programs the current cash- and obligation-based budget does not recognize the full costs up front when decisions are made or provide policymakers the information to compare the full costs of a proposal with their judgment of its benefits. Programs such as federal employee pensions, retiree health care, and environmental liabilities are examples where the cash basis of accounting does not represent the government’s full commitments. Environmental liabilities are the result of federal operations that create hazardous waste that federal, state, or local laws and/or regulations require the federal government to clean up. Because these cleanup costs are not usually paid until many years after the government has committed to the operation creating the waste, policymakers have not been provided complete cost information when making decisions about undertaking the waste-creating operation. Although all agencies are not yet in compliance, current federal accounting standards require agencies to estimate and report in their financial statements their liability for cleanup costs when they are deemed probable and measurable. 
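The timing difference between cash-basis and accrual recognition described above can be made concrete with a small worked example. This is our own illustrative sketch, not a figure from the report: all dollar amounts, the asset life, and the cleanup cost are hypothetical.

```python
# Illustrative sketch (hypothetical figures): under cash-basis budgeting, a
# cleanup cost appears only in the year it is actually paid, long after the
# decision to operate the waste-creating asset. Under accrual recognition,
# the cost is spread over the years in which it is incurred.

ASSET_LIFE = 5        # years the asset operates (assumed)
CLEANUP_COST = 100.0  # estimated cleanup cost, paid after retirement (assumed)

# Cash basis: nothing recorded until the cleanup bill is paid in year 6.
cash_basis = [0.0] * ASSET_LIFE + [CLEANUP_COST]

# Accrual basis: the cost is recognized as it accrues over the asset's life.
accrual_basis = [CLEANUP_COST / ASSET_LIFE] * ASSET_LIFE + [0.0]

for year, (cash, accrual) in enumerate(zip(cash_basis, accrual_basis), start=1):
    print(f"Year {year}: cash-basis cost = {cash:6.1f}, accrual cost = {accrual:6.1f}")

# Total recognized cost is identical; only the timing of recognition differs.
assert sum(cash_basis) == sum(accrual_basis) == CLEANUP_COST
```

The point of the sketch is that the total cost is the same either way; what changes is whether decision makers see any of it in the years when the commitment is being made.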
Traditionally, budget guidance has required agencies to estimate the funds expected to be obligated for cleanup activities during the budget year in which the funds are needed. However, in recent years OMB also has issued guidance for agencies to estimate life-cycle costs when purchasing capital assets. Among the items to be included in the total amount of these life-cycle costs are decommissioning and disposal costs. The life-cycle cost estimates are reported to OMB in budget Exhibit 300 and do not separately break out cleanup and disposal costs. The exhibits are for OMB’s informational purposes only; they are not included in the President’s budget request or agency’s budget justification provided to Congress. Department of Energy (DOE) and Department of Defense (DOD) officials told us that the cleanup portion of these total costs has traditionally not been separated out or identified at the time of purchase. This is because estimates developed at that time were very preliminary, often based only on a percentage of total costs rather than specific unit costs. To examine ways that budgeting might be improved for environmental liabilities, we focused on three key questions: (1) What are the federal government’s reported environmental liabilities? (2) How are environmental liabilities currently valued for financial statements and budgeted at selected programs within DOD and DOE? and (3) How could budgeting for these environmental liabilities be improved? To determine the federal government’s reported environmental liabilities, we extracted data from agencies’ fiscal year 2001 consolidated balance sheets. Because this analysis showed that about 98 percent of the government’s reported environmental liabilities were associated with DOD and DOE, we focused our review on the practices of these two departments. We reviewed published reports, related guidance, and budget and financial statement documentation from each agency. 
We also interviewed DOD, DOE, and OMB staff to discuss current budget practices. To develop alternative approaches to improve budgeting for environmental liabilities, we discussed ideas with staff from DOD, DOE, OMB, and CBO. We also met with appropriations subcommittee staff with jurisdiction over DOD and DOE to discuss the type of information that they would find most helpful. We analyzed the pros and cons of the approaches based on the extent to which they would (1) provide meaningful, full-cost information to decision makers up front, (2) provide disincentives for artificially low cost estimates, and (3) present implementation issues, such as additional administrative burdens for agencies or increased complexity to the budget and appropriations process. Finally, to understand how private organizations provide for environmental cleanup, we conducted limited research of private sector budgeting practices. However, little information was available about up-front decision making. Our work was done in Washington, D.C., in accordance with generally accepted government auditing standards. We provided a draft of this report to the Secretary of Defense, the Secretary of Energy, and the Director of OMB. Comments are summarized in the “Agency Comments” section. Nearly all of the $307 billion in environmental liabilities reported for fiscal year 2001 was associated with DOD and DOE. About 78 percent of these liabilities were associated with DOE and represent the environmental legacy resulting from the production of nuclear weapons. The 21 percent associated with DOD is primarily for environmental restoration of military installations and disposal of nuclear materials. The remaining environmental liabilities associated with other federal agencies include such things as replacement of underground storage tanks, asbestos removal, and lead abatement. Some of this remaining 1 percent will be paid out of Treasury’s judgment fund. 
DOD and DOE manage environmental cleanup quite differently: DOD’s decentralized activities are managed within the individual services, at the program level, while DOE’s activities are centralized within its Environmental Management (EM) program. For example, DOD considers environmental liabilities in two categories: (1) disposal and (2) environmental restoration/cleanup. Army’s chemical weapons and Navy’s nuclear-powered carriers, ships, and submarines dominate DOD’s disposal liabilities. Funding for disposal is provided to the Army, Navy, and Air Force Operation and Maintenance (O&M) accounts. Restoration/cleanup activities are largely addressed through the Defense Environmental Restoration Program (DERP), which is funded through five environmental restoration accounts for Army, Navy, Air Force, Formerly Used Defense Sites (FUDS), and Defense-wide. The funds in these accounts are then transferred to the service levels’ O&M budgets. In contrast, within DOE, facilities that have reached the end of their useful lives and require cleanup typically are transferred to EM, along with some additional funds for surveillance and maintenance. EM also receives budget authority directly through an appropriation. Thus, budgeting and funding for cleanup is almost entirely handled by EM, not individual program offices. EM’s program emphasis is on site closure and project completion. Its activities include environmental restoration, waste management, and nuclear material and facility stabilization. Figures 1 and 2 illustrate the flow of cleanup funds for these two departments. Current budget guidance and accounting standards both require agencies to estimate cleanup and disposal costs. However, neither requires that these costs be separately estimated for decisions when assets are being considered for purchase—before the government is legally committed to paying these costs. 
While information about private sector decision making on these costs is limited, at least some organizations set aside funds to address these future cleanup and disposal costs. Agencies have little or no budgetary incentive to develop estimates of future cleanup costs. With respect to primary budget data, agencies do not reflect associated cleanup costs in their budget requests for new waste-producing assets. Funding for such cleanup costs is not requested until many years later when the waste produced is ready to be cleaned up or disposed of. Budget guidance does require agencies to estimate cleanup costs as part of total life-cycle costs when requesting funds for new assets. However, agencies are not required to specifically break out the cleanup portion of these costs. DOD and DOE officials told us that separating out the cleanup/disposal component from total life-cycle costs would be relatively difficult because their estimates of cleanup costs are very preliminary. Often, a percentage of the purchase price instead of a specific unit cost is used as the cost estimate. Moreover, they noted that future, unknown changes in regulatory requirements and technology make it difficult to develop what they believe to be reasonable and credible cost estimates at the time an asset is acquired. However, since estimates for retiring assets are being made under today’s regulatory requirements and technology, the same methodology might be used for preliminary estimates with respect to new assets. This would permit comparisons between or across different assets. Over time, as laws and technology change, periodic cleanup cost reestimates could be made. Clear definitions for hazardous substances also may need to be resolved to ensure that reasonable estimates are developed. For example, the Federal Accounting Standards Advisory Board (FASAB) defines hazardous wastes in relatively broad terms (see footnote 1) for accounting purposes. 
However, the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA), which requires the cleanup of waste sites, provides a substantially more detailed definition. While accounting standards promote an earlier recognition of environmental liabilities than does the budget, they do not call for estimates of environmental liabilities before an acquisition decision is made because they recognize these cleanup costs only after a transaction has occurred and an asset is put into service. Given that these conditions are met, agencies must estimate the environmental liabilities associated with all existing assets. Despite this, not all agencies comply with accounting standard requirements to estimate the environmental liabilities associated with all of their assets. For example, DOD typically records the liabilities associated with assets for which cleanup or disposal is imminent. DOD’s inability to comply with requirements for environmental liabilities was one of several reasons why independent auditors were not able to render an opinion about DOD’s fiscal year 2001 financial statements. Absent budgetary incentives to estimate future environmental liabilities, these cost estimates will not be developed as assets are considered for purchase—the time when decision makers still have an opportunity to judge whether the government should commit to these costs. Data about how non-federal organizations consider environmental liabilities when planning to purchase assets or start new projects were largely unavailable. However, there are cases where companies set aside funds for future cleanup costs. 
For example, the Nuclear Regulatory Commission (NRC) requires private utilities to accumulate the funds necessary to decommission their nuclear power plants, and most utilities have established sinking funds so that the decommissioning funds are accumulated over the operational life of a nuclear power plant as part of the cost charged to customers for the electricity they use. With the deregulation of electric utilities and the resultant industry restructuring, we recently reported that in most of the requests to transfer licenses to own or operate nuclear power plants approved by NRC, the financial arrangements have either maintained or enhanced the assurance that adequate funds will be available to decommission those plants. For example, projected decommissioning funds were generally prepaid by the selling utility. Also, an Environmental Protection Agency-contracted study recommended that a Canadian hydroelectric company establish a liability fund to accumulate funds to finance asset removal, decommissioning, irradiated fuel disposal, and low-to-intermediate radioactive waste disposal. Alternative approaches to promote more complete consideration of the full costs of environmental cleanup and disposal associated with the acquisition of new assets fall along a continuum from provision of supplemental information to accrual of those costs in budget authority up front, as assets are acquired. We explored three approaches along this continuum ranging from the relatively simple one of providing more information but making little other change to current budgeting, to a more complicated one involving significant changes to what is included in primary budget data. The approaches along this continuum represent the degree of certainty that the costs will be considered in decision making. Figure 3 summarizes the three approaches along the continuum. 
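The sinking-fund arrangement mentioned in the NRC example above follows standard annuity arithmetic: a level annual contribution, invested at some rate of return, grows to the decommissioning cost by the end of the plant's operating life. The sketch below is our own illustration of that arithmetic; the cost, rate, and plant life are assumed round numbers, not NRC data.

```python
# Hypothetical sketch of sinking-fund arithmetic for decommissioning:
# find the level annual contribution that accumulates to a target future
# cost, then verify by simulating the fund year by year.

def sinking_fund_payment(future_cost, rate, years):
    """Level end-of-year payment that accumulates to future_cost after `years`."""
    if rate == 0:
        return future_cost / years
    return future_cost * rate / ((1 + rate) ** years - 1)

def accumulate(payment, rate, years):
    """Fund balance after making `payment` at the end of each year."""
    balance = 0.0
    for _ in range(years):
        balance = balance * (1 + rate) + payment
    return balance

cost = 300_000_000   # assumed decommissioning cost ($)
rate = 0.05          # assumed annual return on the fund
life = 40            # assumed operating life of the plant (years)

pmt = sinking_fund_payment(cost, rate, life)
print(f"Annual contribution: ${pmt:,.0f}")
print(f"Fund at retirement:  ${accumulate(pmt, rate, life):,.0f}")
```

Because the contributions earn a return over forty years, the annual charge to ratepayers is only a small fraction of the eventual decommissioning bill, which is what makes the accrual affordable as part of the cost of electricity.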
The first approach would be to report long-term environmental liability costs associated with new assets as supplemental information along with the budget authority and outlay amounts requested in the budget. For example, the program and financing schedules within the President’s budget appendix could be expanded to report these associated costs by budget account or program. This would enable those being asked to make a decision to see the full cost information along with currently requested funds. Although the estimates provided in the supplemental information would not be precisely correct, they would clearly be closer to correct than the current implication of no cost. If a running tally of total environmental liabilities is desired, periodic reestimates would be needed. A second approach would move beyond providing supplemental information to establishing budget process mechanisms to require explicit disclosure and prompt consideration of the full costs of the environmental liability associated with a proposed asset acquisition. Thus, Congress could revise its rules to permit a point of order against legislation that does not disclose estimates for environmental liabilities associated with the acquisition of new assets to be funded in the bill. This would have the effect of requiring cleanup cost estimates to be made, either by the executive branch or CBO, so that the estimates could be considered. At the other end of the continuum is the more comprehensive approach of accruing amounts for environmental liabilities associated with new assets in any requested budget authority for new assets. This approach represents the largest departure from current budgeting practices. Along these lines, OMB is developing a legislative proposal to require programs that generate hazardous waste to “pay the accruing cost to clean up contaminated assets at the end of their useful life. 
These payments would go to funds responsible for the cleanup.” Implementation of an approach that would include budget authority for environmental liabilities would require development of new budgeting mechanisms. The provision to accumulate budget authority over an asset’s life would require a means of “fencing off” the budget authority to ensure that it is actually used for cleanup. Also, since no such amounts were set aside for existing assets, it would be necessary to continue financing the cleanup of existing assets while implementing the new approach for new assets. One way to do this is to use a pair of accounts—a liquidating account and a cleanup fund account—in each department involved in budgeting for the cleanup costs. The liquidating account would obtain discretionary budget authority for the past share of cleanup costs of assets already in operation and for the cleanup costs of retired assets. It would pay the past share of cleanup costs for operating assets to the cleanup fund and would conduct or contract for the cleanup of assets no longer in use at the inception of this new approach. Given technological and other changes, regular reestimates of cleanup costs would be necessary. The cleanup fund account would obtain budget authority from two sources: (1) from the liquidating account for the past share of the cleanup cost for assets that are in operation when the new approach is established and (2) for new assets, from programs that operate assets that generate cleanup needs. The cleanup fund account would receive annual accruing cost payments from programs based on the estimated (and reestimated) cost of cleanup for all operating assets—those purchased after the new approach is implemented and those already in service. These payments would be a required part of the discretionary appropriations for running any program that generates cleanup costs. 
When needed, the cleanup fund accounts could also request additional budget authority for the assets in operation at its inception. These appropriations could be made to the liquidating account and paid to the cleanup fund account when the assets are ready for cleanup. Once in the cleanup fund account, the budget authority from the programs and liquidating accounts could be permanent, indefinite authority available for cleanup, subject only to the usual apportionment process. Figure 4 below illustrates one possible flow of funds through accounts. Each of the three approaches described offers both potential benefits and challenges to consider. All three would be likely to improve the quality of cleanup estimates. Although agencies are required to develop these estimates for financial statement purposes, they are not developed until after the asset is purchased. Also, not all agencies have completely complied with financial accounting standards. For example, in December 2001, we reported that DOD was not estimating and reporting liabilities associated with a significant portion of property, plant, and equipment that was no longer being used in its operations. Moreover, DOD’s financial statements did not provide cleanup cost information on all of its closed or inactive operations known to result in hazardous wastes. In addition, in 1997 and 1998 we issued a series of reports on DOD environmental liabilities that were not being reported, even though they could be reasonably estimated. Each of the three approaches would result in decision makers having information about costs and benefits of a proposed acquisition while there is still the opportunity to make a choice—before the government actually incurs an environmental liability. Since the cleanup costs for any asset will become a future claim on federal resources regardless of whether these costs were considered at the outset, good budgeting principles call for up-front consideration of these costs. 
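The paired-account mechanism described above can be sketched as a simple simulation: a program pays annual accruing costs for a new asset into the cleanup fund, the liquidating account transfers the past share for an asset already in operation, and the fund's fenced-off balance is then drawn down when cleanup occurs. This is our own minimal illustration of the flow of funds, not the report's or OMB's design; all amounts and names are hypothetical.

```python
# Minimal sketch (hypothetical figures) of the paired liquidating/cleanup-fund
# accounts: accruing payments from operating programs plus a past-share
# transfer accumulate fenced-off budget authority that pays for cleanup.

class CleanupFund:
    def __init__(self):
        self.balance = 0.0

    def receive_accrual(self, amount):
        """Annual accruing-cost payment from a program's appropriation."""
        self.balance += amount

    def receive_past_share(self, amount):
        """Transfer from the liquidating account for a pre-existing asset."""
        self.balance += amount

    def pay_cleanup(self, cost):
        """Draw down fenced-off authority; the fund cannot overspend."""
        assert cost <= self.balance, "fund short of fenced-off authority"
        self.balance -= cost

fund = CleanupFund()
accrual = 10.0       # annual accruing cost for a new asset (assumed)
years = 8            # remaining operating life of the new asset (assumed)
past_share = 30.0    # past share for an asset already in operation (assumed)

for _ in range(years):
    fund.receive_accrual(accrual)    # part of each year's discretionary request
fund.receive_past_share(past_share)  # routed through the liquidating account

fund.pay_cleanup(accrual * years + past_share)  # full estimated cleanup cost
print(f"Remaining balance: {fund.balance:.1f}")
```

The sketch shows why reestimates matter: if the true cleanup cost turns out higher than the accumulated accruals plus the past share, the `pay_cleanup` guard fails and additional budget authority would have to be requested.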
Given that agencies are not currently experienced in separately estimating cleanup/disposal costs before assets are purchased, reasonable and credible estimates may take time to develop. This, however, is not an insurmountable issue. We have reported on numerous occasions that environmental liabilities can be estimated and have pointed out how estimation methodologies can be improved. For example, in December 2001 we recommended that, among other things, DOD correct real property records, develop and implement standard methodologies for estimating related cleanup costs, and systematically accumulate and maintain the site inventory and cost information needed to report this liability. Of the three approaches described, the supplemental information and the budget process mechanism approaches would be easiest to implement and could be done separately or together. Neither requires the enactment of budget authority and so would not increase reported budget totals. Supplemental reporting requirements would be the easiest to implement since OMB could require it under OMB’s current authority. However, unless agencies see that the new supplemental information is used in decision making, they may have less incentive to develop meaningful estimates. The budget process mechanism approach would increase the perceived importance of these estimates by permitting a point of order that could block legislation lacking appropriate cost information. For example, unfunded mandates legislation permits a point of order to be raised against proposed legislation containing significant intergovernmental mandates if a CBO estimate of the cost of the mandate has not been published in the committee report or the Congressional Record. Unlike supplemental reporting alone, the budget mechanism approach has the potential to promote improved estimates because it could present members an opportunity to challenge legislation without appropriate cost information. 
Implementing a budget process approach with a point of order would require either an amendment to the Congressional Budget Act of 1974 or a change to committee rules. The third approach, accruing budget authority over the life of the asset, represents the largest departure from current budgeting practices. By requiring that agencies obtain budget authority before acquiring new assets, this approach would ensure consideration of environmental cleanup costs before an asset is acquired. Such an approach would require legislation. If Congress and the Administration agree to take such action, it would ensure that each program's costs are fully reflected in program budgets. Requiring that agencies accrue budget authority for cleanup costs would likely increase the attention paid to improving the quality of estimates. All in all, given the current quality of agency estimates and significant implementation issues, such an approach may best be viewed as something to be considered in the future. Beyond the issue of developing reasonable and credible estimates early on, this third approach also would present administrative and structural challenges such as developing mechanisms to ensure that (1) budget authority provided for cleanup is adequately fenced off for cleanup, (2) agencies adequately track and manage that budget authority, and (3) reestimates provide positive incentives to reflect the best approximation of the government's total environmental liabilities. When demand for current funding is great, fencing off budget authority for future use can be a challenge. One way to address this would be to have payments into the cleanup fund come from discretionary appropriations, but once in the fund, the budget authority would become permanent, subject only to the usual apportionment process. Providing higher levels of budget authority now for expenses that may not be paid until well into the future may be difficult. 
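To make the accrual mechanics concrete, the arithmetic of spreading an estimated cleanup cost over an asset's service life can be sketched as follows. This is an illustration only, not part of the report: the dollar figures, the simple straight-line (undiscounted) method, and the function names are all hypothetical.

```python
# Illustrative sketch only: straight-line (undiscounted) accrual of an
# estimated cleanup cost over an asset's service life. All figures and
# names are hypothetical, not taken from the report.

def annual_accrual(estimated_cleanup_cost: float,
                   service_life_years: int) -> float:
    """Budget authority to set aside each year so the cleanup fund is
    fully financed by the time the asset is retired."""
    return estimated_cleanup_cost / service_life_years

def fund_balance(estimated_cleanup_cost: float,
                 service_life_years: int,
                 years_elapsed: int) -> float:
    """Cleanup-fund balance after a given number of annual payments
    (payments stop once the service life is reached)."""
    payment = annual_accrual(estimated_cleanup_cost, service_life_years)
    return payment * min(years_elapsed, service_life_years)

def upward_reestimate(old_estimate: float, new_estimate: float) -> float:
    """Additional budget authority needed when the cleanup cost is
    reestimated upward (analogous to credit program reestimates)."""
    return max(new_estimate - old_estimate, 0.0)

# A hypothetical $30 million cleanup liability accrued over a 20-year life:
payment = annual_accrual(30_000_000, 20)    # $1.5 million per year
halfway = fund_balance(30_000_000, 20, 10)  # $15 million after 10 years
extra = upward_reestimate(30_000_000, 36_000_000)  # $6 million more needed
```

A real mechanism would have to address discounting, the timing of appropriations, and the apportionment controls discussed in the text; the sketch shows only why the full liability would be financed by retirement regardless of when cleanup actually occurs.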
It is important to note that this approach would not in fact change the costs of future cleanups—in effect these have already been determined by the decision to acquire the asset. Rather, it would only shift the timing of their recognition. Ensuring that agencies adequately track and manage the earmarked budget authority would be a second challenge to successful implementation of this approach. For example, there is more than one way to manage the budget authority needed to clean up assets already in operation at the inception of the new approach. One way would be to transfer budget authority from a liquidating account to a cleanup fund for such assets when they are ready to be cleaned up. Alternatively, the full amount of budget authority for the past share of the cleanup cost could be enacted in one lump sum for the cleanup fund. This would simplify implementation since it would apply the new accrual concept fully to all assets in operation. Since this could be considered a concept change, any discretionary caps on budget authority (if renewed) would be adjusted upward to accommodate the additional budget authority—but it would still increase reported budget authority totals. Some believe that covering all of the costs immediately would be a cleaner, more consistent application of full costing since it would eliminate a lengthy and possibly confusing transition period. However, such a decision to provide budget authority for retired assets could shift control over the timing of the cleanup from Congress to the Administration. Finally, a way to budget for inevitable reestimates of cleanup costs would have to be designed. If agencies must obtain additional budget authority for these reestimates, they will have less incentive to make artificially low initial estimates but may be reluctant to provide upward reestimates. On the other hand, one could envision agencies forwarding a low estimate "today" with the idea that they could worry about "tomorrow" later. 
Alternatively, reestimates could be handled as they are with credit programs; that is, agencies could automatically receive permanent, indefinite budget authority for upward reestimates of cleanup costs. This would hold agencies harmless for additional costs that result from technological or regulatory changes. It would also, however, provide an incentive to make artificially low initial estimates. Because the federal budget does not recognize the full costs of a program that will have cleanup costs when decisions to commit to the program are being made, policymakers do not have sufficient information to compare the full costs of a particular program with their judgment of its benefits. Cleanup costs are in fact a liability associated with the ownership of many assets. Decision makers need to consider these costs before committing to acquire the waste-producing asset. Agencies generally do not yet have experience in estimating future cleanup/disposal costs up front, before the decision to purchase the waste-producing asset is made. Accordingly, all of the alternative approaches we discuss for providing this information present a challenge for both agencies and OMB in developing an estimation methodology. Increasing the visibility of cost estimates may increase the effort spent on them and ultimately both improve the quality of the estimates and enhance decision making. As a first step, we believe that OMB and agencies should provide supplemental information. This can be expected to improve focus and attention and permit improvements in estimating models. As this proceeds, further consideration should be given to budget process and budget accounting changes. Ultimately, accruing budget authority for the tail-end cleanup/disposal costs along with the front-end purchase costs of assets would best ensure that cleanup/disposal costs are considered before the government incurs the liability, but it raises significant implementation challenges. 
We recommend that the Director of OMB require supplemental reporting in the budget to disclose future environmental cleanup/disposal costs for new acquisitions. To this end, agency and OMB officials should consult with legislative branch officials to ensure that useful information on estimated environmental cleanup/disposal costs is provided to congressional decision makers when requesting appropriations to acquire waste-producing assets. The Secretary of Defense had no comments on our draft report. We did not receive comments from the Secretary of Energy in time for them to be considered and included in this report. OMB staff commended GAO for its useful analysis and noted that the ideas discussed merit consideration. OMB staff also provided technical clarifications, which we incorporated as appropriate. As agreed with your office, unless you release this report earlier, we will not distribute it until 30 days from the date of this letter. At that time we will send copies to the Ranking Minority Member of the House Committee on the Budget and the chairmen and ranking minority members of the Senate Committee on the Budget; the subcommittees on Defense and on Energy and Water Development, Senate Committee on Appropriations; and the subcommittees on Defense and on Energy and Water Development, House Committee on Appropriations. We are also sending copies to the Director, Office of Management and Budget. In addition, we are sending copies to the Secretaries of Defense and Energy. Copies will also be made available to others upon request. In addition, the report is available at no charge on GAO's Web site at http://www.gao.gov. This report was prepared under the direction of Christine Bonham, Assistant Director, Strategic Issues, who may be reached at (202) 512-9576. Other major contributors were Carol Henn and Brady Goldsmith. Please contact me at (202) 512-9142 if you or your staff have any questions concerning the report. 
About a dozen federal agencies report environmental liabilities in their financial statements. This appendix provides additional detail on the environmental liabilities reported by the Department of Energy (DOE) and the Department of Defense (DOD) and on those reported by the other federal agencies. These data were extracted from agencies' fiscal year 2001 consolidated balance sheets and represent existing assets—not proposed acquisitions. Because auditors disclaimed an opinion on the financial statements of DOD and the National Aeronautics and Space Administration, it is not certain that the reported amounts fairly present those agencies' liabilities.
Although environmental liabilities resulting from federal programs and activities represent the third largest category of the federal government's liabilities, the current cash- and obligation-based budget does not provide information on estimated cleanup costs before waste-producing assets are purchased. As a result, policymakers do not have the opportunity to weigh the full costs of a proposal against their judgment of its benefits. The Chairman of the House Committee on the Budget asked GAO to examine and report on various ways budgeting might be improved for environmental cleanup costs, including some of the benefits, limitations, and challenges associated with each. The federal government is legally required to clean up hazardous wastes that result from its operations. Agencies are currently required to report these environmental liabilities in their financial statements, but these estimates are not recognized until after a waste-producing asset is placed into service. Although agencies are supposed to consider cleanup and disposal costs associated with these assets as part of the acquisition process, they typically do not request the related budget authority until many years after the government has committed to the operation creating the waste, when cleanup is imminent. Alternative approaches to promote up-front consideration of the full costs of environmental cleanup and disposal for assets being proposed for purchase fall along a continuum ranging from supplemental information to enactment of additional budget authority. While each approach has potential benefits and challenges, agencies' lack of experience in estimating future cleanup/disposal costs up front suggests starting at the more modest end of the continuum—providing supplemental information to decision makers. 
Eventually, however, accruing budget authority for the tail-end cleanup/disposal cost along with the front-end purchase cost estimates would do the most to ensure that these costs are considered before the government incurs the liability.
Contamination from the Hanford site that may threaten the Columbia River includes (1) contamination that resulted from disposal activities during the era in which DOE produced nuclear material; (2) contamination that could occur during cleanup activities, such as from an accidental spill; and (3) possible future migration of contamination from waste that will be permanently disposed of on the Hanford site in accordance with the cleanup actions DOE and the regulators plan to use. Contamination from production era. Contamination at Hanford resulting from plutonium production (which occurred from 1943 to 1989) that is currently migrating to the river is primarily from (1) intentional disposal of liquid waste and contaminated water into the ground (about 450 billion gallons); (2) leaks into the soil from waste tanks and the pipelines that connect them (between 500,000 and 1 million gallons containing about 1,000,000 curies of radioactivity); and (3) contamination that has begun to migrate from solid waste (more than 710,000 cubic meters) disposed of on-site in burial grounds, pits, and other facilities. Chemical and radioactive contamination currently affects more than 180 of the 586 square miles of the site's groundwater and large areas of the vadose zone. While there are numerous contaminants now in the vadose zone and the groundwater below, DOE believes the key contaminants in the groundwater include hazardous chemicals (such as carbon tetrachloride, chromium, nitrate, and trichloroethane) and radioactive materials (such as iodine-129, strontium-90, technetium-99, tritium, and uranium). These contaminants are of concern because of their extent, their mobility in the groundwater, and the potential health risks associated with them—at sufficient levels, some of these contaminants are toxic to humans or fish, while others are potential carcinogens. Potential contamination from current activities. 
Current cleanup efforts at the Hanford site could contribute to contamination of the vadose zone and groundwater that eventually reaches the river. For example, some of the waste put into underground storage tanks as liquid has since turned into sludge or saltcake. To dissolve it, more water will have to be introduced into the tanks—including tanks known to have leaked. This process may cause additional discharges into the soil. Possible future contamination. Under DOE’s cleanup plans, and with regulator approval, a large amount of contaminants will remain on-site long into the future. This contamination may be in buildings, in mostly empty underground tanks, in covered burial grounds and waste disposal areas, and in approved disposal facilities. Contaminants may leach out of these facilities in the future and join existing contamination in the vadose zone and migrate to the groundwater, where they could migrate to the river. Based on groundwater sampling results, DOE reports that plumes of contamination continue to move through the vadose zone and the groundwater, and are leaching into the river. DOE estimates that about 80 square miles of groundwater under the site contains contaminants at, or above, federal drinking water standards. Because the groundwater and the river are at the same relative elevation, these plumes are leaching directly into about 10 of the nearly 50 miles of river shore on the site. DOE’s Office of Groundwater and Soil Remediation under the Assistant Secretary for Environmental Management sets overall policy and oversight for groundwater and soil remediation. At the Hanford site, both the Richland Operations Office and the Office of River Protection, as well as several contractors, are involved in groundwater and vadose zone activities. The monitoring of river and shoreline conditions, and groundwater sampling, is managed by the Pacific Northwest National Laboratory (PNNL). 
Analysis of the samples is performed by several approved laboratories. Funding for groundwater and vadose zone activities at the site is difficult to identify due to the large number of organizations and activities involved and the structure of DOE’s budget accounts. However, monitoring, characterization, well drilling and maintenance, remediation, and research activities received nearly $175 million in fiscal year 2006. DOE is taking steps to better understand the risk to the Columbia River from Hanford site contamination and to replace ineffective cleanup technologies. Specifically, DOE is addressing problems with three main aspects of its Columbia River protection efforts. First, DOE and its regulators have agreed that additional investigation of contamination in the vadose zone is needed, although doing so could delay by about 3 years the date by which DOE will propose its cleanup plans to the regulators. Second, DOE is reworking its approach to modeling the future effects of contamination on river conditions. DOE abandoned past modeling efforts in response to criticism that the models used inconsistent assumptions, were based on data of questionable reliability, and had weak quality control processes. Third, in response to concerns about the effectiveness of some of the technologies DOE had deployed to remove or contain contamination near the river, and with specific direction from Congress, DOE is evaluating alternative technologies that may be more effective at addressing the contamination. While DOE has extensive knowledge of the contaminants in the river and groundwater, and the movement of contaminants in the groundwater and on or near the surface, DOE has only recently developed limited information about the extent and location of the contamination that has migrated from the surface areas into the vadose zone above the groundwater. 
Understanding the nature of vadose zone contamination is critical to determining the most appropriate steps to take to protect the river now, and in future years, because contaminants still in the soil may continue to migrate until they eventually reach the groundwater and the river. DOE has studied some portions of the vadose zone, such as around the underground storage tanks, where extensive contamination from leaks and spills occurred in the past. In doing so, DOE found that some contamination, including technetium-99, had migrated as far as the groundwater. DOE contractors were able to map the migration of some of these contaminants. However, DOE acknowledges that its understanding of contaminants in the vadose zone is limited in many areas of the site. For example, cribs and trenches near the underground tanks received large volumes of contaminated wastes that dispersed directly to the ground. DOE has little information on the extent and location of the contamination in those areas, according to DOE officials responsible for planning their cleanup. They also said that characterization of the lower portions of the vadose zone is difficult and expensive, and few remediation techniques have been developed or tested for removing or isolating wastes that are located deep in the vadose zone. 
After finding unexpected contaminant migration in the vadose zone at one waste disposal area known as BC cribs—a location where liquids were discharged directly into the ground—DOE agreed with its regulators that its understanding of the vadose zone was inadequate to support the development of a final cleanup remedy for that area and some others. Although DOE had originally planned to defer some of its study of the vadose zone until after December 2008, when draft cleanup plans were due, DOE now agrees that more sampling and analysis of the vadose zone is needed to guide cleanup decisions. As a result, DOE has proposed to regulators to extend the date for submitting draft cleanup plans until 2011. DOE officials said this will allow the time needed to develop a better understanding of vadose zone conditions and to investigate potential remedies. In response to the discovery that its previous models to estimate the future risks of the movement of contamination toward the river were based on data of questionable reliability, DOE has begun reworking these efforts. While DOE relies on sampling to determine current conditions, it uses computer simulation models to predict future conditions and estimate future risks. In 1998, DOE groundwater program officials said DOE concluded from its simulation models that the migration was slow enough that the contaminants included in the study would not exceed their limits for 1,000 years into the future. However, DOE was concerned about the completeness of the model and began an effort, known as the System Assessment Capability, to develop a more comprehensive model. This $16 million, 8-year effort was cancelled when, in the course of a lawsuit over Hanford’s disposal plans, several quality assurance problems were found, including discrepancies in the data. 
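The core calculation underlying such groundwater simulation models can be illustrated, in greatly simplified one-dimensional form, as follows. The distance, velocity, and retardation factor below are hypothetical examples, not Hanford data, and the actual models (such as the System Assessment Capability) are far more complex:

```python
# Greatly simplified, one-dimensional illustration of the kind of
# travel-time estimate a groundwater transport model produces.
# All parameter values below are hypothetical.

def travel_time_years(distance_m: float,
                      groundwater_velocity_m_per_yr: float,
                      retardation_factor: float) -> float:
    """Years for a contaminant to travel a given distance under uniform
    one-dimensional advection. Sorption to soil slows the contaminant
    relative to the water by the retardation factor (R = 1 means the
    contaminant moves with the groundwater)."""
    return distance_m * retardation_factor / groundwater_velocity_m_per_yr

# Hypothetical plume 5,000 meters from the river, groundwater moving
# 25 meters per year, contaminant retarded by a factor of 6:
years = travel_time_years(5_000, 25, 6)  # 1,200 years
```

Real assessments must also account for dispersion, heterogeneous soils, changing water tables, and chemical transformation of contaminants, which is why inconsistent assumptions and unreliable input data can undermine a model's conclusions.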
DOE abandoned the past modeling efforts in response to criticisms that the models used inconsistent assumptions, were based on data of questionable reliability, and had weak quality control processes. In January 2006, DOE and Washington State settled the lawsuit. In the settlement agreement, DOE agreed to re-analyze and update its study of the cleanup’s effect on groundwater. In addition, DOE agreed to consolidate two studies of the cleanup’s effects on groundwater into a single, integrated study. Both DOE and its regulators have determined that the results of all three of DOE’s approaches to treating groundwater—pump-and-treat, chemical treatment, and natural attenuation—are not fully satisfactory. Specifically: Pump-and-treat. In a 2004 report, the DOE Inspector General concluded that the pump-and-treat system to remove strontium-90 was ineffective and that the other four pump-and-treat systems have had mixed results. However, Hanford’s acting groundwater project manager told us that four of the five pump-and-treat systems at the Hanford site meet the remedial objectives agreed to with Hanford’s regulators. The official acknowledged that the system to remove strontium-90 was largely ineffective and that DOE had been trying to obtain permission from the regulators to turn it off. Both DOE and the regulators told us that the regulators refused to allow the system to be turned off, however, until a more effective remedy was found. In March 2006, after spending about $16 million since 1996 to install and operate the system, DOE turned the system off with the regulators’ permission, and began testing a chemical barrier to prevent the strontium-90 from entering the river. Chemical treatment. In 2004, DOE reported that, based on groundwater samples, the chemical barrier for chromium was not fully effective, and that the hazardous form of chromium was detected beyond the barrier and close to the river. 
DOE is currently testing alternative approaches to improve the barrier. Natural attenuation. According to monitoring well data, DOE's reliance on natural attenuation to dissipate a uranium plume near the city of Richland has been ineffective and has not controlled the migration of uranium to the river. The plume has not dissipated in the 10-year period since the natural attenuation strategy was adopted. DOE is currently investigating the plume, testing chemical barriers, and exploring other ways to mitigate the problem. In the conference report accompanying the fiscal year 2006 Energy and Water Development Appropriations Act, the conferees directed DOE to make $10 million available to analyze and identify new technologies to address contaminant migration to the Columbia River. DOE convened a study group to identify potential technologies and determine how best to allocate the funds to support them. According to DOE's groundwater project manager, if the technologies tested are successful, DOE will seek funds to expand the systems to fully address these problems. DOE is testing the following: To address problems with pump-and-treat systems, DOE is testing new approaches to containing strontium-90 and chromium. To contain the strontium, DOE is testing two techniques: (1) using a chemical to bind the strontium to the soil until it decays, which would prevent it from leaching into the river; and (2) planting willow bushes near shore to capture the strontium in the plants, which can be harvested to dispose of the strontium. For chromium removal, DOE has adopted a "systems approach" that combines source removal, pump-and-treat system expansion, and barrier repairs, according to DOE's groundwater project manager. DOE is also planning to test an improvement to the pump-and-treat system. The test system will use an electric field to remove the chromium from the groundwater extracted by several of the existing wells. 
If it succeeds, DOE's project manager said, they will expand the pump-and-treat system to include this technology. To address problems with the chromium barrier near the river, DOE plans to inject chemicals through the wells used to create the barrier to help convert the chromium to a less toxic and less mobile form. To address problems with using natural attenuation to dissipate the uranium plume near the city of Richland, DOE is testing whether injecting a chemical called polyphosphate can help prevent the uranium from migrating to the river. In addition to these activities, DOE plans to research methods to better understand the existing carbon tetrachloride plume in the center of the site. DOE has begun to address management problems with its Columbia River protection efforts at the Hanford site by proposing management improvements to better oversee and coordinate its groundwater and vadose zone activities. Although those steps are important and needed, we are concerned about DOE's ability to sustain any improvements made, because similar efforts in the past failed. In our previous work, we reported that leading organizations use a systematic, results-oriented plan to sustain management improvement initiatives. Such a plan incorporates key elements, such as clear goals, performance measures to gauge progress toward those goals, and an evaluation strategy to help ensure the initiative is effective. Although DOE is beginning to develop a plan for its new integration initiative, it has yet to implement key elements, such as performance measures or an evaluation strategy. These tools could help measure effectiveness and sustain the benefits of the initiative over time. DOE is beginning to address longstanding concerns about the management and oversight of its Columbia River protection efforts at the Hanford site. In November 2005, we reported that DOE's river protection efforts continued to be fragmented among two DOE site operations offices and several site contractors. 
We raised concerns that the potential existed for duplication, gaps, and inefficiencies. Subsequently, in the November 2005 conference report accompanying the Fiscal Year 2006 Energy and Water Development Appropriations Act, the conference committee cited these continuing management and organization problems and directed DOE to study how to better integrate its river protection efforts. In response to the congressional direction, in March 2006, DOE’s Assistant Secretary for Environmental Management developed a new plan to better integrate Hanford’s river protection, vadose zone, and groundwater efforts. Specifically, DOE’s new integration initiative would: Consolidate most groundwater and vadose zone characterization and cleanup activities under a single project. At the time of the congressional direction, two DOE offices and three main contractors on- site were collectively responsible for characterizing and cleaning up vadose zone and groundwater contamination. The Office of River Protection and its contractor, CH2M Hill Hanford Group, were responsible for characterizing and addressing contamination of the vadose zone in tank farms—areas where tanks containing radioactive liquid waste are buried. The Richland Operations Office and its contractors, Fluor Hanford and Washington Closure Hanford, were responsible for vadose zone characterization in the central plateau area of the site and along the river corridor, respectively. In addition, Fluor Hanford was responsible for groundwater activities in all areas of the site. Within Fluor Hanford, responsibility for cleanup of the groundwater and vadose zone was divided between two different projects with the project handling vadose zone issues also responsible for addressing removal of old buildings and burial grounds. 
To better coordinate vadose zone and groundwater characterization and cleanup activities, DOE's new integration initiative proposed consolidating most of this work under a single project managed and coordinated by Fluor Hanford. To do so, DOE planned to modify existing contracts with the affected contractors to reflect this reorganization. In June 2006, the Office of River Protection and the Richland Operations Office issued a Plan of Action for Hanford Groundwater and Vadose Zone Integration Improvements. It identified the general activities and areas of responsibility assigned to the Fluor Hanford and CH2M Hill Hanford Group contractors under the new initiative. As of the end of July 2006, DOE was negotiating the details of this reorganization of responsibilities with the contractors and anticipated having the contracts modified to reflect the changes by October 1, 2006. Better integrate vadose zone, groundwater, and waste disposal site cleanup decisions. DOE acknowledged that decisions about when and how to address vadose zone and groundwater contamination were not always well coordinated, and they generally were not coordinated with decisions about when and how to address the source contamination in a waste disposal site located above the vadose zone and groundwater. For example, initial plans for cleanup decisions about the surface areas in the Central Plateau were not necessarily linked to the plans for the underlying groundwater units, according to DOE's groundwater project manager. To better integrate vadose zone, groundwater, and waste disposal site cleanup decisions, DOE proposed to implement a new strategy by the end of fiscal year 2006 and to work with regulators to better align regulatory milestone dates for making cleanup decisions about waste sites, the vadose zone, and the groundwater. DOE's new strategy includes plans to transfer most vadose zone characterization activities into the groundwater program. 
Consolidate responsibility for modeling the movement of contaminants through the vadose zone and groundwater to estimate the potential current and future health risks. DOE has acknowledged that inconsistencies and reliability problems existed in the modeling of how contaminants move through the vadose zone and groundwater, and in how the environmental risks associated with those contaminants were estimated. A DOE team reviewing the data quality issues and the modeling effort found that, in addition to issues of the reliability of data used in the models, various modeling efforts under way were based on different assumptions, and information about contamination movement was not always correctly transferred to other models. To address these problems, DOE proposed to more closely coordinate modeling and risk assessment activities at the site and strengthen control over model design so that a common set of databases and assumptions is used for decision making. The groundwater project would have configuration control over any models used, so that any changes to databases and model assumptions would require approval by the groundwater project before users could implement them. In addition to these management improvement efforts at the Hanford site, in May 2006, DOE also established a new Office of Groundwater and Soil Remediation to improve headquarters' oversight of issues dealing with soil and groundwater contamination across the DOE complex. The office is tasked with reviewing all soil and groundwater remedies at DOE sites, helping to develop technologies to solve groundwater and soil contamination problems at different DOE sites, and generally overseeing DOE policy and assessments regarding vadose zone and groundwater cleanup. 
Given past problems fully implementing and sustaining improvements to the management of DOE’s Columbia River protection efforts at the Hanford site, it is uncertain whether any improvements that result from DOE’s new integration initiative will be sustained. In 1998, we reported that DOE lacked a comprehensive and integrated groundwater and vadose zone program, and recommended that DOE implement an integrated strategy that defined measurable performance goals, clearly defined leadership roles, and established accountability for meeting those goals. In response to our 1998 report, DOE proposed an integrated management plan to coordinate groundwater and vadose zone work. To accomplish this, DOE assigned a single DOE Assistant Manager in the Richland Operations Office to coordinate all groundwater and vadose zone work at the Hanford site. Because DOE’s other site office, the Office of River Protection, and several contractors at the site also carried out groundwater and vadose zone cleanup, DOE made the Assistant Manager responsible for ensuring that all groundwater and vadose zone activities were integrated into a single planning effort. This “Integration Project” included developing a sitewide approach to project planning, funding, and information management, and co-locating contractor staff working on the project to improve coordination. In addition, the project included improving coordination of efforts to develop science and technology to address contamination in the vadose zone and groundwater. Despite these proposed changes, DOE was unable to effectively implement the improvements it planned to make. For example, according to a site official at Hanford who oversaw the initial integration effort, DOE did not implement key elements of the plan, such as establishing a sitewide funding profile for all groundwater and vadose zone activities. 
DOE implemented other elements of the plan but did not sustain them when changes occurred at the site, such as in how projects were organized and contracts were structured. For example, coordinating all activities through a single federal project manager faltered as site offices were reorganized and responsibilities were distributed among three federal project directors. The DOE official from the Hanford groundwater program attributed the lack of coordination of groundwater and vadose zone efforts to two changes: the redefinition of project activities, which resulted in groundwater and vadose zone activities being managed as separate projects, and the restructuring of site contracts, which resulted in scopes of work being organized and assigned differently. A 2001 National Academy of Sciences review of DOE's groundwater science and technology activities noted that DOE's integration efforts had been superimposed over several already existing cleanup projects without establishing a clear line of responsibility for results. The National Academy said that this left the program operating in an unstable environment. To increase the chances of success for DOE's current improvement initiative, we assessed DOE's management of its new integration initiative against model practices used by organizations that successfully sustained improvement initiatives. We previously reported that in high-performing organizations, management improvement initiatives are sustained by using a systematic, results-oriented plan that incorporates a rigorous measurement of progress. 
Such a plan typically included the following steps: (1) defining clear program goals for the initiative—important because it focuses an organization's efforts on achieving specific outcomes and allows an assessment of future performance against those goals; (2) developing an implementation strategy that sets milestones and establishes individual responsibilities—important because it establishes accountability for achieving the initiative's goals; (3) establishing results-oriented performance measures—important because it allows organizations to measure progress toward achieving their goals; and (4) using results-oriented data to evaluate the effectiveness of the initiative and make additional changes where warranted—important because periodic evaluations can reveal systemic problems and promote continuous program improvement over the long term. As of July 2006, DOE had implemented two of these management components but had not implemented the others, which would help ensure that it could sustain any improvements resulting from its new integration initiative. For example, in putting forward its plan to Congress, DOE described a general goal of its new integration initiative as better coordination of Hanford's groundwater and vadose zone cleanup activities in order to achieve greater protection of the Columbia River. DOE also outlined steps it would take toward its goal, such as (1) consolidating site modeling and risk assessments; (2) consolidating river protection efforts under a single project; and (3) integrating soil and groundwater cleanup decisions. Going forward, DOE could further refine its goals to include measurable steps toward achieving its overall goal of protecting the river. For example, a more measurable goal would be reducing the amount of contamination reaching the river or reducing duplication of efforts in order to better protect the Columbia River. 
DOE had established general milestones and individual responsibilities for implementing its new integration initiative. For example, DOE's plan of action sets 16 milestones, to be met by September 2006, for taking various initial steps. DOE also reported that five of these actions, including making staff assignments and establishing an integrated project team, had been completed. DOE has not established results-oriented measures to gauge the progress of its integrated management initiative. In outlining the steps it will take under its plan, DOE has generally concentrated on establishing relationships and moving work scope between various DOE offices and contractors, and not on outcomes, such as reducing redundancies or gaps in river protection efforts. Without clear results-oriented performance measures to gauge progress, problems that occur under a fragmented management structure could be masked and allowed to continue under DOE's integration plan. Translating the general goal of "better integration" and "protection of the river" into a more specific goal, such as reducing duplicative efforts, would help DOE identify ways it could measure results and, therefore, gauge progress toward the goals of its integration initiative. Finally, DOE has not yet identified an evaluation strategy to determine whether the steps it is taking are effective and are being sustained. Without an evaluation strategy based on clear goals and results-oriented measures, DOE will not have the results-oriented data necessary to objectively evaluate progress and implement corrective actions as needed. Although DOE is still working to define and implement its integration initiative, fully developing and putting in place the key elements outlined above could help ensure that any program improvements are sustained in the future. 
DOE’s Hanford Assistant Manager in charge of overseeing the latest management improvements for the river protection program said that, beyond outlining broad goals and setting the framework for roles and responsibilities, DOE had not yet fully developed a project execution plan for the new initiative. He said that the management plan is still evolving and that future steps may include more clearly defining performance measures and strategies for evaluating the initiative’s effectiveness. DOE is involved in a lengthy process to identify and address potential threats to the Columbia River from contamination in the soil and groundwater at the Hanford site. This requires a good understanding of the risks to the river and an effective management strategy for addressing those risks. Over the years, we and others have raised concerns about DOE’s efforts to understand the nature and extent of the contamination and how best to manage the efforts to prevent contamination from seeping into the river. In recent months, DOE has taken several steps to gain a better understanding of the risks from the contamination as well as to improve its management of the program and integration of activities. While these steps are encouraging, DOE has not yet decided whether to put in place elements of a management plan that could help ensure potential benefits of these improvements will be continued, even when organizational and contract changes occur at the site. Such a management plan should include developing results-oriented performance measures, using the measures to determine progress toward objectives, and making changes as necessary. 
To increase the likelihood that DOE will effectively implement and sustain improvements in its program to protect the Columbia River from contamination at the Hanford site, we recommend that the Secretary of Energy strengthen the management improvement plan by establishing results-oriented performance measures and regular evaluations to gauge the program’s effectiveness. We provided a draft of this report to DOE for its review and comment. In a letter from DOE’s Principal Deputy Assistant Secretary for Environmental Management, DOE agreed with the report’s findings and fully endorsed the recommendation to adopt results-oriented performance measures and regular evaluations of the river protection program. DOE acknowledged that performance measures and regular evaluations are a fundamental and integral component of sound project management practice and said that it would incorporate them into the project. The full text of DOE’s comments is presented in appendix II. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this report. At that time, we will send copies of this report to other interested congressional committees and to the Secretary of Energy. Copies will be made available to others on request. In addition, this report will be available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions on this report, please contact me at (202) 512-3841 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff contributing to this report are listed in appendix III. 
To understand the risk to the Columbia River from Hanford site contamination, we reviewed risk assessments, groundwater, vadose zone, and river monitoring reports by the Department of Energy (DOE), DOE’s Office of Inspector General, DOE contractors including the Pacific Northwest National Laboratory, and various outside groups such as the National Academy of Sciences. We interviewed DOE officials at both headquarters and the Hanford site, as well as contractor staff at Hanford, to obtain information on the distribution of contamination at Hanford and the steps being taken to better understand it. To understand DOE’s approach to the vadose zone, we primarily reviewed our 1998 report, as well as documents prepared by DOE and its staff in response to that report. We also reviewed documents DOE submitted to regulators related to changing Tri-Party Agreement milestones; the documents were to be used for preparing initial drafts of plans for all remaining contaminated areas. We discussed the proposed change to the December 2008 Tri-Party Agreement milestone with DOE officials and regulators. In reviewing DOE’s efforts to determine the extent of risk of future damage to the river from contamination, we reviewed documents related to DOE’s sitewide modeling effort and legal documents related to this modeling effort. We discussed these modeling efforts with DOE officials, contractors, and regulators. In assessing DOE’s efforts to deploy effective technologies to address contamination near the river, we visited the sites of existing and planned cleanup efforts. We discussed current existing projects with DOE officials, contractor staff, regulators, and stakeholders, and reviewed reports prepared for DOE and others. To assess technology plans developed by DOE to use $10 million of funds earmarked for fiscal year 2006, we attended DOE screening panels, reviewed reports prepared by DOE and others, and discussed the efforts with DOE regulators. 
To review DOE efforts to strengthen the management of its river protection efforts, we reviewed DOE’s past and current management plans. We obtained DOE’s recent integration initiative proposals, including its proposal to Congress in March 2006 and its subsequent Memorandum of Agreement and Plan of Action. We discussed DOE’s approach with headquarters and site officials. We reviewed previous work in which we documented strategies used by high-performing organizations to implement improvement initiatives. We reviewed DOE’s proposed integration initiative and compared it to key elements of these strategies. We also discussed DOE’s plans to implement its strategy with knowledgeable site officials. In reviewing the management of DOE programs related to groundwater and river protection, we reviewed DOE efforts to assure that contamination levels were accurately reported; we also interviewed regulators, DOE officials, and contractors regarding data reliability. While we did not independently test the contaminant data, we reviewed controls over how the data were obtained and tested, visited sampling locations and discussed sampling methods with key staff, and reviewed other relevant information to determine that the data were sufficiently reliable for the purposes of our report. We conducted our work from December 2005 to August 2006 in accordance with generally accepted government auditing standards. In addition to the contact named above, Bill Swick, Assistant Director; Chris Abraham; Doreen Feldman; Nancy Kintner-Meyer; Jeffrey Larson; Omari Norman; Alison O’Neill; Thomas Perry; and Stan Stenersen made significant contributions to this report. Others who made important contributions included Mark Braza, Doreen Eng, and Mehrzad Nadji.
The Department of Energy's (DOE) Hanford site in Washington State is one of the most contaminated nuclear waste sites in North America. The Columbia River flows through about 50 miles of the site. Radioactive and hazardous contamination from decades of producing nuclear materials for the nation's defense have migrated through the soil into the groundwater, which generally flows toward the river. In November 2005, GAO reported on the potential for the Hanford site to contaminate the Columbia River. To address continuing concerns, GAO reviewed the status of DOE's efforts to (1) understand the risk to the Columbia River from Hanford site contamination and to deploy effective technologies to address contamination near the river and (2) strengthen the management of its river protection program. To assess DOE's efforts, GAO reviewed numerous reports by DOE and others, and discussed the problem with federal and state regulators and DOE officials. DOE is actively assessing the risk to the Columbia River from Hanford site contamination and is addressing problems with deployed river protection technologies. While DOE has extensive knowledge of contaminants that are currently in the groundwater and river, DOE knows less about contamination in the soil below the surface, known as the "vadose zone." Before proposing a cleanup approach, DOE has agreed with its regulators to take vadose zone samples in many of the contaminated areas of the site. DOE is also improving its computer simulation model that will predict future risk from the contamination, and deploying alternative technologies it believes will more effectively contain the contamination that may threaten the river. DOE has also begun to address concerns about its management of Columbia River protection efforts, particularly the lack of integration between groundwater and vadose zone activities. 
In March 2006, in response to congressional committee direction, DOE proposed a new initiative to better integrate its river protection activities. The initiative included consolidating most groundwater and vadose zone characterization work under a single project; better integrating vadose zone, groundwater, and surface cleanup decisions; and improving the coordination and control over computer models used to predict movement of contamination in future years. Initiating these management improvements is important, but it is equally important that they be implemented effectively, and past history gives some cause for concern. For example, one attempt by DOE to better integrate these activities was unsuccessful when key elements, such as putting all activities under a single project manager, failed to continue after project and other changes occurred at the site. In past GAO work, we reported that high-performing organizations sustained improvement initiatives when key elements were in place, such as clear goals, results-oriented performance measures, and evaluation strategies. Although DOE is beginning to develop a management plan for its new initiative, DOE has yet to implement some key elements, such as results-oriented performance measures and evaluations to gauge the effectiveness of its improvements, which could also help sustain the benefits of the improvements over time.
State and local laws and requirements continue to guide districts and schools when planning for and managing emergencies. The federal government’s role in school emergency management has been to support state and local activities, by providing guidance, training, equipment, and funding to help districts and schools respond to emergencies effectively. DHS is responsible for most federal emergency management programs, including some that allow funds to be used for school emergency preparedness. In fiscal year 2015, DHS awarded $989 million to states, urban areas, and territories to prepare for and respond to terrorist attacks and other disasters. Since our 2007 report, the federal government has taken additional steps to help districts and schools plan for and manage emergencies. In March 2011, the White House issued Presidential Policy Directive 8, aimed at strengthening the security and resilience of the United States through systematic preparation for the threats that pose the greatest risk to the country’s security. It also directed DHS to develop, in coordination with other federal agencies, a national preparedness goal that identifies the core capabilities necessary for preparedness and a national preparedness system to guide activities to achieve that goal. In response to the directive, DHS released in September 2011 the National Preparedness Goal that identified capabilities to prevent, protect against, mitigate, respond to, and recover from threats and hazards. The Goal defined success around these five mission areas, which occur before, during, or after an incident (see fig. 1). The Goal recognized that preparedness is a shared responsibility of the whole community, which FEMA, as a component of DHS, notes includes schools, among others. As stated in the Goal, threats and hazards may include acts of terrorism, cyberattacks, pandemics, and catastrophic natural disasters. 
In January 2013, following the shootings at Sandy Hook Elementary School, the White House developed a plan, Now is the Time, to protect children and communities from gun violence, which called, in part, for all schools to have comprehensive emergency operations plans (see fig. 2). This plan also directed Education, DHS, HHS, and Justice to release a set of model, high-quality emergency operations plans for schools by May 2013. Education, DHS, HHS, and Justice each provide assistance to school districts and schools with preparing for emergencies. In response to the President's call in his 2013 plan to reduce gun violence for, among other things, model, high-quality emergency operations plans for schools, the federal agencies jointly developed a Guide for Developing High-Quality School Emergency Operations Plans (Federal Guide)—the primary federal resource designed to help schools develop, implement, and revise their emergency operations plans. The Federal Guide, which identifies key planning principles for developing school emergency operations plans, states that it is considered informal guidance and that schools and school districts are not required to adopt it. These principles include considering all threats and hazards and all settings and times, and call for following a collaborative process when creating and revising a plan. The Federal Guide suggests that schools use a six-step planning process, similar to the process established by FEMA for state and local emergency management planning, to develop, maintain, and revise their emergency operations plans. According to Education officials, this represents a shift in guidance from an emphasis on plan content to an emphasis on the planning process (see fig. 3). Education, DHS, HHS, and Justice also separately develop and provide resources such as guidance, training, technical assistance, and funding, in line with their respective missions, to help districts and schools prepare for emergencies. 
These include resources that directly support emergency operations plan development as well as those that more generally can be used to enhance districts' and schools' ability to prevent, protect against, mitigate, respond to, or recover from threats and hazards (see table 1). Despite the availability of resources from Education, DHS, HHS, and Justice for school emergency preparedness, our nationally representative survey of school districts found that an estimated 69 percent of districts did not rely on non-financial resources from any of these agencies to develop or implement their plans in recent years. Further, as shown in figure 4, we estimate that about one-third of districts or fewer relied on such resources from the agencies individually. Our survey of school districts and visits to districts and schools suggest that limited awareness of federal non-financial resources and reliance on local resources may be factors in districts' limited use of such federal resources. For example, based on our survey, an estimated 37 percent of districts are aware of the Federal Guide or related resources from the REMS TA Center. Similarly, officials in only 2 of the 12 schools we visited were familiar with the Federal Guide, though it is targeted to schools. In one school we visited, an official who was familiar with the Federal Guide told us that it was not user-friendly given its length (67 pages). In addition, officials in two districts said they rely on their state for guidance, rather than on the federal government directly, because, for example, some federal standards and guidance may not be as tailored to them. According to our survey of state educational agencies, 35 states, representing a majority of school districts, provide the Federal Guide to their districts to assist in developing or implementing emergency operations plans. Given that so many states provide the Federal Guide, it is unclear why awareness of it among districts nationwide remains limited. 
Education, DHS, HHS, and Justice collaborate on a number of individual agency initiatives to support district and school efforts to prepare for emergencies. According to federal officials, following the significant interagency collaboration required to produce the Federal Guide in 2013, as facilitated by the Office of the Vice President and National Security Council, Education has continued to collaborate with the agencies to develop resources to facilitate use of the Federal Guide, including by leading development of a related guide for school districts. Education and the agencies also developed resources that, though not necessarily explicitly prepared as an emergency preparedness resource, can be used to assist schools with other aspects of emergency preparedness, including prevention. For example, Education and Justice jointly developed a school discipline resource package designed, in part, to assist schools in creating safer environments, which can be an important step for prevention. Federal agency officials also told us they collaborate through various groups that address, to varying degrees, certain needs of schools and school children. For example, several agencies participate in the Comprehensive School Safety Initiative interagency working group, and a federal interagency policy sub-committee on active shooters, though this entity is not specific to schools and does not address the full range of threats and hazards schools face. Some agencies are also involved with the National Advisory Council on Children and Disasters. However, none of these entities primarily focus on the needs of schools for emergency management planning, which, given the presence of young children, can differ significantly from those of other institutions. Federal officials told us the partnerships that resulted from the Federal Guide have been valuable: one official said federal interagency collaboration is the best she has experienced in her 18 years with her agency. 
However, we identified gaps in recent federal agency coordination that suggest these efforts are insufficient to fully address the needs of schools. Insufficient coordination may compromise the ability of federal agencies to effectively support district and school emergency preparedness efforts, and risks hindering planning intended to help protect students and staff in emergencies. We found: Not all relevant federal agencies are included in collaboration efforts. Transportation Security Administration (TSA) officials said the agency is not involved in federal interagency collaboration on school emergency management planning, including with the REMS TA Center, despite TSA having developed multiple resources on school transportation security and knowing, through its Baseline Assessment for Security Enhancement program, that security for school bus transportation is often left out of district planning. Additionally, TSA officials told us they were also not involved in developing the Federal Guide because they were unaware of the effort—even though DHS, of which TSA is a part, was one of the agencies involved in developing the Federal Guide. Due to their lack of involvement in federal interagency efforts for school emergency management planning, other federal agencies may be unaware of TSA resources and unable to share TSA information with local stakeholders. Recognizing the importance of addressing transportation in school emergency management planning, the Federal Guide makes multiple references to it. Moreover, the REMS TA Center has elaborated on the importance of addressing transportation in planning, stating in informal guidance that effective emergency operations plans must include procedures for students and staff to follow during non-instructional times, including time when students are on a school bus. 
Leading practices on interagency collaboration state that it is important to ensure that all relevant participants have been included in the collaborative effort. Relevant agency officials are not always aware of each other’s efforts and resources, including within their own agency. Education and FEMA officials said they collaborate on various school emergency management planning initiatives. For example, Education officials said that FEMA was among its federal partners involved in developing a REMS TA Center tool designed to help districts and schools create and customize emergency operations plans, and the head of FEMA’s Emergency Management Institute (EMI) told us that staff provide such tools to their training participants. However, EMI officials we spoke with who are responsible for training courses for district and school staff on developing emergency operations plans said they were unfamiliar with these tools, raising concerns about communication and coordination within the agency. In another case, a FEMA official responsible for the Office of Counterterrorism and Security Preparedness, to whom we were referred by FEMA officials we interviewed about issues of coordination with DHS, said his office was not involved in the development of related guidance from DHS’ Office of Infrastructure Protection issued in April 2013 on developing a comprehensive K-12 school security program, which discusses how to develop an emergency operations plan. Further, the official indicated that he was unaware of the DHS guidance until its release. As a result, officials from DHS and FEMA were simultaneously involved in developing multiple resources for K-12 schools—including the Federal Guide, issued in June 2013—to prepare for emergencies without sufficiently coordinating these efforts. 
In reviewing the DHS guidance, we also found that it makes no specific mention or reference to FEMA's six-step planning process—the process on which the Federal Guide is based—though it includes a number of the same steps. Gaps in effective coordination and communication within and across agencies raise questions about the efficient use of resources and the extent to which these related resources may be overlapping, duplicative, or fragmented. Officials from Education and FEMA told us greater collaboration is needed and welcome, particularly in light of limited resources. Leading practices state that the challenges posed by continuing federal budget constraints call for agencies to work together more closely to leverage limited resources to achieve their missions. Agencies that collaborate offer different interpretations of the same federal guidance. Education and the FBI were partners, among others, in producing the Federal Guide; but, since its completion, these agencies have publicly offered different positions on the Federal Guide's Run, Hide, Fight model. This model describes—in order of preference—the steps adults should take when confronted by an active shooter (see sidebar). Specifically, an Education official said that in producing the Federal Guide the federal agencies agreed to exclude students from involvement in the option of fighting an active shooter and instead included language that focused solely on adults. In contrast, an FBI official stated that the Federal Guide is designed to allow each community to determine whether to discuss with high school students the option of fighting, and to set its own standards on how to discuss the Run, Hide, Fight model with school-age children—views which have been reported publicly. The FBI official also said the goal of the Federal Guide is not to develop a one-size-fits-all plan, but rather to have district officials, principals, teachers, parents, and local first responders decide what is best for their community. 
The FBI official stated that, though student involvement in the option of fighting is not included in the Federal Guide, the FBI does not take a position on whether or not school districts should teach children to consider the option of fighting. In addition, the official told us the FBI's position aligns with the Federal Guide and noted that adults should consider the option to fight and be trained accordingly. Given that the Federal Guide explicitly refers only to adults when discussing the fight portion of Run, Hide, Fight, conflicting views from federal agencies may create confusion for districts and schools in interpreting this aspect of the federal recommendations. Leading practices on interagency collaboration state that it is important to address the differences created by diverse organizational cultures to enable a cohesive working relationship and create the mutual trust required to enhance and sustain a collaborative effort. Education and FBI officials told us they meet regularly with other federal partners through a federal interagency policy sub-committee on active shooters, facilitated by White House staff; however, their collaboration through this mechanism—which is not exclusively focused on school emergencies—has not yielded a consistent federal message to the public about whether and how students should be involved in Run, Hide, Fight. In the absence of a well-coordinated strategy for school emergency management planning efforts, federal agencies have taken a piecemeal approach to these efforts, which contributes to the gaps we have identified. Education officials said that, especially since the issuance of the Federal Guide, federal agencies currently face challenges around coordination, resulting in efforts that have developed organically and incidentally, without a strategic focus. 
Specifically, with their limited resources, agencies determine their priorities and initiatives—and the resources devoted to them—on an individual agency basis; meanwhile, the emergency management and safety needs of schools are numerous and complex. Acknowledging the value of interagency collaborative efforts, these Education officials also said that such efforts help avoid duplicative and inconsistent efforts across agencies. While officials from FEMA and Justice did not identify specific challenges with federal agency coordination in this area, these agencies focus more generally on emergency planning and not specifically on the needs of school districts and school emergency management planning—the area in which Education identified issues. As efforts to develop the Federal Guide came to a close, Education officials told us that the agencies discussed the need to continue to coordinate federal school emergency preparedness efforts moving forward. According to these officials, the presidential plan that required development of the Federal Guide did not designate a lead agency going forward or give any agency direct authority or responsibility to convene an interagency working group or require the participation of other federal agencies. However, they said that the Department of Education Organization Act provides the agency the general authority to collaborate with other federal agencies to maximize the efficiency and effectiveness of its programs and, where warranted and agreed upon, to serve as the lead agency in such collaborations. Importantly, Education officials also stressed that the ability to successfully carry out such activities relies to a large degree on other federal agencies’ cooperation as well. 
Staff from the Office of Management and Budget (OMB)—the agency responsible for, among other things, communicating the President's directions to Executive Branch officials regarding specific government-wide actions—told us that while they may become involved in federal interagency efforts absent clear leadership, federal efforts around school emergency preparedness are best handled by agencies and monitored by OMB through, for example, review of administration policy. The absence of an interagency body to coordinate related federal efforts may hinder the ability of federal agencies to successfully address the complex emergency management needs of schools. Leading practices such as (1) identifying leadership for collaborative efforts; (2) defining and agreeing to common outcomes, and assigning accountability for these collaborative efforts; (3) identifying all relevant participants; and (4) identifying necessary resources have been shown to improve the likelihood of success for federal interagency efforts. Further, the Government Performance and Results Act of 1993 (GPRA), as updated by the GPRA Modernization Act of 2010, establishes a framework for a crosscutting and integrated approach by agencies to focus on results and improve government performance. This framework includes identifying how an agency is working with other agencies to achieve its performance goals, because well-coordinated strategies can reduce potentially duplicative, overlapping, and fragmented efforts. According to our survey of state educational agencies in the 50 states and the District of Columbia, 32 states reported requiring that districts have emergency operations plans, 34 states reported requiring that schools have plans, and 25 states reported requiring plans for both (see fig. 5). Additionally, many states also allowed districts and schools to determine the specific content of these plans. 
The states that reported they have requirements for districts and/or schools represent about 88 percent of K-12 students nationwide. Thus, even though not all states reported requiring plans, our district survey found an estimated 97 percent of districts nationwide had a plan, which can help schools plan for potential emergencies as noted by the Guide for Developing High-Quality School Emergency Operations Plans (Federal Guide). Even though most states reported requiring districts and/or individual schools to have plans, our state survey found that many do not set forth specific requirements on plan content, and that the degree to which states require plans to contain specific content varies widely. For example, as shown in figure 6, 29 states reported requiring schools to address lockdown procedures in their plans while 10 reported requiring school plans to address continuity of operations. Similarly, 25 states reported requiring district plans to address evacuation procedures while 9 reported requiring district plans to address continuity of operations. Our state survey also asked about requirements that district or individual school plans address the needs of specific populations of students, including those with disabilities, and found that fewer than half of states reported having such requirements; however, our district survey shows that most districts included these procedures in their plans. Specifically, 21 states reported requiring district or school plans to address the needs of individuals with disabilities or special needs. Separately, our district survey shows that an estimated 69 percent of districts nationwide reported having procedures in their district or school plans that support the access and needs of the whole school community, including these individuals. For example, in one of the elementary schools we visited, the school calls for a “buddy system” to help each special needs student evacuate during an emergency. 
The school's plan also notes that special equipment, such as lights or horns, might be required to alert students with certain sensory disabilities during emergencies. Similarly, according to our state survey, fewer than 10 states reported requiring either districts or schools to address the needs of individuals with limited English proficiency. Our district survey shows that an estimated 45 percent of districts had procedures in their district or school plans for communicating with parents or students who are limited English proficient. Another example of an area where states allowed districts and schools to determine the content of their plans is threats or hazards. According to our state survey, fewer than half of states reported requiring districts or individual schools to have plans that address certain specific threats or hazards, such as active shooter, infectious diseases, or food safety, but our district survey found that most districts have plans that do so. However, for fires and natural disasters, our state survey shows that about half or more of states reported requiring that plans address these specific threats and hazards (see table 2). According to our state survey, 32 states reported requiring that districts conduct emergency exercises of their plans, such as drills, while 40 reported requiring that individual schools conduct them. The states that reported having requirements for districts and/or schools to conduct exercises represent about 83 percent of K-12 students nationwide. An estimated 96 percent of districts or their schools conducted emergency drills during school years 2012-13, 2013-14, and/or 2014-15, according to our district survey. Our state survey found that district and school fire drills were most frequently required, and active shooter drills significantly less so (see fig. 7). 
While our survey did not ask why certain drills were required more than others, Education officials told us that as part of emergency management planning, schools need to assess the likelihood of active shooter incidents, which present a smaller risk than other emergencies. Based on our survey of 51 state educational agencies, nearly all states provided training, technical assistance, or guidance to districts to assist in developing or implementing emergency operations plans. In addition, nine states provided state funds to districts in one or more of fiscal years 2013 through 2015, and five of the nine provided funding in all three years. Officials in one of the states we visited said their state offers technical assistance and training to some districts and schools, and their state educational agency website has links to state and federal resources on planning for emergencies. In addition, two of the states we visited have state school safety centers that provide technical assistance and guidance to districts and schools (see sidebar). As part of their efforts to provide districts with support for developing or implementing plans, 47 state educational agencies collaborated with their state emergency management agency, according to our state survey. In addition, in the three states we visited, state officials discussed collaboration among state agencies on school emergency management planning. The state educational agency officials we met with in these three states said they work with other state agencies on school emergency management issues. For example, in one state we visited, the state educational agency worked with the state departments of public safety, and health and human services on a school safety and security task force that set forth recommendations to districts and schools on making schools safe without compromising educational goals. 
State education officials we met with also cited challenges they face when supporting district and school efforts to plan for emergencies. Officials in two of the three states we visited said limited resources, staff, and funding are challenges. More specifically, officials in one of these states said their office does not have sufficient staff or resources to provide emergency management planning assistance and training to schools on a wide scale. In another state, officials told us that limited funding and staff hinder the state’s ability to help districts and schools plan for emergencies. Over half of states reported requiring districts or individual schools to have plans, as noted above, and fewer than half of states reported requiring that either district or school plans be reviewed at least every 2 years, according to our state survey. Similarly, fewer than half also reported requiring that either districts or state educational agencies review district or school plans. For those states that did report having requirements to review plans, 24 states required that districts review their own plans, and 24 states required that districts review school plans. Further, an estimated 79 percent of districts that required their schools to have plans also required their schools to submit these plans for district review, according to our district survey. Most school districts involved a wide range of community members, particularly school personnel and first responders, when developing and updating their emergency operations plans, according to our nationally representative survey of school districts (see fig. 8). Our prior work has shown similar levels of involvement with one notable difference: engagement of school resource officers, who are sworn law enforcement officers working in a school setting, increased from 42 percent in 2007 to 89 percent in 2015. Our district survey also found that an estimated 92 percent of districts recently updated their plans. 
Further, we estimate that almost all of the districts that require schools to have plans also require schools to update and review those plans. School and district officials at the sites we visited also noted that school personnel and first responders were involved in their plan development. Officials in all five districts we visited in three states told us they had staff committed to emergency management planning. Further, officials at 9 of the 12 schools we visited also said they had teams responsible for such planning, many of which met regularly and included a variety of members. Officials from two schools added that their parent-teacher associations are supportive of emergency preparedness efforts and have provided funding for emergency supplies. These practices align with a recommendation in the Federal Guide, specifically, that school emergency management planning not be done in isolation. The Federal Guide also notes that such collaboration makes more resources available and helps ensure the seamless integration of all responders. Relatedly, we found that an estimated 68 percent of districts incorporated their district plans into the broader community's emergency management system. For example, officials from one district we visited said their school district is a part of the city's emergency operations plan and has responsibility for providing shelter during emergencies. As part of developing and updating an emergency operations plan, the Federal Guide recommends that districts and schools assess the risks posed to them by specific threats and hazards. Based on our survey data, we estimate that more than three-quarters of districts recently conducted such assessments of their vulnerabilities (see fig. 9). For example, officials from one district told us that they conduct an annual safety assessment of each school. In doing so, they assess physical access controls, such as fences and locks. 
Officials from another district said assessments led to security enhancements, such as adding fences to sports fields, panic buttons that connect office staff to emergency officials, and software to run instant background checks. Our district survey found that most school districts had emergency operations plans that address multiple threats and hazards, such as intruders, fires, active shooters, natural disasters, and bomb threats (see fig. 10). We also observed during visits to schools that a school's particular circumstances, such as location, affect the threats and hazards it faces. For example, officials from one school said their plan includes not only common threats and hazards, such as fires, but also those associated with facilities nearby, including an airport and chemical plant. According to our survey, districts generally had plans that address most of the emergency response procedures recommended in the Federal Guide, such as evacuation and shelter-in-place (see fig. 11). For example, we estimate that almost all districts had procedures in place for evacuation, lockdown, and communication and warning. To illustrate, officials from five schools we visited said they use an automated messaging system to notify parents of an emergency. In contrast, our district survey estimates that about half of districts specified how they would maintain continuous operations or recover after an incident. For example, officials we interviewed from one school said that while their plan does not comprehensively address how they would maintain continuous operations after an incident, it does specify some aspects: if the school needed to close for an extended period, lessons could continue via a web portal. It is not readily apparent why fewer districts included these types of procedures in their plans. Education's REMS TA Center offers several resources on these topics, such as an online course on developing continuity of operations procedures. 
The Federal Guide recommends that emergency operations plans provide for the access and functional needs of the whole school community, including persons with disabilities and people with limited English proficiency, among others. As previously noted, our district survey found that an estimated 69 percent of districts had plans with procedures supporting persons with disabilities. An additional 22 percent of districts also had such procedures outside of their plans. We also learned about such procedures in some of the schools we visited. For example, officials from one school said that they used specially colored markers on the walls to guide a visually impaired student toward the exit. Officials from another school told us that during fire drills, students who are highly reactive to loud noises, such as a fire alarm, are proactively given noise-reducing headphones. As noted earlier, our district survey estimated that 45 percent of districts had plans that address procedures for communicating with students or parents who are limited English proficient. An additional 26 percent of districts also reported having such procedures outside of their plans. Our visits to schools revealed specific examples. Officials from one school said teachers of students with limited English proficiency walk through each step of a drill to ensure these students understand. Regarding communication with parents who are limited English proficient, officials from one school said they have a contract with a language translation service to connect a school administrator, translator, and parent via conference call, when necessary. Our survey estimated that most school districts recently completed a variety of emergency exercises, such as drills and group discussions, and many did so regularly with first responders (see fig. 12). According to the survey, almost all districts conducted drills. 
Officials from four schools we visited said they explain drill procedures to students in an age-appropriate way, a practice recommended by the Federal Guide. For example, officials at one school we visited said that during a lockdown drill kindergarten teachers tell their students that a wild animal may be loose in the building. In contrast, our survey estimated that fewer districts completed functional and full-scale exercises, which require a significant amount of planning, time, and resources. For example, officials from one school district told us they participated in a city-wide functional exercise that involved various community partners, such as the public health department. They said the 8-hour session helped participants better understand their roles during an emergency, for example, the responsibilities of school principals. Districts that conducted drills, functional exercises, or full-scale exercises did so for specific threats or for certain procedures. For example, our survey found that almost all performed such exercises for fires and lockdowns (see fig. 13). This aligns with our state survey findings that many states reported requiring districts or schools to conduct such exercises. Our survey also found that fewer districts—an estimated 67 percent—conducted active shooter exercises. In the districts we visited, we heard about some reasons for this. Officials from two districts said these exercises can create anxiety within the school community, including among parents. Officials from one of these districts noted the difficulty of striking a balance between providing knowledge and inciting fear, particularly at schools with younger children. Based on our survey, we estimated that about half of districts practiced their emergency exercises annually with law enforcement and fire department officials (see fig. 14). 
Similar to the benefits cited in developing plans with community involvement, officials from two schools we visited told us that firefighters and police officers observe and provide feedback on their drills. Officials from one school cited the advantages of such interactions as strengthening community relationships as well as providing first responders with helpful information in advance of an emergency, such as a school's layout. However, our survey estimated that about a quarter of districts reported having never practiced with emergency medical services or emergency management officials, and about a third never practiced with public health officials. Following such exercises, the Federal Guide recommends that officials gather to evaluate how the process went, identify shortfalls, and document lessons learned. We found examples of this at the schools we visited. For example, officials from 7 of the 12 schools we visited said that they debrief after drills to determine what lessons could be learned. During our interviews with schools, we learned of such improvements. For example, officials at one school realized teachers could not lock their classroom doors without stepping into the hallway, potentially placing them in harm's way. Officials remedied the problem by placing a magnet over the door's locking mechanism, which can be quickly removed to lock the door in an emergency. Another school discovered that all teachers need two-way radios during drills for effective communication. Based on our survey, an estimated 59 percent of districts reported difficulty balancing emergency management planning with higher priorities, such as instructional time. The survey also estimated that about half of districts reported that these competing priorities made it difficult to coordinate with community partners and organizations. Relatedly, it also estimated that more than half of districts felt that they did not conduct enough training because of limited time. 
Our visits to states and schools revealed similar challenges. Officials from one state told us that district and school staff had inadequate time for emergency management planning. Similarly, officials from 6 of the 12 schools we visited reported difficulty finding sufficient time to plan for emergencies, train staff, or conduct drills, with several noting that such activities competed with other school priorities. Officials from one school suggested additional professional staff days were needed, but said that negotiating such days can be difficult. According to our survey, an estimated 49 percent of districts cited a lack of staff expertise and an estimated 42 percent of districts reported insufficient equipment as impediments to emergency management planning. For example, officials from one state we visited said that teachers are not trained in emergency management, such as on how to conduct tabletop exercises. In addition, officials from several districts and schools said obstacles to emergency preparedness can include schools' physical aspects. For example, officials from one district said that schools with portable classrooms cannot use their intercom system to announce emergency drills, but rather must connect to those classrooms using a phone line. Further, officials from two of the three states we visited and from Education said districts and schools have limited resources for emergency management planning. As mentioned previously, our state survey found that few states reported providing funding to help districts develop or implement their plans. Federal Education officials echoed a similar opinion, stating that in an environment of constrained resources, districts and schools tend to focus almost exclusively on response activities, as opposed to the other four preparedness areas (prevention, protection, mitigation, and recovery). They suggested that this could have serious implications for schools and districts. 
For example, they said some districts and schools do not conduct thorough assessments of their risks and vulnerabilities, and some school plans are not adequately customized because they are overly reliant on district-provided plan templates. Confronting the range of threats and hazards to the nation's 50 million public school students necessitates careful and comprehensive emergency management planning by school districts and schools. We were encouraged to note that nearly all districts reported having emergency operations plans and, as recommended in the 2013 Federal Guide for school emergency planning, involving a range of school personnel and community partners in developing and updating them, recognizing the critical importance of collaborating with stakeholders both within and outside the school community. However, a majority of districts confront competing priorities with limited resources, which could hamper emergency management planning efforts, thus reinforcing the value of state and federal support. Education and other federal agencies individually offer a breadth of resources that districts and schools can use in their emergency planning. Although individual agencies continue to work on a range of emergency preparedness issues, and, in some cases, have continued to collaborate with other agencies in doing so, current collaboration efforts are insufficient to comprehensively address the complex and unique needs of schools. For example, an existing federal interagency group on active shooters was not created to address the range of threats and hazards schools face, nor to be specific to schools' needs, which, given the presence of young children, can differ significantly from those of other institutions. 
Moreover, in the absence of a well-coordinated federal strategy for school emergency preparedness planning, federal agencies' piecemeal approaches to school emergency management planning contribute to the gaps we identified in timely, continued, and, most importantly, strategic coordination, and risk wasting limited federal resources on efforts that may be overlapping, duplicative, or fragmented. To help protect students entrusted to public schools from natural and man-made threats and hazards, it remains critical for federal agencies to address key considerations shown to improve the likelihood of success for interagency collaboration on a well-coordinated federal strategy. The Department of Education stated that it has the general authority to collaborate with other federal agencies to maximize the efficiency and effectiveness of its programs, and to serve as the lead agency in such collaborations where warranted and agreed upon. Leading collaboration practices include (1) identifying leadership for the effort; (2) defining and agreeing to common outcomes, and assigning accountability for these collaborative efforts; (3) identifying all relevant participants; and (4) identifying necessary resources. Absent agreement on a strategy consistent with these practices, federal agencies may, over time, lose momentum and undermine the progress that has already been made, and risk providing support that is less effective than it otherwise could be. We recommend that the Secretary of Education, using the department's general authority to collaborate with other federal agencies, convene its federal interagency partners to develop a strategic approach to interagency collaboration on school emergency preparedness. 
This group could include designees or delegates from the Secretaries of DHS, HHS, and the Attorney General, including representatives from relevant agency components, such as FEMA, TSA, and the FBI, and others as appropriate, and should incorporate leading federal interagency collaboration practices, for example, by: defining outcomes and assigning accountability, including all relevant participants, and identifying necessary resources. We provided a draft of this report to the Departments of Education (Education), Health and Human Services (HHS), Homeland Security (DHS), and Justice (Justice) for review and comment. Education provided written comments that are reproduced in appendix II. Education, DHS, and Justice also provided technical comments, which we incorporated as appropriate. HHS did not provide comments. In written comments, Education stated that it shares the view outlined in the report that improved federal coordination will better assist K-12 schools in preparing for emergencies, and noted that other federal agencies, including especially FEMA, play a significant role in school emergency preparedness. Additionally, Education cited the importance of involving other relevant agencies in obtaining agreement on the assignment of roles and responsibilities, including selecting a lead agency charged with primary responsibility for coordinating federal emergency preparedness assistance to K-12 schools. Given the roles of other agencies, Education encouraged us to modify the recommendation that was included in the draft that was provided to agencies for comment. Specifically, in that draft we recommended that Education convene and lead an interagency collaborative group on school emergency planning, consistent with leading practices. 
In light of Education’s response, which we agree is consistent with leading practices on federal interagency collaboration that, among other things, include identifying leadership for the collaborative mechanism and all relevant participants, we modified the recommendation and report accordingly. We believe that doing so will help increase the likelihood of achieving a well-coordinated federal strategy in which all relevant federal partners are identified, included, and invested—helping, ultimately, to reduce the risk of wasting limited federal resources on efforts that may be overlapping, duplicative, or fragmented. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretaries of Education, Health and Human Services, and Homeland Security, the Attorney General, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs should have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This report addressed the following questions: (1) how do federal agencies support school emergency management planning and to what extent do they coordinate their efforts; (2) to what extent do states require and support efforts to plan for school emergencies; and (3) what have school districts done to plan and prepare for emergencies and what challenges, if any, have they faced? 
In addressing these objectives, we conducted interviews with officials from the Departments of Education, Homeland Security, Health and Human Services, and Justice and with staff at the Office of Management and Budget; and reviewed relevant federal documents, such as the Guide for Developing High-Quality School Emergency Operations Plans. We also reviewed leading practices on interagency collaboration to assess the collaborative efforts of these agencies. In addition, we deployed three web-based surveys: one to state educational agencies, another to state administrative agencies, and a third to a stratified random sample of school districts. Finally, we conducted site visits during which we interviewed state, district, and school officials in three states. To better understand the role of states in how school districts and schools prepare for emergencies, we administered two web-based surveys—one to state educational agencies and a separate one to state administrative agencies—to all 50 states and the District of Columbia. We asked state educational agencies about their requirements of and recommendations for districts and schools regarding emergency management planning, among other things. We asked state administrative agencies about receipt and distribution of certain federal funds to districts or schools for emergency management planning activities. We administered these surveys from April through July 2015. For both surveys, all 51 state agencies responded, resulting in response rates of 100 percent. To better understand how districts and schools plan and prepare for emergencies, we also administered a third web-based survey. We obtained data from Education's National Center for Education Statistics, which maintains the Common Core of Data for public school districts, for the 2012-13 school year, which was the most recent data available. 
We originally selected a stratified random sample of 598 from a population of 16,284 school districts, with strata based on size and urban status, but ultimately excluded 25 districts from our original population and sample because they had closed, operated exclusively online, were located in a juvenile detention center, or had fewer than 5 students, and thus were not considered eligible for our survey. This resulted in a sample of 573 from the eligible population of 16,259 districts (see table 3). We administered this survey to districts from April through July 2015 and 403 districts, or 70 percent of our sample, responded to the survey. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we expressed our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Unless otherwise noted, all percentage estimates in this report have confidence intervals within plus or minus 7 percentage points. For other estimates, the confidence intervals are presented along with the estimates themselves. In the survey, we asked questions about the emergency operations plans of districts and their schools, such as about plan development and implementation, plan content, training and resources, and challenges to emergency management planning. The quality of both the state and district survey data can be affected by nonsampling error, which includes, for example, variations in how respondents interpret questions, respondents’ willingness to offer accurate responses, nonresponse error (failing to collect data on members of the sample or answers to individual questions from respondents), and data collection and processing errors. 
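The confidence-interval arithmetic described above can be sketched in a few lines. This is a minimal illustration using the normal approximation for a proportion from a simple random sample; the report's actual intervals reflect the stratified design and nonresponse-adjusted weights, which this formula does not capture, and the figures plugged in below (an estimated 97 percent of districts with a plan, 403 responding districts) are borrowed from the text purely for illustration.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Approximate 95 percent confidence interval for an estimated
    proportion, using the normal approximation for a simple random
    sample. Illustrative only: the report's estimates come from a
    stratified design with nonresponse-adjusted weights, which this
    simple formula does not account for."""
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

# Hypothetical use of figures from the text: 97 percent of districts
# with a plan, 403 responding districts.
low, high = proportion_ci(0.97, 403)
```

Under these simplifying assumptions the half-width works out to under 2 percentage points, comfortably inside the plus-or-minus 7 points the report cites as the outer bound for its estimates.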
To minimize such error, we took the following steps in developing the surveys and in collecting and analyzing the survey data. We pre-tested draft versions of the instruments with state educational agency officials in four states, state administrative agency officials in two states, and officials in four districts to check the clarity of the questions and the flow and layout of the survey. On the basis of the pretests, we made revisions to all three surveys. Further, using a web-based survey and allowing state and district officials to enter their responses into an electronic instrument created an automatic record for each state and district and eliminated the errors associated with a manual data entry process. To increase response rates, we sent e-mails and placed phone calls to recipients of all three surveys. We conducted a nonresponse bias analysis to assess the potential difference in answers between those school districts that participated in the survey and those that did not. We determined that components of the sampling strata and school district size were significantly associated with the propensity to respond. We adjusted the sampling weights for these characteristics, using standard weighting class adjustments, to compensate for possible nonresponse errors, and we treated the respondent analyses based on the nonresponse-adjusted weights as unbiased for the population of eligible school districts. In addition, the programs used to analyze the survey data were independently verified to ensure the accuracy of this work. To understand emergency management planning at the local level, we conducted site visits in three states from February to May 2015. The states we visited were Massachusetts, Texas, and Washington.
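The standard weighting-class nonresponse adjustment mentioned above can be sketched as follows. This is a minimal illustration of the general technique, not GAO's actual weighting program; the class name and weights are hypothetical. Within each weighting class, respondents' base weights are inflated so that respondents also represent the class's nonrespondents.

```python
def adjust_weights(units):
    """Weighting-class nonresponse adjustment.

    units: list of (base_weight, responded, class_id) tuples.
    Returns (adjusted_weight, class_id) for respondents only, where each
    respondent's weight is scaled by (class total weight) / (class
    respondent weight), so weight totals are preserved within each class.
    """
    totals, resp_totals = {}, {}
    for w, responded, c in units:
        totals[c] = totals.get(c, 0) + w
        if responded:
            resp_totals[c] = resp_totals.get(c, 0) + w
    return [
        (w * totals[c] / resp_totals[c], c)
        for w, responded, c in units if responded
    ]

# Hypothetical class "small": two sampled districts with base weight 40
# each; one responded, so its adjusted weight doubles to 80.
sample = [(40, True, "small"), (40, False, "small")]
print(adjust_weights(sample))  # -> [(80.0, 'small')]
```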
We selected states that represent geographic diversity and varied across characteristics, such as type of federal funding for emergency preparedness and whether there was a state school safety center, which provides training and guidance to enhance school safety and security. In each state, we interviewed state education officials, including staff from state school safety centers, if applicable. Within these states, we also interviewed officials from five school districts, which were selected to reflect a mix of urban, suburban, and rural areas. In each district, we interviewed officials from at least two schools of varying student ages. In one state, we also interviewed officials from a charter school that was independent of these five districts and received federal funding for school emergency preparedness. In total, we interviewed officials from 12 schools. Information obtained during these interviews is not generalizable, but provides insight into school emergency management planning at the state, district, and school level. We conducted this performance audit from October 2014 to March 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Kathryn Larin and Janet Mascia (Assistant Directors), Avani Locke (Analyst-in-Charge), Teresa Heger, Kathryn O’Dea Lamas, Sheila McCoy, Jean McSween, and James Rebbe made significant contributions to this report. 
Also contributing to this report were Susan Aschoff, Deborah Bland, Christopher Currie, Christopher Keisling, Ruben Montes de Oca, Mimi Nguyen, Erin O’Brien, William Reinsberg, Paul Schearf, Salvatore Sorbello, Sonya Vartivarian, and Sarah Veale.
The 2012 school shootings in Newtown, Connecticut and the 2013 tornado in Moore, Oklahoma stress the need for schools to prepare for emergencies to help protect the 50 million students in K-12 public schools. In 2007, GAO found that most districts developed emergency operations plans and GAO made recommendations to improve school emergency planning. In 2013, the President directed Education, DHS, HHS, and Justice to help schools with their plans. GAO was asked to report on these efforts. This report examines (1) how federal agencies support school emergency management planning and the extent to which they coordinate efforts; (2) the extent to which states require and support efforts to plan for school emergencies; and (3) what districts have done to plan and prepare for school emergencies and challenges faced. GAO interviewed federal officials and surveyed relevant state agencies in all 50 states and the District of Columbia. GAO also surveyed a generalizable random sample of 573 districts (70 percent response rate), and visited 5 districts and 12 schools in 3 states selected to reflect diverse locations and characteristics. The Departments of Education (Education), Health and Human Services (HHS), Homeland Security (DHS), and Justice (Justice) support K-12 schools in preparing for emergencies with various resources, including training, technical assistance, and funding, but their efforts are not strategically coordinated. Since jointly issuing a Guide for Developing High-Quality School Emergency Operations Plans in 2013 in response to a presidential plan, individual agencies have continued to work on a range of emergency preparedness initiatives, sometimes collaboratively; however, with the guide completed and no strategic coordination of agency efforts particular to schools, federal agencies have taken a piecemeal approach to their efforts. 
GAO found gaps in coordination that suggest recent efforts are insufficient: not all relevant agencies and officials are included in collaborative efforts or are aware of related efforts and resources, and agencies are offering different interpretations of the same federal guidance—all of which risks wasting limited federal resources on duplicative, overlapping, or fragmented efforts. Education officials said that although agencies discussed the need to continue coordinating following the guide, the presidential plan did not designate a lead agency going forward, nor give any agency direct authority or responsibility over an interagency effort, or require agency participation. However, these officials said Education has general authority to collaborate with other federal agencies to maximize the efficiency and effectiveness of its programs and to serve as the lead agency, where warranted and agreed upon. Leading practices on federal interagency collaboration include identifying leadership, relevant participants, and resources, and agreeing on outcomes. Absent a well-coordinated effort, agencies will continue to determine their priorities individually, which may hinder assistance to schools. In GAO's survey of 51 state educational agencies, 32 states reported that they require districts to have emergency operations plans, 34 reported they require schools to have plans, and almost all states reported providing training, technical assistance, or guidance to support districts in developing or implementing plans. GAO's survey also found that 32 states reported requiring districts to conduct emergency exercises, such as drills, and 40 states reported requiring individual schools to do so. In addition, many states reported allowing districts and schools to determine specific plan content, with fewer than half reporting that they required districts or states to review district or school plans. 
GAO's generalizable survey of school districts estimates that most districts updated and practiced their emergency operations plans with first responders, but struggled to balance emergency planning with other priorities. GAO's survey results also found that most districts had plans addressing multiple hazards and emergency procedures, such as evacuation. However, GAO estimates about half of districts included procedures on continuing operations or recovering after an incident. GAO also found most districts conducted emergency exercises, such as fire drills, and about half did so annually with police and fire department officials. However, an estimated 59 percent of districts had difficulty balancing emergency planning with higher priorities, such as classroom instruction time. GAO recommends that Education convene its federal interagency partners to develop a strategic approach to interagency collaboration on school emergency preparedness, consistent with leading practices. Education agreed that such improved federal coordination will better assist schools in preparing for emergencies.
To address its financial crisis and make its operations more efficient, in 1995 Amtrak undertook a major corporate restructuring, along with developing its Strategic Business Plan. The restructuring involved dividing Amtrak’s intercity passenger service operations into three distinct operating units, called strategic business units. The Northeast Corridor Unit is responsible for operations on the East Coast between Virginia and Vermont, including high-speed Metroliner service, which currently exists between Washington, D.C., and New York and is being extended to Boston. The West Coast Unit is responsible for services in California, Oregon, and Washington. This unit operates only one long-distance passenger train, and many of its services, especially in California, receive state financial support. Finally, the Intercity Unit provides the remainder of the nation’s intercity rail passenger service, including most of the long-distance, cross-country trains. Each strategic business unit develops its own plan and manages its own operations, although under the direction of the corporate parent in Washington, D.C., which also provides business services, such as legal support. To eliminate the need for a federal operating subsidy, Amtrak plans to increase revenues, hold down costs, and increase state contributions. Amtrak’s projected annual operating loss is to be reduced to $180 million in fiscal year 2001 in part by increasing revenues from $1.461 billion in fiscal year 1995 to $2.565 billion in fiscal year 2001. During this same period, expenses are planned to increase less than 20 percent, from $2.305 billion to $2.745 billion. Increasing the portion of costs borne by each state for the services they support financially is planned to increase state funding from $36 million in fiscal year 1995 to $132 million in fiscal year 2001. Figure 1 shows Amtrak’s financial projections for reducing operating losses, holding down cost increases, and increasing revenues. 
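The Plan's fiscal year 2001 projections are internally consistent, as a quick arithmetic check on the dollar figures stated above shows:

```python
# Arithmetic check of the Plan's fiscal year 2001 projections,
# using the figures stated in the text (in $ billions).
rev_2001 = 2.565                      # projected fiscal year 2001 revenues
exp_1995, exp_2001 = 2.305, 2.745     # fiscal year 1995 and 2001 expenses

loss_2001 = exp_2001 - rev_2001                 # projected operating loss
exp_growth = (exp_2001 - exp_1995) / exp_1995   # expense growth over the period

print(round(loss_2001 * 1000))     # operating loss in $ millions -> 180
print(round(exp_growth * 100, 1))  # expense growth in percent -> 19.1
```

The projected expenses less projected revenues yield exactly the $180 million operating loss cited for fiscal year 2001, and the expense growth of about 19.1 percent matches the "less than 20 percent" characterization.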
In each instance, the figure presents Amtrak’s financial projections based on what would occur if (1) Amtrak took no actions to address its financial condition (i.e., if it had not taken any actions in fiscal year 1995); (2) Amtrak took no further actions to improve its financial condition after fiscal year 1995; and (3) Amtrak successfully implements its Plan in fiscal year 1996 and beyond. Tables 1 through 3 show the specific amounts for each projection for each fiscal year. Amtrak’s ambitious plan to almost double revenues by 2001 includes several actions intended to attract more riders and increase the revenue generated by each passenger. Marketing efforts and fare increases are the bases for increasing passenger revenues. In fiscal year 1996, a $15 million advertising investment is projected to generate an additional $35 million in revenues, and fare increases are planned to generate almost $16 million in additional revenues. Other revenue-generating plans include increasing (1) the amount of service Amtrak provides under commuter rail service contracts (adding almost $9 million in fiscal year 1996), (2) reimbursable work for state departments of transportation and others (adding $16.5 million in fiscal year 1996), and (3) mail and express service (adding almost $10 million in fiscal year 1996). Amtrak plans to control expenses through productivity improvements, operating efficiencies, and selective restructuring of routes and services. For example, Amtrak plans to save $15 million in fiscal year 1996 by better matching equipment to service needs. It also plans to reduce costs by $4 million in fiscal year 1996 by improving the productivity of Amtrak’s reservations office. Improving price negotiations, specifications, and other aspects of the procurement of goods and services is projected to generate $56.9 million in savings in fiscal year 1996. 
The increase in state contributions is expected to occur as Amtrak shifts an increasing portion of the costs of state-sponsored rail services to the states. Currently, the states pay only a portion of the costs, but Amtrak is increasing the portion annually and plans to receive 100 percent of these costs from the states by fiscal year 1999. State contributions are planned to almost double from $36 million in fiscal year 1995 to $67.4 million in fiscal year 1996 as this transition begins. In fiscal year 1995, the first year under its Strategic Business Plan, Amtrak reduced its operating loss from the $1.0145 billion projected without the implementation of the Plan to $843.8 million, or by $171 million, which was about $3 million less than it had planned. The fiscal year 1995 savings resulted primarily from reducing and eliminating some routes and services ($54.2 million), cutting management positions ($30 million), and raising fares ($23.5 million); all of these amounts exceeded what was projected in the Plan. Retiring older equipment and negotiating productivity improvements with labor, which were planned to reduce the operating loss by $11 million and $26 million, respectively, were elements of the Plan that were not successfully implemented. Because the states elected to “buy back” some of the services Amtrak had planned to eliminate, the corporation was not able to achieve its planned cost savings from retiring some of its oldest equipment that is used on these routes. To date, Amtrak has made little progress in negotiating new productivity improvements, such as reducing the size of its train crews, with its labor unions. Amtrak currently is purchasing new equipment so that it can retire the older equipment and has proposed legislation for contracting out and for negotiating new labor agreements. Amtrak has compensated for the savings the originally anticipated actions were to have generated. 
Amtrak projects that the fiscal year 1995 actions will reduce the operating loss by $315 million beginning in fiscal year 1996 as the changes made during fiscal year 1995 are in place and accruing savings for a full year. The Plan included actions to reduce the fiscal year 1996 operating loss by an additional $61.6 million by increasing revenues $81 million while holding expenses to a net increase of only $19.8 million. Thus, the operating loss was to have been reduced from $1.0897 billion projected without the implementation of the Plan to $712.3 million, but on the basis of second quarter results, Amtrak revised its fiscal year 1996 projection in April 1996. The revised projected operating loss is $768.7 million. The $56.4 million shortfall from the projected operating loss shown in table 1 was primarily due to the severe winter weather in fiscal year 1996. The results for each strategic business unit vary. The West Coast and Northeast Corridor units both exceeded their fiscal year 1995 planned savings and are projected to meet or nearly meet their fiscal year 1996 targets. In contrast, the Intercity Unit did not meet its planned reduction in its fiscal year 1995 operating deficit by $41.7 million and is projected to end fiscal year 1996 $19.4 million overbudget. Thus, the Intercity Unit—responsible for the bulk of Amtrak’s services and projected improvements—has been substantially overbudget in both years. Table 4 shows the planned and actual revenues, expenses, and operating losses for each unit. The West Coast Unit, which operates commuter service along several routes as well as intercity service in and between California, Oregon, and Washington, had the smallest share of Amtrak’s services and costs and the smallest target for fiscal year 1995 savings ($13 million). 
The West Coast Unit is expanding its services in fiscal year 1996, which will increase its operating deficit slightly for fiscal year 1996 but result in future savings if the projected ridership and revenues materialize. The West Coast Unit is focusing on increasing the amount of commuter service it provides under contract and on aggressive marketing and pricing strategies to reduce its share of the operating loss. Although ahead of the Plan’s projections in the first half of fiscal year 1996, the West Coast Unit is now projecting a $2.9 million budget shortfall for year’s end because of lost revenues and increased costs caused by the severe winter weather. In fiscal year 1995, the Northeast Corridor Unit, which generated more than 55 percent of Amtrak’s passenger revenues while incurring 45 percent of Amtrak’s expenses, reduced its operating loss $2.6 million more than planned. After the first two quarters of fiscal year 1996, the Northeast Corridor’s operating loss is higher than planned, but specific actions, such as productivity improvements in the mechanical shop, are under way to largely compensate for this by the year’s end. However, the future success of the Northeast Corridor depends on the availability of capital to make the investments necessary to complete the electrification of the line and introduce high-speed (maximum speed of 150 mph) rail service between Boston and New York City by fiscal year 2000 and to rebuild the southern end of the corridor between Washington, D.C., and New York, which is in a serious state of disrepair. In contrast, the Intercity Unit—which is the heart of the nationwide intercity network responsible for more than 80 percent of Amtrak’s total route miles of service—was $41.7 million overbudget in fiscal year 1995. For fiscal year 1996, the Intercity Unit is not meeting its portion of the Plan’s goal and after two quarters is projecting a year-end operating loss $19.4 million over its budget. 
For fiscal year 1995, the Plan had assumed that more than a dozen of the Intercity Unit’s routes would be eliminated or subject to service reductions. But many of the proposed eliminations or reductions were “bought back” by the states in which the routes are operated, increasing the Intercity Unit’s operating loss in fiscal year 1995 because the states did not fund 100 percent of the costs of these routes and services. However, the “buybacks” only account for about $10 million of the Unit’s $41.7 million fiscal year 1995 budget overrun and are not a factor in the projected fiscal year 1996 shortfall because the Plan took them into account for this fiscal year. The Intercity Unit has experienced several unanticipated problems, including a high turnover of senior management, which has interrupted the Plan’s implementation several times; unexpected declines in ridership as a result of fare increases; and, in fiscal year 1996, the severe winter weather that reduced ridership and increased operating costs. Even though Amtrak as a whole reduced its annual operating loss by $171 million in the first year of the Strategic Business Plan, significant improvements are necessary in the remaining 5 years of the Plan for the corporation to meet its longer-term goal of operating self-sufficiency. Although Amtrak reduced its operating loss as planned in fiscal year 1995, it will not achieve its original goal for fiscal year 1996. Additionally, the future-year projections are based on several critical assumptions that may not be realized, including continued federal capital support; the introduction of high-speed rail service in the Northeast Corridor and concurrent revenue increases; improvements in productivity that require negotiations with Amtrak’s unions; and increased state operating support. To date, Amtrak has been relatively successful at reaching its financial targets by compensating for planned actions that have not materialized. 
For example, revenues from contracts to provide commuter rail service increased 70 percent more than planned, improved productivity for track maintenance generated 75 percent more savings than planned, and the management staff was reduced 12 percent more than planned. However, Amtrak has reduced its operating loss by about 29 percent through its fiscal year 1995 actions. Even if it were fully successful in implementing the fiscal year 1996 actions, it would reduce the operating loss by only an additional 8 percent—and second quarter revisions reduce this projection to less than 1 percent. If no further actions were taken, Amtrak would still have an operating loss in excess of $850 million in fiscal year 2001. Therefore, to meet its goal of eliminating the need for a federal operating subsidy by fiscal year 2002, Amtrak still needs to substantially increase revenues and significantly improve productivity after fiscal year 1996. The most important factor underpinning Amtrak’s program for achieving its longer-term goal of operating self-sufficiency is that capital funds must be available so that it can make the investments needed to provide attractive and competitive services and thereby significantly increase its revenues. Amtrak plans to invest $5.5 billion by fiscal year 2001 in its systems, equipment, and facilities—$3.2 billion of which is expected to come from federal capital grants. Amtrak’s fiscal year 1996 federal capital grant was $345 million, which slightly exceeded the amount anticipated in its Plan, but in future years Amtrak’s Plan anticipates significantly increased federal capital assistance to allow it to introduce high-speed rail service in the Boston-New York market, bring the Northeast Corridor as a whole to a state of good repair, and upgrade services on other routes. 
Most of the remaining capital needs are to be met through greater state contributions, increased passenger revenues, and the proceeds from the Northeast Corridor Unit’s planned Power Partnership. These additional moneys are critical to Amtrak as a whole because they are necessary to support the corporation’s planned capital investments in the Northeast Corridor and elsewhere. Revenues have increased 6 percent since fiscal year 1994, and state shares for state-sponsored services are projected to double by fiscal year 1996; but the largest revenue increases are projected to result from electrification and the introduction of high-speed rail service from Boston to New York in fiscal years 2000 and 2001. These improvements alone are projected to increase Amtrak’s total passenger revenues by 21 percent in fiscal year 2000. Though Amtrak did not receive the federal legislative authority that would have allowed it to become a utility broker, it is working state by state to obtain the authority to market electricity carried over its lines, and it continues to project revenues from the Power Partnership. Even though the Power Partnership is not yet in place, the Northeast Corridor Unit estimates that it or other commercial projects will generate $100 million in fiscal year 1997. The unit has not provided any information to support this projection or to demonstrate a backup plan for generating the $100 million, which is already committed to capital improvements. Top management at the Northeast Corridor and Intercity units is systematically monitoring whether specific actions in the Strategic Business Plan are implemented and meeting financial targets; the West Coast Unit’s senior management monitors whether it is operating within its budget and delegates to its department directors the monitoring of whether specific actions have been successfully taken. 
The Northeast Corridor Unit has established a database that includes the specific actions to be taken, such as introducing self-service ticketing and reducing management staffing, and the monthly projected and actual financial results of each action. This system, supplemented with status reports on individual actions, is used by senior management to monitor the unit’s progress, identify any problems early on, and develop ways to compensate for actions that are not generating the expected results. The Intercity Unit recently implemented a similar system for monitoring the implementation of specific actions, although financial results are not determined for each. The Intercity Unit’s monitoring system focuses on the actions developed to address the fiscal year 1996 projected shortfall (based on the second quarter results) and is also being used to document whether planned actions were actually taken in fiscal year 1995 and the first two quarters of fiscal year 1996. The West Coast Unit’s senior management uses monthly financial and performance reports to monitor whether it is within its budget; department directors are responsible for implementing the Plan’s actions within their jurisdiction and for reporting any problems associated with these actions. Each strategic business unit reports its results monthly to the corporate level, where the results are verified and consolidated into corporationwide monthly and quarterly reports. Amtrak’s success to date in implementing the Strategic Business Plan provides the Congress with a framework for determining the level of capital and operating funds Amtrak will receive. Amtrak’s future progress in implementing its Plan could be critical in determining the continued availability of intercity passenger rail service in the United States and the level of federal support necessary to maintain this service. We provided copies of a draft of this report to Amtrak for its review and comment. 
We met with Amtrak officials—including the Chief Financial Officer and the Vice President for Government and Public Affairs—who provided comments. Amtrak agreed with the information presented and the observations made throughout the report and considered it a well-prepared, balanced report. Technical comments provided by Amtrak have been incorporated where appropriate. To identify the actions Amtrak plans to take to improve its financial condition, review its progress to date towards achieving improvements and its longer-term goal of operating without a federal operating subsidy, and describe its monitoring of the Strategic Business Plan’s implementation, we obtained and analyzed data from Amtrak. These data included Amtrak’s Strategic Business Plan and the business plans for each unit; internal monitoring reports; and public monthly, quarterly, and annual reports. We also conducted interviews with Amtrak officials at the Northeast Corridor Unit in Philadelphia, Pennsylvania; the Intercity Unit in Chicago, Illinois; the West Coast Unit in Los Angeles and San Francisco, California; and corporate headquarters in Washington, D.C. We conducted our review from September 1995 through June 1996 in accordance with generally accepted government auditing standards. We did not independently verify the accuracy of the data provided by Amtrak. We are sending copies of this report to the Secretary of Transportation; the President, Amtrak; and interested congressional committees. Copies are available to others upon request and are available via the Internet. Major contributors to this report are listed in appendix I. Please contact me at (202) 512-2834 if you or your staff have any questions. John Rose
GAO reviewed Amtrak's Strategic Business Plan, focusing on: (1) specific planned actions and their expected results; (2) Amtrak's success in achieving financial improvements and its progress toward realizing its long-term goal of self-sufficiency; and (3) Amtrak's efforts to monitor the plan's implementation. GAO found that: (1) by fiscal year (FY) 2001, Amtrak plans to reduce its annual operating loss to about $180 million, which it will offset with funds from sources other than federal subsidies; (2) Amtrak plans to double its revenues and hold operating cost increases to less than 20 percent through FY 2001; (3) Amtrak's actions reduced its FY 1995 operating loss by $171 million, which was $3 million less than expected; (4) Amtrak expects its 1995 loss-reduction efforts to produce $315 million in annual savings beginning in FY 1996; (5) Amtrak planned additional FY 1996 actions to reduce its operating loss by another $61.6 million, but severe winter weather reduced revenues and increased operating costs; (6) if Amtrak reaches its revised FY 1996 goal of a $5.2-million reduction, it will have reduced its operating loss by 30 percent overall; (7) two Amtrak business units are meeting or nearly meeting their goals, but one is not; (8) Amtrak has made progress in its plan's first 18 months, but it is too early to determine whether Amtrak will reach its operating self-sufficiency goal, because success depends on further improvements and realizing certain funding, service, and productivity assumptions; (9) two units' top management monitors implementation of specific plan actions and financial goals, while the third unit focuses on whether it is operating within its budget and makes department directors responsible for implementing and monitoring individual plan actions; and (10) Amtrak prepares monthly and quarterly reports based on monthly unit reports.
In 1991, Congress passed the Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA), which added Section 28 to the Federal Transit Act. ISTEA required FTA to establish a state-managed safety and security oversight program for rail transit agencies. As a result, on December 27, 1995, FTA published a set of regulations, called Rail Fixed Guideway Systems; State Safety Oversight (subsequently referred to as FTA’s rule in this report), for improving the safety and security of rail transit agencies. State oversight agencies were required by the rule to approve transit agencies’ safety plans by January 1, 1997, and security plans by January 1, 1998. As part of the FTA rule, FTA officials stated they incorporated APTA’s 1991 Manual for the Development of Rail Transit System Safety Program Plans to describe steps the state oversight agencies should take in developing the program standards that transit agencies would have to meet. In 1995, at the time of the FTA rule’s publication, 5 of 19 states affected by the FTA rule had oversight programs in place for rail transit safety and security, and no oversight agency met all the requirements in the FTA rule. During the first few years of implementation, FTA worked with states to develop compliant programs that addressed FTA’s requirements. Ten years after FTA promulgated the initial rule, FTA published a revision to it in the Federal Register on April 29, 2005. The FTA rule stated that oversight agencies had to comply with the revised rule by May 1, 2006. The revisions address, in part, the needs of a growing oversight community and NTSB’s recommendations arising from transit accident investigations. 
For example, according to FTA, NTSB found that the initial rule did not include the requirement that oversight agencies verify transit agencies are following safe and secure operating procedures by formally documenting how transit agency employees were performing specific work functions in compliance with the transit agency’s rules and procedures—a process known as “proficiency and efficiency testing.” Thus, the revised rule specifies what the state oversight agency must require of rail transit systems regarding such verification, and incorporates into the regulation material previously incorporated by reference to the APTA manual. Finally, the revised rule included additional information on ensuring rail transit security and emergency preparedness. FTA relies on staff in its Office of Safety and Security to lead the State Safety Oversight program—and hired the current Program Manager in March 2006. This manager is also responsible for other safety duties in addition to the State Safety Oversight program. Additional FTA staff within the Office of Safety and Security assist with outreach to transit and oversight agencies and additional tasks. For example, FTA has devoted a Transit Safety Specialist to the program full time; a Training Manager, Data Analyst, and Safety Analyst are also available to assist on an as-needed basis. FTA regional personnel are not formally involved with the program’s day-to-day activities, though officials from several FTA Regional Offices help address specific compliance issues that occasionally arise at transit agencies. Also, staff in at least one FTA Regional Office have voluntarily taken an active role supporting transit agencies and oversight agencies in meeting the program’s requirements. 
In addition, regional staff help states with new transit agencies establish new oversight agencies, help new transit agencies create safety and security plans, and have helped resolve disputes between oversight and transit agencies as needed. However, after a transit system begins operations, the program is primarily managed from FTA’s headquarters office. FTA also relies on contractors to do many of the day-to-day activities ranging from developing and implementing FTA’s audit program of state oversight agencies to developing and providing training classes on system safety. FTA’s rule applies to all states with rail fixed guideway systems operating in their jurisdictions. The FTA rule defines a rail fixed guideway system as any light, heavy, or rapid rail system; monorail, inclined plane, funicular, trolley, or automated guideway that is not regulated by FRA and is included in FTA’s calculation of fixed guideway route miles or receives funding under FTA’s formula program for urbanized areas (49 U.S.C. 5336); or has submitted documentation to FTA indicating its intent to be included in FTA’s calculation of fixed guideway route miles to receive funding under FTA’s formula program for urbanized areas (49 U.S.C. 5336). Figure 1 shows examples of the types of rail systems that are included in the State Safety Oversight program. FTA’s rule states that rail systems that are regulated by FRA, such as commuter railroads, are not considered rail transit agencies and are therefore not subject to its rule. In addition, FRA has oversight authority over the safety of portions of rail transit systems that share track or rights-of-way with the general railroad system. Furthermore, the revised rule’s definition of “rail fixed guideway system” includes systems built entirely without FTA capital funds, but that intend to receive FTA formula funding. Examples of these systems include Houston’s METRORail system and the New Jersey Transit RiverLINE system. 
Rail transit operations that do not receive FTA formula funds are not subject to oversight through FTA’s program. Las Vegas’ monorail line does not receive FTA formula funds and therefore does not fall within the FTA program. However, some of the rail transit systems—including automated airport people-movers and sightseeing tramways—that are not subject to the FTA program may be subject to state-mandated oversight in certain states. FTA and FRA have different regulatory authority and this has implications for their ability to provide oversight. According to statute, FTA cannot regulate safety and security operations at transit agencies except for purposes of national defense or in cases of regional or national emergency. In addition, FTA does not have safety inspectors. FTA may, however, institute nonregulatory safety and security activities, including safety- and security-related training, research, and demonstration projects. In addition, FTA may promote safety and security through grant-making authority. Specifically, FTA may stipulate conditions of grants, such as certain safety and security statutory and regulatory requirements, and FTA may withhold funds for noncompliance with the conditions of a grant. In relation to the State Safety Oversight program, both the authorizing statute and the FTA rule state FTA may withhold urbanized area program funds from states that do not meet the requirements of the program. For example, FTA invoked this authority and withheld federal funding from two states that failed to meet initial deadlines specified in the FTA rule. FTA withheld approximately $95 million in federal funding from one state for its failure to designate a state safety oversight agency and approximately $2.3 million from another state for failure to meet the FTA rule’s implementation deadlines. FRA has broader jurisdiction over safety regulation than FTA. FRA oversees over 500 freight railroads and over 20 commuter railroads, in addition to Amtrak. 
According to agency officials, FRA can directly enforce safety statutes or regulations against railroads using a “toolkit” of consequences, which vary in severity and are used to compel rail carriers to comply with safety regulations. Most commonly, FRA will issue a civil penalty, or fine, against a railroad not in compliance with a particular regulation. Depending on the infraction, however, FRA can also issue an emergency order (the strongest response to noncompliance) or it can cite a defect (a minor deficiency that needs to be addressed but is not egregious enough to warrant a fine). FRA officials stated that the agency trains and maintains its own cadre of safety inspectors who are authorized to conduct safety inspections at any time, 24 hours per day and 7 days per week. In addition to these inspectors, FRA manages a program called the State Rail Safety Participation Program, which allows states to employ their own FRA-certified inspectors who can enforce FRA regulations. Under the Government Performance and Results Act of 1993 (GPRA), federal agencies should design programs with measurable goals that support the agency’s strategic goals. Congress enacted GPRA to shift agencies’ focus from simply monitoring activities undertaken to measuring the results of these activities. Each agency’s strategic plan is to include a mission statement, a set of outcome-related strategic goals, and a description of how the agency intends to achieve these goals. To measure progress toward the strategic goals, we have previously reported that the agency should also have a plan for collecting data to measure and evaluate program performance. Without measurable goals and evaluation, it is difficult to determine whether the program is accomplishing its intended purpose and whether the resources dedicated to the program efforts should be increased, used in other ways, or applied elsewhere. 
FTA designed the State Safety Oversight program as one in which FTA, other federal agencies such as DHS, states, and rail transit agencies collaborate to ensure the safety and security of rail transit systems. Under the program, FTA is responsible for developing the regulations and guidance governing the program, auditing state safety oversight agencies to ensure the regulations are enforced, and providing technical assistance and other information; FTA provides funding to oversight agencies in only limited instances under the program. State oversight agencies directly oversee the safety and security of rail transit systems by reviewing safety and security plans, performing audits, and investigating accidents. Rail transit agencies are responsible for developing safety and security plans, reporting incidents to the oversight agencies, and following all other regulations state oversight agencies set for them. In addition to FTA, federal agencies such as FRA, DHS’s Office of Grants and Training, and TSA also have regulatory or funding roles related to rail transit safety and security. FTA officials stated that they used a multi-agency system-safety approach in developing the State Safety Oversight program. Federal, state, and rail transit agencies collaborate to ensure the rail transit system is operated safely; each of these agencies has some monitoring responsibility, either of themselves or another entity. FTA oversees and administers the program. As the program administrator, FTA is responsible for developing the rules and guidance that state oversight agencies are to use to perform their oversight of rail transit agencies. FTA also is responsible for informing oversight and transit agencies of new program developments, facilitating and informing the transit and oversight agencies of available training through FTA or other organizations, facilitating information sharing among program participants, and providing technical assistance. 
One avenue FTA uses to provide these services is the annual meeting to which all program participants are invited. FTA also calls special meetings and communicates information to program participants via e-mail when applicable. (See fig. 2 showing roles and responsibilities of participants in the State Safety Oversight program.) FTA officials stated they emphasize that components of a risk-management approach to safety and security, such as hazard analysis and risk-mitigation procedures, are included in the program standard that each state oversight agency issues to the transit agencies they oversee. This is consistent with our position that agencies make risk-based decisions on where their assets can best be used, both in transportation security and safety. However, FTA recognizes that only parts of the State Safety Oversight program are risk-based. The parts of the program that are risk-based are the areas where it believes risk management is most applicable to safety and security. These areas are similar to those in which other transportation modes, such as aviation and pipelines, also use risk-based approaches. Areas that are not risk-based would include such things as requiring minimum standards for all transit agencies in the program, no matter their size or ridership. While FTA officials stated that FTA does not inspect transit agencies with regard to safety, it is responsible for ensuring that, through audits and reviews of oversight agency reports, state oversight agencies comply with the program requirements. For example, according to the FTA rule, when a state proposes to designate an oversight agency, FTA may review the proposal to ensure the designated agency has the authority to perform the required duties without any apparent conflicts. 
FTA has recommended in two instances that a state choose a different agency because the oversight agency that the state proposed appeared to be too closely affiliated with the transit agency and did not appear to be independent. In addition, FTA is responsible for reviewing the annual reports oversight agencies submit to (1) ensure they include all the required information (e.g., descriptions of program resources, and causes of accidents and collisions), and (2) look for industry-wide safety and security trends or problems. FTA also has authority, under the FTA rule, to request additional information from oversight agencies at any time. Furthermore, FTA is responsible for performing audits of oversight agencies to ensure they are complying with program requirements and guidance. FTA audits evaluate how well an oversight agency is meeting the requirements of the FTA rule, including whether or not the oversight agency is investigating accidents properly, if it is conducting its safety and security reviews properly, and if it is reporting to FTA all the information that is required. Finally, FTA does not provide funding to states for the operation of their oversight programs. However, states may use FTA Section 5309 (New Starts program) funds—normally used to pay for transit-related capital expenses—to defray the cost of setting up their oversight agency before a transit agency begins operations. Also, FTA officials stated this year that FTA used a portion of the funding originally designated for FTA audits to pay for one person from each oversight agency to attend training on the revisions to the FTA rule, which oversight agencies had to comply with by May 1, 2006. In the State Safety Oversight program, state oversight agencies are directly responsible for overseeing rail transit agencies. 
According to the FTA rule, states must designate an agency to perform this oversight function at the time FTA enters into a grant agreement for any New Starts project involving a new rail transit system, or before the transit agency applies for funding under FTA’s formula program for urbanized areas. States have designated several different types of agencies to serve as oversight agencies. Most frequently—in 17 cases—states have designated their departments of transportation to serve in this role, either due to their expertise on rail transportation, or because state officials believed they had no other agencies with transportation expertise. In three instances—California, Colorado, and Massachusetts—states have designated utilities commissions or regulators to oversee rail transit safety and security. Officials from these states stated that since these bodies already had regulatory and oversight authority over utilities in these states, it was a natural extension of their powers to add rail transit to the list of industries they oversee. In fact, the California Public Utilities Commission (CPUC) has been overseeing railroads and rail transit in that state since 1911. The commission has issued and enforces several “general orders” that rail transit agencies in California must follow or face fines and suspended service. Two states have designated emergency management or public safety departments to oversee their rail transit agencies. Officials in one state, Illinois, have designated two separate oversight agencies—both local transportation funding authorities—to oversee the two rail transit agencies operating in the state. In the Washington, D.C. (District of Columbia), region, the rail transit system runs between two states and the District of Columbia. These states and the District of Columbia established the Tri-State Oversight Committee as the designated oversight agency. 
Finally, one state, New York, has given its oversight authority to its Public Transportation Safety Board (PTSB). PTSB officials said they have authority similar to the public utilities commissions discussed above, but have no other mission than ensuring and overseeing transit safety in New York. See appendix I for further discussion of multi-state operations. Also, see appendix II for a table showing each oversight agency and the rail transit agencies they oversee. The individual authority each state oversight agency has over transit agencies varies widely. While FTA’s rule gives state oversight agencies authority to mandate certain rail safety and security practices as the oversight agencies see fit, it does not give the oversight agencies authority to take enforcement actions, such as fining rail transit agencies or shutting down their operations. However, we found five states where the oversight agencies have some enforcement authority over the rail transit agencies they oversee. In all cases, this was due to the regulatory authority states have granted their oversight agencies. For instance, state utilities commissions may have this authority written into their authorizing legislation. In other instances, states had given this authority to the oversight agency in state legislation. Officials from oversight agencies that have the authority to fine or otherwise punish rail transit agencies all stated that they rarely, if ever, use that authority, but each stated that they believed it gave their actions extra weight and forced transit agencies to acquiesce to the oversight agency more readily than they otherwise might. A majority of oversight agencies, 19 of the 24 with which we spoke, have no such punitive authority, though officials from some oversight agencies stated they may be able to withhold grants that their agencies provide to the transit agencies they oversee. 
Although officials from several of these agencies stated that they believe they would be more effective if they did have enforcement authority, under the current program this authority would be granted by individual states. While the states have designated a number of different types of agencies with varying authority to oversee transit agencies, FTA has a basic set of rules it requires each oversight agency to follow. In the program, oversight agencies are responsible for the following:

• Developing a program standard that outlines oversight and rail transit agency responsibilities. According to the FTA rule, the program standard “provides guidance to the regulated rail transit properties concerning processes and procedures they must have in place to be in compliance with the State Safety Oversight program.” FTA requirements for the program standard are procedural rather than technical. For example, the program standard must include, at a minimum, areas dealing with the oversight agency’s responsibilities, how the program standard will be modified, how the oversight agency will oversee the transit agency’s internal safety and security reviews, how the oversight agency will conduct the triennial audits, and requirements for the rail transit agency to report accidents. According to FTA, oversight agencies may choose to develop technical standards, such as requirements for the strength of track, crashworthiness of rail vehicles, or brightness of signals. In addition, the standard must contain sections describing how the oversight agency will investigate accidents, how the rail transit agency will develop a corrective action plan to address investigation and audit findings, and the minimum requirements in the agency’s separate safety and security plans. FTA mandates that the transit agency’s safety plan must include, among other requirements, a process for identifying, managing, and eliminating hazards. Similarly, FTA mandates that the transit agency’s security plan must include, among other requirements, a process for managing threats and vulnerabilities, and a method for conducting internal security reviews.

• Reviewing transit agencies’ safety and security plans and annual reports. FTA requires oversight agencies to review and approve these plans and reports of their safety and security activities to ensure they meet the program requirements.

• Conducting safety and security audits of rail transit agencies on at least a triennial basis. FTA requires oversight agency officials to audit the rail transit agencies’ implementation of their safety and security plans at least once every 3 years. We found one oversight agency that performed this audit on an annual basis. In addition, we found five others that perform the audit on a continuous basis, auditing the rail transit agency on a portion of their safety and security plans each year. FTA has approved both these alternative auditing schedules.

• Tracking findings from these audits to ensure they are addressed. FTA requires oversight agencies to establish a process for tracking and approving the disposition of recommendations from the triennial audits. Oversight agencies must also have a process for tracking and eliminating hazardous conditions that the transit agency reports to the oversight agency outside the audit process.

• Investigating accidents. FTA requires oversight agencies to investigate accidents on the rail system that meet a certain damage or severity threshold and develop a corrective action plan for the causes leading to the accident. Oversight agencies may hire a contractor or allow the transit agency to conduct the investigation on its behalf.

• Submitting an annual report to FTA. According to the FTA rule, oversight agencies must submit an annual report to FTA detailing their oversight activities, including results of accident investigations and the status of ongoing corrective actions. 
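The triennial and continuous audit cycles described above amount to a simple scheduling constraint: every element of a transit agency’s safety and security plans must be reviewed at least once every 3 years. The sketch below illustrates how a continuous schedule could satisfy that constraint; it is purely illustrative, and the plan elements listed are hypothetical examples, since neither FTA nor the oversight agencies prescribe any particular tooling.

```python
# Illustrative sketch only: the plan elements below are hypothetical, and no
# oversight agency is known to use code like this. The point is that a
# "continuous" schedule still covers the full plan over each 3-year cycle.

def continuous_audit_schedule(plan_elements, cycle_years=3):
    """Partition plan elements into annual audit portions so the whole
    plan is reviewed at least once every cycle_years years."""
    schedule = {year: [] for year in range(1, cycle_years + 1)}
    for i, element in enumerate(plan_elements):
        schedule[(i % cycle_years) + 1].append(element)
    return schedule

# Hypothetical plan elements for demonstration.
elements = [
    "hazard management", "employee training", "vehicle maintenance",
    "track inspection", "emergency preparedness", "internal security reviews",
]
schedule = continuous_audit_schedule(elements)
for year, portion in schedule.items():
    print(f"Year {year}: {portion}")
```

A continuous schedule like this trades one large triennial review for smaller annual portions; as the text notes, FTA has approved both approaches.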
Under the FTA rule, rail transit agencies are mainly responsible for meeting the program standards that oversight agencies set out for them. However, the FTA rule also lays out several specific requirements that oversight agencies must require transit agencies to follow, such as developing separate system safety and security plans, performing internal safety and security audits over a 3-year cycle, developing a hazard-management process, and reporting certain accidents to oversight agencies within 2 hours. FTA also requires that each oversight agency’s program standard include these requirements. The locations and types of transit agencies participating in the program are shown in figure 3. In addition to FTA, the state oversight agencies, and the rail transit agencies, other governmental agencies have some role in ensuring the safety and security of rail transit systems. One of these agencies is DHS’ TSA. The Aviation and Transportation Security Act (ATSA), passed by Congress in response to the September 11, 2001, terrorist attacks, gave TSA authority for security over all transportation modes, including authority to issue security regulations. While TSA’s most visible transportation security duties are its airport screening and aviation-related activities, TSA has taken steps to enhance rail transit security. For example, in May 2004, TSA issued security directives to rail transit agencies to ensure all agencies were implementing a consistent baseline of security. Also, TSA has hired 100 rail security inspectors, as authorized by Congress. While the exact responsibilities of the inspectors are still being determined, a TSA official stated that they will monitor and enforce compliance with the security directives by passenger rail agencies, as well as increase security awareness among rail transit agencies, riders, and others. 
The inspectors have begun outreach activities with rail transit systems aimed at enhancing security in rail and mass transit systems. TSA officials stated their responsibilities encompass the security of other rail systems, including freight rail, which is consistent with ATSA. In contrast to TSA’s enforcement role, the Office of Grants and Training, within DHS’ Preparedness Directorate, plays a role in ensuring rail transit security by supporting security initiatives. The Office of Grants and Training (formerly known as the Office of Domestic Preparedness) is the primary federal source of security funding for passenger rail systems, and is the principal component of DHS responsible for preparing the United States for acts of terrorism. In carrying out its mission to prevent, prepare for, and respond to acts of terrorism, the Office of Grants and Training provides training, funds for the purchase of equipment, support for the planning and execution of exercises, technical assistance, and other support to assist states, local jurisdictions (such as municipalities and transit agencies), and the private sector. The Office of Grants and Training has provided over $320 million to rail transit providers through the Urban Area Security Initiative and Transit Security Grant Program. In addition to FTA, another DOT agency, FRA, plays a role in ensuring transit agencies operate safely. In general, FRA exercises its jurisdiction over parts of a rail transit system that share track with the general railroad system, or places where a rail transit system and the general railroad system share a connection (e.g., a grade crossing). Rail transit systems that share track or grade crossings—or are subject to FRA regulations for other reasons—may apply to FRA for a waiver from these rules. 
According to FRA, if a rail transit vehicle were to operate on the same tracks and at the same time as general railroads, FRA would require the rail transit agency operating the vehicle to meet the safety standards of the general railroads. This would likely require rail transit agencies to use much sturdier (and more expensive) vehicles, and could be cost-prohibitive for the rail transit agencies. Therefore, 11 rail transit agencies have requested waivers from FRA based on the fact that their trains operate at different times than heavy freight trains, and will not be on the track at the same time, meaning the risk of collision is low or nonexistent. According to an FRA official, as of June 2006, FRA granted waivers to 10 of the 11 rail transit agencies that applied for them. After granting a waiver, FRA stays in contact with FTA and the relevant transit and oversight agencies to address any safety questions or problems that arise. NTSB also plays a role in enhancing and ensuring rail transit safety, though it has no formal role in FTA’s oversight program. NTSB has authority to investigate accidents involving passenger railroads, including rail transit agencies. Rail transit agencies must report to NTSB, within 2 hours, all accidents involving fatalities, multiple injuries, evacuations, or damage over certain monetary thresholds. NTSB officials stated they generally will investigate only the more serious accidents, such as those involving fatalities or injuries, or those involving recurring safety issues. Often, NTSB investigations of rail transit accidents will result in recommendations to federal agencies or rail transit agencies to eliminate the condition that led to the accident. NTSB has no power to enforce its recommendations, but NTSB states that, historically, agencies have implemented over 80 percent of its recommendations. NTSB also maintains expertise on transportation safety across all modes of transport and conducts studies on pressing issues. 
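The NTSB reporting rule described above is essentially a set of "any of these criteria" triggers. The sketch below restates those triggers as a simple predicate; it is illustrative only, and the dollar threshold and the two-injury cutoff are hypothetical placeholders, since the text does not give NTSB's actual monetary thresholds or its definition of "multiple injuries."

```python
# Illustrative sketch only: the criteria paraphrase the reporting triggers
# described in the text (fatalities, multiple injuries, evacuations, or
# damage over a monetary threshold). The specific values below are
# hypothetical placeholders, not NTSB's actual thresholds.

DAMAGE_THRESHOLD = 25_000  # hypothetical placeholder, in dollars

def must_report_to_ntsb(fatalities, injuries, evacuation, damage_dollars):
    """Return True if an accident meets any of the reporting criteria."""
    return (
        fatalities > 0
        or injuries >= 2          # placeholder cutoff for "multiple injuries"
        or evacuation
        or damage_dollars > DAMAGE_THRESHOLD
    )

print(must_report_to_ntsb(0, 0, False, 5_000))   # minor incident: not reportable
print(must_report_to_ntsb(0, 3, False, 0))       # multiple injuries: reportable
```

Because any single criterion triggers the 2-hour notification, a transit agency would apply such a check immediately after an incident rather than waiting to assess all criteria.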
Rail transit agencies and FTA both stated that they consult NTSB periodically when they have safety questions, in addition to reporting accidents to it. The majority of officials from transit and oversight agencies with whom we spoke agreed that the State Safety Oversight program improves safety and security in their organizations. These officials provided illustrations about how the program enhanced safety or security; however, they have limited statistical evidence that the oversight program improved safety or security. FTA has obtained a variety of information on the program from sources such as national transit data, annual reports from oversight agencies, and its own audits of the oversight agencies. FTA has used national transit data and oversight agencies’ annual reports to collate information on safety, including information about fatalities and the causes of incidents; FTA last issued a report summarizing this information in 2003. However, this data is not linked to any program goals or performance measures. FTA officials recognize the need for performance measures for its safety and security programs and are taking steps in 2006 to begin to address this need. Finally, although FTA expected to audit the oversight agencies every 3 years, it has not conducted these audits as frequently as it had planned (it has conducted eight since September 2001). However, program officials stated they are committed to getting “back on track” to meet the planned schedule. Ensuring that FTA devotes enough resources to conduct the planned audits, and develops and uses planned performance goals and measures to improve the program, will be important for future assessments of the program. Both transit agency and oversight agency officials state that FTA’s State Safety Oversight program is worthwhile and valuable because it helps them maintain and improve safety and security. 
Of the 37 transit agency officials with whom we spoke, 35 believe the program that oversees their safety and security is worthwhile. Several officials stated that it is important and beneficial to have an independent agency verify their safety and security progress. One transit agency official explained that the oversight agency helps transit officials to identify larger, or systemic, issues. In addition, the program helps safety officials exert extra influence on a transit agency’s board of directors or senior management to get safety or security improvements implemented faster. Furthermore, officials identified specific examples illustrating how oversight agencies helped improve safety or security. Officials from 15 transit agencies explained that the program helped modify equipment to improve safety and security. For example, one transit agency had problems with train operators failing to stop at red light signals. The oversight agency helped the safety department exert enough influence with the transit agency’s senior management to replace all signals with light-emitting diode (LED) signals that were brighter and more visible. Finally, transit agency officials believe that FTA’s program is an effective method for overseeing safety and security. Several officials said that having a state or local (rather than national) oversight agency facilitated ongoing safety and security improvements and consistent working relationships with the oversight staff. 
It also helps the transit agencies by providing an independent third party, since self-oversight is not, in the officials’ view, the best way to have an agency identify and resolve its safety and security issues. Furthermore, several officials commented that they believed the current system worked well, and that the program provides consistency and endows the state safety oversight agencies with enough authority to accomplish their tasks. Also, officials said that having the states carry out the program provides ongoing oversight in addition to formal audits, which helps maintain constant oversight of safety and security issues. Furthermore, some transit and oversight agency officials stated that, because they were subject to oversight, they believed they saw improved safety statistics for their rail system. For example, CPUC provided safety statistics showing an 87 percent drop in rail transit collisions at the San Francisco Municipal Railway (MUNI) from 1997, when the CPUC became its oversight agency, to 2005. Although FTA changed its definition of a reportable accident during this time period—making it impossible to determine exactly what impact external oversight had on MUNI safety—both MUNI and CPUC staff stated they were confident CPUC’s efforts had been a major factor in the reduction in accidents. A MUNI representative estimated that the reduction in accidents was more likely about 15 percent, but stated that CPUC oversight led MUNI to develop a comprehensive safety program, which helped reduce accidents and increase the agency’s focus on meeting safety goals. In another example, New York oversight officials stated that, in the late 1970s and early 1980s, fires were prevalent in New York City’s transit system. 
After the New York State Legislature created the PTSB in 1984 to oversee public transportation safety in New York, the PTSB tracked incident numbers, approached the transit agency and, according to oversight and transit agency officials, was able to develop and implement an action plan that heightened the awareness of (and ultimately improved) the situation. Since these efforts occurred several decades ago, the data that might support the officials' statements are not easily accessible today; however, FTA, PTSB, and New York City transit officials all cited this as an early success of state oversight of rail transit. APTA officials with whom we spoke stated that, although the State Safety Oversight program contains minimum requirements for safety and security, the previous industry-regulated approach encouraged industry officials to surpass minimum standards and continue striving for improved safety and security. However, transit officials with whom we spoke often discussed the benefits of a federal program. For example, a transit agency official explained that a benefit of FTA's rule is that it standardized rail transit safety and security across the country. In addition, officials from 17 transit agencies reported that their respective state safety oversight agencies imposed requirements beyond those in FTA's rule. For example, three state safety oversight agencies reported that they require transit agencies under their purview to have an "hours of service" type policy, which requires minimum time off duty for train operators to rest. In addition, several oversight agencies have established more stringent reporting and notification requirements than FTA requires.
For example, officials from two transit agencies reported that their oversight agencies require them to report accidents occurring in a rail yard, while two others stated that their oversight agencies require notification of any accident involving contact between vehicles, no matter how minor. One potential source of information about the State Safety Oversight program's impact on safety and security is the data that FTA collects through the annual reports it requires state oversight agencies to submit. The reports include information on many different issues, including program resources, accidents, fatalities, injuries, hazardous conditions, and any corrective actions taken resulting from audits or accident investigations. FTA officials stated that the agency used the oversight agency information, as well as national transit data, to publish its own annual reports from 1999 to 2003. FTA's reports included ridership data, fatality and injury data, and the results of accident investigations to identify common incident causes. Although these reports may have informed oversight agencies about what safety or security problems existed, the information was not tied to any program goals or performance measures. In addition, FTA has not issued a report since 2003. Although the reports provide data on transit safety, it is unclear how oversight agency officials use these data. For example, one state safety oversight official with whom we spoke recommended that FTA provide more extensive analysis of the accident data it receives from oversight agencies. He stated that analyses of such data could identify trends and help oversight agencies develop a more cooperative and collegial relationship with each other.
According to program officials, FTA has recognized the need for better information and performance measures for its safety and security programs; the agency has not published a report since 2003 because it has been looking for ways to improve the type of safety and security data it can collect, and how it can use the information to track program performance and progress toward as-yet-undefined goals. FTA's 2006 business plan for its Safety and Security Division includes a goal to continue developing and implementing a data-driven performance analysis and tracking system to help ensure management decisions are informed by data and focused on performance and accountability. As part of these efforts, FTA is working with a contractor to develop performance measures for the State Safety Oversight program. FTA officials stated that their contractor is working with oversight and transit agencies to identify measures that they use and find useful in tracking the safety and security of their systems. Although it may be difficult to identify such measures—many of the oversight agencies with whom we spoke do not have performance measures, either—this effort could allow FTA to more readily determine areas where the program is having a positive impact on transit safety and security, and areas where more focus is needed. Another source of information is the audits of the oversight agencies that FTA attempts to conduct every 3 years. Although the audits provide detailed information on specific oversight agencies, FTA has not brought together information from these audits to provide a picture of the safety and security of transit systems across the country. FTA tracks the deficiencies and areas of concern, and follows up with oversight agency staff to ensure that each state safety oversight agency resolves the suggested corrective actions.
Furthermore, FTA has not conducted the audits frequently enough to provide a current picture of transit system safety and security, or to identify some challenges that oversight and transit agency officials raised during our interviews with them. FTA has audited each state safety oversight agency that existed prior to 2004 at least once since the program began; two agencies were audited twice. According to the FTA contractor, the audit program was piloted in late 1998 through audits in three states with different legal authorities and a range of differently sized rail transit agencies. Regularly scheduled audits began in 1999. However, FTA largely discontinued the audit program after the September 11, 2001, terrorist attacks and acknowledged that the agency's priorities shifted in the wake of the attacks. FTA has audited 8 of the 24 existing oversight agencies since September 2001. However, during that time period, FTA also conducted nine security and safety reviews to evaluate whether new rail transit projects could enter operations safely and securely. In addition, the program had several staffing changes after 2001, causing some oversight and transit officials to state that the program did not seem to be a priority for FTA. According to FTA officials, including the Program Manager, who started in February 2006, FTA is not conducting audits in fiscal year 2006 so it can use the money and time to help states comply with the revised rule; FTA has planned a detailed outreach effort to this end, including a workshop for oversight agency officials to help ensure compliance. FTA plans to return to its triennial audit schedule in fiscal year 2007, with 10 audits scheduled. FTA plans to begin with the states that it has judged to have had the weakest program standards and procedures, based on their initial submissions under the new rule.
Despite the program's popularity with participants, FTA faces challenges in implementing the program's revised rule and continuing to manage the program. First, several oversight agency officials stated they are not confident they have adequate numbers of staff to effectively oversee rail transit system safety and security, and they are unsure whether the training currently available to them is sufficient. Also, we found the level of staffing and expertise of oversight agency staff varies widely across the country. A second challenge FTA faces in implementing the program is that many transit and oversight agency personnel are confused about how security issues in the program will be handled, and which agencies will be responsible for which actions, as TSA takes on a greater role in rail transit security. While a clear majority of both oversight and transit agency officials with whom we spoke endorsed the usefulness of the State Safety Oversight program, many of these same officials stated that they were unsure that they were adequately trained for their duties. Specifically, officials from 18 of the 24 oversight agencies with whom we spoke stated they believed additional training would help them provide more efficient and effective safety and security oversight. We found that the level of expertise of oversight agency staff varied widely across the country. For example, we found that 11 of the 24 oversight agencies we examined had oversight staff with no career or educational background in transit safety or security. Conversely, another 11 oversight agencies required their staff to have certain levels of experience or education. For example, while New York's PTSB requires its staff to have 5 years of experience in transit safety, the Massachusetts Department of Telecommunications and Energy requires its lead oversight staff person to have an engineering degree.
According to some oversight agency officials who had no previous transit safety or security background, they had to rely on the transit agency staff they were overseeing to teach them about transit operations, safety, and security. Therefore, it took them several years before they were confident that they knew enough about rail transit operations to provide effective oversight. These officials stated that if they left their positions, any new staff taking over for them would face a similar challenge. Most oversight agency staff believe they are doing a good job and are helping transit agencies operate more safely and securely by overseeing their operations, but several cite the lack of a training curriculum for oversight staff as a challenge to their effectiveness. Officials from some of the 18 agencies who stated additional training would be useful cited several examples of how additional training could benefit them. For example, officials from eight oversight agencies stated that the training they had received in transit operations, accident investigations, and other areas was beneficial, but they had not received any training on how to perform specific oversight functions. Thus, they were unsure how to carry out their agencies' primary oversight role. Officials at a majority of oversight agencies (15 of 24) stated that they felt the training that had been made available to them either by FTA, the Transportation Safety Institute (TSI), or the National Transit Institute had been adequate. However, officials from 17 of the 24 oversight agencies also stated that they were somewhat unsure which courses they should take to be effective in their oversight role.
For example, several oversight agency personnel stated that, while FTA officials have encouraged oversight agencies' staff to obtain certifications from TSI in transit safety and security and to take selected TSI courses, FTA officials have not developed or recommended a course specifically related to oversight. Furthermore, although FTA provides training to state oversight agency staff (either directly or through TSI), and encourages state oversight agencies to seek training opportunities, FTA does not pay for staff to travel to these courses. Also, oversight agencies must pay their own tuition and travel expenses for courses not provided by FTA or TSI. Officials from 10 of the 24 oversight agencies with whom we spoke cited a lack of funds as one reason they could not attend training they had hoped to take. Also, officials from all 24 oversight agencies stated that, if FTA provided some funding for them to travel to training or paid tuition for training they wanted to attend, it would allow the oversight agencies to spend their limited resources on direct oversight activities, such as staff overtime, travel expenses to visit transit agencies, or hiring contractors. Several oversight agency officials also cited the example of other DOT agencies that provide free training or pay for state staff to travel to attend training. For example, 30 states participate in FRA's State Safety Participation Program. These states have inspectors whom FRA has certified to enforce FRA safety regulations. FRA pays for the inspectors' initial and ongoing classroom training and for state staff's travel to this training. In addition, the federal agency regulating pipelines, PHMSA, authorizes state-employed inspectors to inspect pipelines in many states. To help defray their costs, PHMSA provides up to 50 percent of a state's expenses in carrying out its pipeline safety program.
PHMSA also recently paid for two inspectors from each state to attend training when it instituted a new inspection approach. Officials from both FRA and PHMSA stated that providing funding to states to train their employees helps federal agencies more effectively carry out their enforcement activities, easing the states’ burden of paying to enforce federal regulations. For the first time, FTA paid for oversight agencies’ personnel to travel to attend a special meeting in June 2006 in St. Louis, where FTA provided technical assistance and shared best practices in meeting the requirements of the revised rule. This instance could provide a model for future funding of training or training-related travel for oversight agency personnel. FTA officials noted that the agency has provided considerable training in transit safety and security through TSI and through the State Safety Oversight program annual meeting, which includes a discussion of best practices and exchanges of information between oversight agencies. However, FTA officials agree that they have not provided training specifically pertaining to oversight activities, or provided a recommended training curriculum to oversight agencies; officials stated that it would not be difficult to take these steps in the future. Also, FTA officials told us that they considered addressing the lack of consistency in oversight agency staff qualifications when they were revising the FTA rule in 2005. However, they stated they did not have the legal authority to direct states to require certain education, experience, or certifications for oversight agency staff. Also, these officials stated that FTA has not issued any guidance to states about what level of training is appropriate for oversight staff or what level of staffing is appropriate for an oversight agency. 
However, these officials noted that, despite the lack of formal guidance, FTA checks to ensure oversight agency personnel are adequately trained during its audits; in five instances, FTA has recommended that oversight agency staff take additional training. FTA officials also stated that FTA could issue informal guidance or recommendations to oversight agencies about the level of training their oversight staff should have. In addition to concerns about training, oversight agencies were unsure about whether they had sufficient numbers of staff to adequately oversee a transit agency's operations. Specifically, officials at 14 of the 24 oversight agencies with whom we spoke stated that more staff would help them do their jobs more effectively. We spoke with some oversight agency personnel who were highly dedicated to performing oversight, even though they said they had no assistance and their states had limited resources to allocate to the task. Some staff took it upon themselves to stay informed about a transit agency's operations by staying in regular contact with transit agency personnel, attending transit agency safety meetings, and making regular inspections of the system—even though these tasks were not required by their oversight agencies. However, officials from 11 oversight agencies told us they had devoted the equivalent of less than one person working half-time to oversight duties, and, in some cases, described the oversight part of their job as a "collateral duty." Personnel from some of these oversight agencies told us they simply did not have time to perform the kind of active oversight that involves attending transit agency meetings, making spot inspections, and staying in regular contact with transit agency personnel. While some of the transit agencies overseen in these instances are small, such as streetcar lines, some of the transit agencies with the highest ridership levels receive similarly limited oversight.
For example, one state that estimated it devotes 0.1 full-time equivalent (FTE) to oversight program functions is responsible for overseeing a major transit agency that averages nearly 200,000 daily passenger trips. This state supplements its staff time with the services of a contractor, mainly to perform the triennial audits of the transit agency. Also, one state that estimated devoting 0.5 FTE to oversight functions is responsible for overseeing five transit agencies (including two systems not yet in operation) in different cities. The oversight staff in this state reported that it was difficult to maintain active oversight when their responsibilities were so spread out. Furthermore, we found 13 oversight agencies that estimated dedicating less than one full-time equivalent staff member to the oversight task. This meant that the person (or persons) assigned to the oversight tasks had other duties in addition to oversight of a transit agency. Table 1 shows the amount of personnel that oversight agency representatives estimated their agencies dedicate to oversight responsibilities. (See app. II for information on estimated FTE and transit system information for each state safety oversight agency and related transit agency.) Although it is up to states to determine the resources allocated to this program, providing appropriate and continuing training and experience may increase the effectiveness of the limited staff states have to dedicate to this program. Another challenge facing the program is how the emergence of TSA and its rail inspectors might affect oversight of transit security. As discussed above, TSA now has full regulatory authority over transportation security across all modes, and TSA officials stated the agency has hired 100 rail inspectors, whose stated mission is, among other duties, to monitor and enforce compliance with the rail security directives TSA issued in May 2004.
However, of the officials at the 24 oversight agencies with whom we spoke, 20 stated they did not have a clear picture of who was responsible for overseeing transit security issues. Similarly, officials at 14 of the 37 transit agencies were also unsure of lines of responsibility regarding transit security oversight. Several state oversight agencies were particularly concerned that TSA's rail inspectors would duplicate their role in overseeing transit security. One oversight agency official stated that he felt transit agencies could begin to experience "audit fatigue" if both TSA and oversight agencies audited transit agencies' security practices. This official stated it would be more efficient if TSA and oversight agency staff audited transit agencies' security practices at the same time. Officials at several transit agencies were also confused about what standards they would be required to meet. For example, while oversight agencies are free to create their own standards, TSA issued rail security directives in May 2004—and could issue future directives or requirements that transit agencies must meet. Security officials at one transit agency specifically voiced concern that there could be conflicting security requirements and hoped that TSA would coordinate with oversight agencies' requirements and vice versa. TSA staff reported hearing similar comments from oversight agencies at a meeting TSA jointly hosted with FTA in May 2006. FTA program staff and TSA rail inspector staff both stated that they were committed to avoiding duplication in the program and communicating their respective roles to transit and oversight agency officials as soon as possible. However, as TSA is still developing its program, there is currently no formally defined role for TSA in the State Safety Oversight program, and TSA has not determined the roles and responsibilities for its rail inspectors.
While the FTA rule discusses requirements for a transit agency's security plan (e.g., a method for conducting internal security reviews and a process for determining security threats and vulnerabilities to a transit agency), and requires oversight agencies to include security performance in their audits of transit agencies, the FTA rule does not discuss TSA's specific role in the program; both TSA and FTA officials stated that exactly how TSA would participate in the program was still to be determined. However, TSA and FTA officials both stated they are committed to working together to ensure inspection activities are coordinated to foster consistency and minimize disruption to rail transit agency operations. Also, a TSA regional manager of rail inspectors with whom we spoke was unsure what the rail inspectors' role would be in relation to the program; in addition, the manager was unsure of the details of the program, including the identity of the relevant oversight agencies for the region. However, he stated that he was working to learn these details, and that he and his staff had been in touch with transit agency security officials to introduce themselves and gather information. Furthermore, in May 2006, after we had finished our interviews with transit and oversight agency staff, TSA staff stated that they were engaged in an ongoing dialog with FTA and oversight agencies to determine how the rail inspectors could best assist oversight agencies in reviewing transit agency security. TSA gave several examples of activities resulting from this coordination. For example, TSA reported that it had designated 26 rail inspectors as liaisons to state oversight agencies. Also, TSA officials stated that they are working with FTA and the oversight agency from California to pilot a coordination approach they could use with oversight agencies across the country.
Additionally, the director of the rail inspector program attended a meeting with representatives from almost all oversight agencies to discuss the concept of rail inspectors participating in oversight-agency audits of transit agencies. Finally, TSA is working to bring the 26 TSA rail inspectors involved in the State Safety Oversight program to the next annual meeting for the program's participants, so that the rail inspectors can learn more about the program and develop a "game plan" for how the inspectors will participate in the program. While FTA faces several challenges with the State Safety Oversight program, most participants in the program consider it a success at improving rail transit safety and security; nearly all participants can cite anecdotal evidence suggesting the program has had a positive impact. Although FTA collects data on safety and security from transit and oversight agencies, FTA has not developed a framework for demonstrating the impact the program has had on rail transit safety and security. As new leadership takes over administration of the program, this is an opportune time for FTA to determine how to assess the program's impact, including developing a way to measure that impact, setting performance goals, and developing and providing the means to meet a consistent schedule of auditing oversight agencies. Second, state oversight agencies have inconsistent training and qualifications for oversight staff across the United States, although it is unclear what impact, if any, this has had on rail transit safety and security. In other federally mandated transportation safety programs where states partner with the federal government to perform oversight duties, the federal government pays for a portion of the training expenses of oversight staff (or for oversight staff to travel to attend training) because having well-trained state officials makes the federally mandated oversight more efficient and effective.
Yet, in this program, FTA relies entirely on the states to determine how to fund their direct oversight of rail transit agencies and does not help defray their training or travel costs. While the program is generally thought of as bringing about positive change, these differing levels of training and qualifications are a cause for concern; it is conceivable that inadequately trained staff, especially staff with no experience overseeing transit agency safety, might miss safety problems they otherwise would notice, or may be unable to effectively evaluate a transit agency's proposals for resolving existing safety problems (though it is not clear whether either of these has occurred). One way to help ensure that oversight agency staff have at least a basic understanding of how to oversee rail transit operations would be to evaluate the amount of training oversight agency staff have obtained and, subsequently, develop a training curriculum that FTA could recommend to oversight agency personnel. Also, since many oversight agency personnel have little experience conducting rail transit safety oversight, including the basic tenets of conducting oversight in the training curriculum would help ensure that oversight agency staff did not have to rely on transit agency personnel for advice on conducting oversight. In addition, FTA could review oversight staff qualifications in more detail during its audits of oversight agencies to help ensure oversight staff are adequately trained to perform their duties. Lastly, many transit and oversight agency staff are concerned that the existence and deployment of TSA's rail inspectors will complicate security oversight. While TSA and FTA are undertaking several efforts to coordinate their activities and determine the roles and responsibilities of the rail inspectors, the official role of the rail inspectors in the State Safety Oversight program remains unclear.
Therefore, it is understandable why transit and oversight agency officials fear possible duplication of effort, especially for activities such as reviewing security plans and auditing transit security practices. Also, since TSA and DOT agencies have had some difficulties coordinating their actions in the past, such concern is warranted, though FTA and TSA statements promising to address this issue, and their recent activities in this direction, are a positive step. To ensure that FTA devotes an appropriate level of staff resources to the State Safety Oversight program, obtains sufficient information to evaluate the performance of the program, and supports state oversight agencies in adequately training their staff to perform their oversight duties, we recommend that the Secretary of Transportation take the following two actions: Direct the Administrator of FTA to take advantage of the opportunity presented by having new program leadership to set short- and long-term goals for the program, along with measures to ensure that the program is making progress toward meeting those goals; develop performance goals for the agency's other approaches for evaluating the impact of this program on safety and security; and develop a plan for maintaining FTA's stated schedule of auditing oversight agencies at least once every 3 years. Direct the Administrator of FTA to assess whether oversight agency personnel are receiving adequate amounts of training to perform their activities effectively and, based on the results of this assessment, work with oversight agencies to develop a strategy to address any deficiencies they identify. This strategy should include developing an appropriate training curriculum, including training on conducting oversight for oversight agency staff, and guidance to oversight agencies encouraging them to have their staff complete the training curriculum.
If FTA determines that it does not have the authority to issue such guidance, it should seek such statutory authority from Congress. Furthermore, to reduce confusion among transit and oversight agencies about the role of TSA in transit security oversight and reduce the potential duplication of effort that would inconvenience transit agencies, we recommend that the Secretary of Homeland Security direct the Assistant Secretary of TSA to: coordinate with the Administrator of FTA to clearly articulate to state oversight agencies and transit agencies the roles and responsibilities TSA develops for its rail inspectors; and work with state oversight agencies to coordinate their security audits whenever possible and include FTA in this communication to help ensure effective coordination with these agencies. Officials from FTA, TSA, and NTSB provided oral comments on a draft of this report through their respective liaisons. The agencies concurred with the report. Furthermore, FTA and TSA officials stated that they are working to determine how to implement the recommendations. Finally, TSA provided a technical comment, which we incorporated into the report. We are sending copies of this report to interested congressional committees, the Acting Secretary of Transportation, and the Secretary of Homeland Security. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Three rail fixed guideway transit systems in the United States—the Port Authority Transit Corporation (PATCO) in Philadelphia, MetroLink in St.
Louis, and the Washington Metropolitan Area Transit Authority (WMATA) in Washington, D.C.—cross state lines. Therefore, these systems require the collaboration of multiple oversight agencies to run the State Safety Oversight program, or the states can agree that one state will be responsible for oversight of the transit system. Each of these multi-state transit systems has a different structure to handle oversight responsibilities. The oversight programs in Philadelphia and St. Louis have both developed strategies to centralize decision making, streamline collaboration, and respond promptly to safety and security audit findings. In contrast, the Tri-State Oversight Committee (TOC), which serves as the oversight agency in the D.C. area, requires majority decision making by the agency's six committee members, including at least one member from each jurisdiction, and has experienced difficulty obtaining funding, responding to Federal Transit Administration (FTA) information requests, and ensuring audit findings are addressed. Each multi-state oversight program varies in structure, and each performs oversight responsibilities differently. In Philadelphia, oversight authority was consolidated in one of the two state agencies: the Pennsylvania Department of Transportation (PennDOT) agreed to allow the New Jersey Department of Transportation to serve as the sole oversight agency for the PATCO heavy rail transit line. MetroLink in St. Louis is subject to oversight from both Illinois (through the St. Clair County Transit District) and Missouri (through the Missouri Department of Transportation); the two organizations share oversight duties. Finally, TOC, which is composed of multiple representatives from each jurisdiction (including Virginia, Maryland, and Washington, D.C.), provides oversight for WMATA. The PATCO Speedline is a heavy rail line serving about 38,000 riders daily and linking Philadelphia to Lindenwold, New Jersey.
Most of PATCO's track is in New Jersey, and 9 of the 13 stations are in New Jersey. Until early 2001, safety and security oversight functions were shared by Pennsylvania and New Jersey through the Delaware River Port Authority (DRPA), a regional transportation and economic development agency serving both southeastern Pennsylvania and southern New Jersey. When DRPA implemented organizational and functional changes, DRPA and PATCO leadership no longer believed that DRPA could perform its role as the designated oversight agency without facing conflicting interests. As a result, Pennsylvania and New Jersey agreed to have the New Jersey Department of Transportation (NJDOT) replace DRPA as the oversight agency. This arrangement allows the oversight agency to take corrective action without seeking additional levels of approval from Pennsylvania, although the oversight agency does keep Pennsylvania informed of its activities. Also, Pennsylvania provides some support to NJDOT by having PennDOT perform oversight functions for the stations, passageways, and concourses located in Pennsylvania. PennDOT reports any deficiencies or hazardous conditions noted during the performance of oversight directly to New Jersey. Through meetings or other means of communication, follow-up actions may be performed by the Pennsylvania oversight agency in a supporting role or directly by New Jersey. New Jersey currently devotes two full-time and one part-time staff members to its oversight program; while these staff members must oversee several transit systems, including PATCO, their sole responsibilities are safety and security oversight functions. The St. Louis MetroLink is a light rail line between Lambert-St. Louis International Airport, in St. Louis, and Scott Air Force Base outside Shiloh, Illinois. Service was initiated in 1993, at which time the system included about 16 miles of track in Missouri and about 1.5 miles of track in Illinois.
Because so little track was in Illinois, Illinois officials agreed to allow the Missouri Department of Transportation to provide safety and security oversight for the entire system. However, in 2001, MetroLink opened a 17.4-mile extension in Illinois, which roughly equalized the amount of track in both states. Because of this, the states agreed that it was appropriate for Illinois to play a greater role in safety and security oversight, and Illinois designated the St. Clair County Transit District as its oversight agency. St. Clair is one of the few non-state-level agencies to be an oversight agency. The involvement of two separate oversight agencies could create challenges to effective implementation, but the agencies have taken steps to ensure close coordination. First, the Illinois and Missouri oversight agencies have agreed to use only one uniform safety and security standard across the entire MetroLink system. According to area officials, this arrangement creates consistency throughout the system and allows both agencies to perform their oversight functions in a consistent manner. In addition, the agencies use a single contractor who is responsible for the triennial audit. All other work is performed by the Illinois and Missouri oversight agencies. Finally, staff from the two oversight agencies coordinate very closely, and each agency has centralized leadership. Specifically, there is one employee in Missouri who devotes 90 percent of his time to safety and security oversight activities. Illinois has several employees who devote smaller percentages of their individual time to the program, but the Managing Director is primarily responsible for coordinating with Missouri. MetroLink, in turn, indicated that responding to state safety oversight directives is a priority, and the agency works quickly to implement changes. WMATA operates a heavy rail system within Washington, D.C. (the District of Columbia), Maryland, and Virginia.
The states and the District of Columbia decided to carry out oversight responsibilities through a collaborative organization, the TOC. TOC is composed of six representatives—two each from Maryland, Virginia, and the District of Columbia. All of the representatives have other primary duties, and their activities on TOC are collateral to these other daily duties, as is the case with staff at several other oversight agencies. TOC does not have any dedicated staff, and TOC members have limited rail operational experience. To gain access to additional experience and expertise in rail oversight, TOC contracts with a consultant to provide technical knowledge, perform required audits of WMATA, and ensure that audit recommendations are completed. In addition, TOC funding comes from, and must be approved by, each of the jurisdictions every year. The Washington Council of Governments processes TOC funds and handles its contracting procedures. These issues result in a lengthy process for TOC to receive its yearly funding and process its expenses. The State Safety Oversight programs in Philadelphia and St. Louis have attempted to streamline decision making, while TOC has a more collaborative process. Philadelphia and St. Louis have both developed strategies to centralize decision making and streamline collaboration, albeit through different structures. Because Pennsylvania granted New Jersey the authority to act as the oversight agency for all of PATCO’s territory, PATCO only has to interact with one oversight agency’s staff. New Jersey also has in-house staff dedicated to the State Safety Oversight program, which helps to ensure continuity, facilitates communication, and provides PATCO with one set of contacts to work with on the implementation of any new safety or security processes. Although St.
Louis has two agencies providing safety oversight, both oversight agencies have made it a priority to ensure that they are providing consistent information to the transit agency, and they are coordinating activities so MetroLink is not burdened by multiple contacts about the same issue. To do this, the Missouri and Illinois representatives stay in close contact with each other. Both oversight agencies stated they have in-house staff dedicated to safety and security oversight, and the agencies have very good working relationships. Oversight agency staff acknowledged that St. Louis could face challenges in the future if staff turned over in either agency and new employees did not establish a similar working relationship. In addition, officials indicated that disagreements between oversight agency staff over safety or security standards, or over how to enforce the existing standards, would be highly problematic. However, officials in the Illinois and Missouri oversight agencies, as well as at MetroLink, thought that the current arrangements have produced one set of standards, good communication, and effective coordination. Both MetroLink and oversight agency staff in St. Louis credited each other with creating an environment where this system of having multiple oversight agencies could work well. In contrast, TOC has implemented a less streamlined process for making decisions, which, according to FTA and TOC officials, may have contributed to the difficulties it has had in responding to FTA information requests. On June 15, 2005, FTA notified TOC that it would perform TOC’s audit in late July 2005. FTA requested information prior to the audit to facilitate the time it spent on-site. TOC did not submit the requested State Safety Oversight program materials despite several FTA requests and an extension by FTA to move the audit to a later date.
At the end of August, FTA initiated its audit even though it had not received requested information, but was not able to complete the audit until the end of September, when it received all requested materials. FTA’s Final Audit Report to TOC cited 10 areas for improvement and provided TOC 60 days to resolve these issues. According to FTA, TOC resolved one issue within the time period. FTA held a follow-up review with TOC in mid-March to check on the status of the remaining areas for improvement. As of June 2006, FTA was still evaluating how many of the audit findings remained open, although FTA stated that TOC had created a detailed set of internal operating procedures to address many of FTA’s findings and concerns. In addition, TOC representatives stated that some of the areas for improvement FTA found were complicated issues, such as reviewing WMATA’s accident investigation procedures and approving modifications, and could not be addressed within the 60 days FTA initially allowed. TOC staff emphasized that, although WMATA was sometimes slow to respond to TOC audit recommendations or information requests, they were pleased with their relationship with WMATA and that WMATA was responsive to TOC. Similarly, FTA officials stressed that they recognized and appreciated the effort TOC had undertaken in addressing FTA’s findings. TOC staff credited WMATA with helping TOC develop a matrix to track outstanding recommendations and agreeing to meet via conference call on at least a bi-weekly basis to ensure the issues are addressed. Also, TOC members stated that part of the reason they were slow to respond to FTA’s initial requests was that TOC had spent all its allocated funds for the year and, consequently, they had to temporarily stop working with the consultant who had conducted its audits of WMATA and maintained their files.
According to TOC officials, since the process for acquiring additional funding would require approval from all three jurisdictions represented on TOC, it was not feasible to obtain additional funding quickly. In addition, TOC cannot take any action without a majority of its members, and at least one member from each jurisdiction, approving the action. Reaching such majority agreements can be time consuming since all members of TOC have other primary responsibilities. This is a particular concern when quick decisions are necessary, such as responding to FTA’s audit recommendations. TOC officials cited several challenges in accomplishing their mission, including lack of a dedicated and permanent funding source, the lengthy process required to obtain approval on planning and implementation of corrective actions, and limited staff time. They also stated that they believed TOC and WMATA receive more scrutiny than other transit and oversight agencies, due to their location in Washington, D.C., and proximity to FTA’s headquarters staff. To address these challenges, the chair of TOC stated that she planned to spend additional time overseeing WMATA and hoped to find ways to streamline the administrative and funding processes that TOC must navigate. Hiring a full-time administrator, or designating a TOC member to serve in a full-time capacity, could help solve some of these issues. However, funding this position could be a challenge, and the administrator would need to have decision-making authority to be effective and act quickly. To provide Congress with a better understanding of how the Federal Transit Administration (FTA) oversees safety and security in rail transit systems and what is known about the impact of the State Safety Oversight program on rail safety and security, we met with FTA management and consultants to discuss the history, mission, and design of the oversight program.
In addition, we discussed system safety and risk management approaches used by FTA, but we did not independently verify that oversight agencies use these approaches. We met with officials from other federal agencies such as the Department of Homeland Security (DHS), the Transportation Security Administration (TSA), and the National Transportation Safety Board (NTSB); we also met with the American Public Transportation Association (APTA), a transit industry association. We met with these organizations to determine how oversight responsibilities are shared and coordinated, and the extent to which duplication of safety and security guidance existed among these agencies. We also spoke with a TSA Local Area Supervisor. To compare other transportation safety and security approaches to the oversight program, we interviewed the Federal Railroad Administration (FRA), APTA, and safety officials from Canadian transit agencies. In addition to these interviews, we also reviewed key documents, including rules, regulations, procedures, and guidance of the State Safety Oversight program; the triennial audits FTA performs on oversight agencies; documents tracking the performance of corrective action items; and memorandums of agreement between federal agencies to facilitate safety oversight coordination. At the state level, we reviewed annual reports that the oversight agencies provide to FTA. When states were willing to share these documents, we reviewed audits performed by the oversight agencies on transit agencies (40 percent of the states provided these documents) and authorizing legislation, or executive action, creating the state safety oversight agency (more than 80 percent provided the legislation or executive action). To compare the State Safety Oversight program with other transportation safety approaches, we reviewed our past work on pipeline, aviation, motor carrier, and highway safety.
To further our understanding of the design and impact of the program and also to identify challenges facing the program, we conducted semi-structured interviews with oversight and transit agencies. To determine the universe of oversight agencies and rail transit agencies under the State Safety Oversight program, we requested a list from FTA. We compared FTA’s list of transit systems to information published by APTA regarding rail systems currently in operation, as well as those that were under development. In two cases, APTA listed a transit agency that had initiated service as “proposed,” and we were able to resolve this discrepancy by comparing it to the FTA list and checking the agency’s website, which showed that service had been initiated. We contacted all transit and oversight agencies participating in the program that were in operation as of October 2005. This included a total of 25 oversight agencies and 42 rail transit systems. Twenty-four of the twenty-five oversight agencies, and 37 of the 42 rail transit systems, agreed to participate in these interviews. The New Orleans Regional Transit Authority (a transit agency) and Louisiana Department of Transportation (an oversight agency) requested that we exclude them from our review due to the difficulties posed by recovering from Hurricane Katrina, and we agreed to the request. Four additional transit agencies—the Jacksonville Transportation Authority, Chattanooga Area Regional Transportation Authority, Metro-Dade Transit Agency, and Hudson-Bergen Light Rail line—did not participate in our interviews.
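The participation figures above reconcile arithmetically; the following is a quick check using only the counts stated in this appendix:

```python
# Reconciling the interview universe with the participation figures
# stated above (all counts are taken from the appendix text).
oversight_universe = 25   # oversight agencies in operation, October 2005
transit_universe = 42     # rail transit systems in operation

oversight_excluded = 1    # Louisiana DOT (Hurricane Katrina recovery)
transit_excluded = 1 + 4  # New Orleans RTA plus four other non-participants

oversight_interviewed = oversight_universe - oversight_excluded  # 24
transit_interviewed = transit_universe - transit_excluded        # 37
```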
The semi-structured interview guide included questions concerning issues that could create challenges for the program, such as an estimate of the number of full-time equivalent (FTE) employees dedicated to the program, availability of FTA or other federally sponsored safety training to oversight agency employees, state funding schemes to support the program, the workload associated with audit responsibilities, the role of outside contractors in conducting the triennial reviews, employee turnover, and frequency of communication between the transit agencies and federal security agencies. The information collected from our semi-structured interviews with the oversight and transit agencies may be subject to errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, the sources of information available to interviewees, or how the data were entered into a database or analyzed can introduce unwanted variability in the results obtained in these interviews. However, we took steps in the development of the interview questions, the data collection, and the data analysis to minimize these types of errors. For example, social science survey specialists developed the questions used in the interviews in collaboration with our own subject matter experts. Then, the questions were pretested to ensure that they were relevant, clearly stated, and easy to comprehend. When the data were analyzed, a second independent analyst checked all computer programs. Since the interviews were conducted using an electronic interviewing system, our interviewers entered answers obtained from officials directly into the electronic interview instrument. This eliminated the need to have the data keyed into a database by a third party, thus removing an additional source of error. We also conducted several site visits to further our understanding of the challenges facing the program.
We visited 17 transit and 8 oversight agencies in both large and small cities, as well as in states with several rail transit agencies and in states with only one; we chose this variety to observe a cross-section of transit agencies and the interactions the transit agencies had with their oversight agencies. Complete lists of the transit and oversight agencies we visited are in Table 3 and Table 4, respectively. To determine how the program functions in regions where transit systems cross state boundaries, we visited three systems that crossed state boundaries. To identify how the program may be incorporated into new transit systems, we visited two systems that are in the design or construction phase, and one oversight agency that will eventually oversee a transit agency yet to begin service. We confirmed the accuracy of information presented in the report about state oversight and transit agencies by asking the agencies to confirm text we sent to them prior to the publication of the report. We conducted our work from August 2005 through June 2006 in accordance with generally accepted government auditing standards. In addition to the contact named above, Catherine Colwell, Assistant Director; Ashley Alley; Colin Fallon; Michele Fejfar; Joah Ianotta; Stuart Kaufman; Joshua Ormond; Tina Paek; Stephanie Purcell; and Raymond Sendejas made key contributions to this report.
The U.S. rail transit system is a vital component of the nation's transportation infrastructure. Safety and security oversight of rail transit is the responsibility of state-designated oversight agencies following Federal Transit Administration (FTA) requirements. In this report, GAO addressed: (1) how the State Safety Oversight program is designed; (2) what is known about the program's impact; and (3) challenges facing the program. We also provide information about oversight of transit systems that cross state boundaries. To do our work, we surveyed state oversight agencies and transit agencies covered by FTA's program. FTA designed the State Safety Oversight program as one in which FTA, other federal agencies, states, and rail transit agencies collaborate to ensure the safety and security of rail transit systems. FTA requires states to designate an agency to oversee the safety and security of rail transit agencies that receive federal funding. Oversight agencies are responsible for developing a program standard that transit agencies must meet and reviewing the performance of the transit agencies against that standard. While oversight agencies are to include security reviews as part of their responsibilities, TSA also has security oversight authority over transit agencies. Officials from 23 of the 24 oversight agencies and 35 of the 37 transit agencies with whom we spoke found the program worthwhile. Several transit agencies cited improvements through the oversight program, such as reductions in derailments, fires, and collisions. While there is ample anecdotal evidence suggesting the benefits of the program, FTA has not definitively shown the program's benefits and has not developed performance goals for the program that would allow it to track performance as required by Congress.
Also, because FTA was reevaluating the program after the September 11, 2001, terrorist attacks, FTA did not keep to its stated 3-year schedule for auditing state oversight agencies, resulting in a lack of information to track the program's trends. FTA officials recognize it will be difficult to develop performance measures and goals to help determine the program's impact, especially since fatalities and incidents involving rail transit are already low. However, FTA has assigned this task to a contractor and has stated that the program's new leadership will make auditing oversight agencies a top priority. FTA faces some challenges in managing and implementing the program. First, expertise varies across oversight agencies. Specifically, officials from 16 of 24 oversight agencies raised concerns about not having enough qualified staff. Officials from transit and oversight agencies with whom we spoke stated that oversight and technical training would help address this variation. Second, transit and oversight agencies are confused about what role oversight agencies are to play in overseeing rail security, since TSA has hired rail inspectors to perform a potentially similar function, which could result in duplication of effort.
The compensation program pays monthly benefits to veterans who have service-connected disabilities (injuries or diseases incurred or aggravated while on active military duty). The pension program pays monthly benefits based on financial need to wartime veterans who have low incomes and are permanently and totally disabled for reasons not service-connected. Disability compensation benefits are graduated in 10 percent increments based on the degree of disability from 0 percent to 100 percent. Eligibility and priority for other VA benefits and services such as health care and vocational rehabilitation are affected by these VA disability ratings. Basic monthly payments range from $103 for 10 percent disability to $2,163 for 100 percent disability. Generally, veterans do not receive compensation for disabilities rated at 0 percent. About 65 percent of veterans receiving disability compensation have disabilities rated at 30 percent or lower; about 8 percent are 100 percent disabled. The most common impairments for veterans who began receiving compensation in fiscal year 2000 were skeletal conditions, tinnitus, auditory acuity impairment rated at 0 percent, arthritis due to trauma, scars, and post-traumatic stress disorder. Veterans may submit claims to any one of VBA’s 57 regional offices. To develop veterans’ claims, veterans service representatives at the regional offices obtain the necessary information to evaluate the claims. This includes veterans’ military service records; medical examinations and treatment records from VA medical facilities; and treatment records from private providers. Once claims are developed, rating veterans service representatives (hereafter referred to as rating specialists) evaluate the claimed disabilities and assign ratings based on degree of disability. Veterans with multiple disabilities receive a single, composite rating.
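The report does not describe how the single composite rating is computed. For illustration, the sketch below follows the combined-ratings arithmetic VA prescribes in 38 CFR 4.25, under which each rating applies only to the "whole person" efficiency remaining after the others, with the result rounded to the nearest 10. The function name and the step-by-step rounding are our simplification of VA's published lookup table, not language from this report:

```python
def combined_rating(ratings):
    """Illustrative sketch of VA's combined-ratings arithmetic
    (38 CFR 4.25): ratings are applied in descending order, each
    against the efficiency remaining after the previous ones, and
    the result is converted to the nearest multiple of 10 (values
    ending in 5 adjust upward). VA itself publishes a lookup table
    that this arithmetic approximates."""
    combined = 0
    for r in sorted(ratings, reverse=True):
        # Each new rating reduces only the remaining (100 - combined)
        # portion of whole-person efficiency.
        combined = round(combined + r * (100 - combined) / 100)
    # Convert to the nearest number divisible by 10, rounding 5 up.
    return (combined + 5) // 10 * 10
```

For example, ratings of 60 and 40 percent combine to 76, which rounds to a composite rating of 80 percent rather than summing to 100.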
For veterans claiming pension eligibility, the regional office also determines if the veteran served in a period of war, is permanently and totally disabled for reasons not service-connected, and meets the income thresholds for eligibility. If a veteran disagrees with the regional office’s decision, he or she can ask for a review of that decision or appeal to VA’s Board of Veterans Appeals (BVA). BVA makes the final decision on such appeals and can grant benefits, deny benefits, or remand (return) the case to the regional office for further development and reconsideration. After reconsidering a remanded decision, the regional office either grants the claim or returns it to BVA for a final VA decision. If the veteran disagrees with BVA’s decision, he or she may appeal to the U.S. Court of Appeals for Veterans Claims (CAVC). If either the veteran or VA disagrees with the CAVC’s decision, they may appeal to the U.S. Court of Appeals for the Federal Circuit. VBA continues to experience problems processing veterans’ disability compensation and pension claims. These include large backlogs of claims and lengthy processing times. As acknowledged by VBA, excessive claims inventories have resulted in long waits for veterans to receive decisions on their claims and appeals. As shown in table 1, VBA’s pending workload of rating-related claims has almost doubled from fiscal year 1997 to fiscal year 2001. During the same period, VBA’s production of rating-related claims has steadily declined from about 702,000 to 481,000. The greatest increase in inventory and decline in production occurred during fiscal year 2001. Several factors contributed to the significant increase in claims inventory in fiscal year 2001. VBA attributes much of the increase to the Veterans Claims Assistance Act of 2000 (VCAA). According to VBA, the most significant change resulting from the legislation is the requirement to fully develop claims even in the absence of evidence showing a current disability or a link to military service.
As a result of the VCAA, VBA undertook a review of about 98,000 veterans’ disability claims that were previously denied under the CAVC’s Morton decision. In addition, the VCAA has affected the processing of about 244,000 rating-related claims that were pending at the time the VCAA was enacted and all new compensation and pension claims received since the law’s enactment. These claims must be developed and evaluated under the expanded procedures required by the VCAA. VBA believes this will increase the time to process cases. Other contributing factors included the recent addition of diabetes as a presumptive service-connected disability for veterans who served in Vietnam; the need to train many new claims processing employees; and the implementation of new VBA processing software. VBA received about 56,500 diabetes claims through November 2001 and expects to receive an additional 76,000 claims during the remainder of fiscal year 2002. The influx of new claims processing staff during fiscal year 2001 has also temporarily hampered the productivity of experienced staff. According to officials at some of the regional offices we visited, experienced rating specialists had less time to spend on rating work because they were helping train and mentor new rating specialists. Although this may have reduced short-term production, it should enable VBA to increase production in the long term by enhancing the proficiency of new staff. Furthermore, regional office officials noted that the learning curve and implementation difficulties with VBA’s new automated rating preparation system (Rating Board Automation 2000) hampered their productivity. Over the last 3 years, the average time VBA takes to complete rating-related claims has increased from 166 to 181 days, which places it far from reaching its goal of 100 days by the end of fiscal year 2003 (see fig. 1). During the same period, the average age of pending claims increased from 144 to 182 days.
In fiscal year 2001, the average age of pending cases was actually greater than the average time to complete decisions. According to officials at some of the regional offices we visited, staff have recently been focusing on completing simpler and less time-consuming cases. Officials told us that focusing on completing simpler cases might result in increases in production and short-term improvements in timeliness. At the same time, it may also result in the office’s pending inventory getting even older. In addition to problems with timeliness of decisions, VBA acknowledges that the accuracy of regional office decisions needs to be improved. Inaccurate decisions can also lead to delays in resolving claims when veterans appeal to the BVA. Appeals to BVA can add many months to the time required to resolve claims. In fiscal year 2001, the average time to resolve an appeal was 595 days – almost 20 months. VBA has made progress in improving its accuracy; its accuracy rate for rating-related decisions increased from 59 percent in fiscal year 2000 to 78 percent in fiscal year 2001. Beginning in fiscal year 2002, VBA has revised its key accuracy measure to focus on whether regional office decisions to grant or deny claims were correct. This revision to VBA’s quality assurance program is consistent with a recommendation made by the 2001 VA Claims Processing Task Force. VBA has made some progress in improving its production and reducing its inventory but will be challenged to meet the production and inventory goals it has set for fiscal year 2002. Recognizing the need to address VBA’s long-standing claims processing timeliness problem and excessive inventory, the Secretary of Veterans Affairs has made improving claims processing performance in its regional offices one of VA’s top management priorities. 
Specifically, the Secretary’s end of fiscal year 2003 goals are to complete accurate decisions on rating-related compensation and pension claims in an average of 100 days and reduce VBA’s inventory of such claims to about 250,000. To achieve these goals, VBA is focusing on increasing the number of claims decisions its regional offices can complete. At the same time, VBA has implemented two initiatives to expedite claim decisions. In October 2001, VBA established the Tiger Team at its Cleveland Regional Office, a specialized unit including experienced rating specialists, to expedite the processing of claims for veterans aged 70 and older and clear from the inventory claims that have been pending for over a year. VBA also established nine Resource Centers to process claims from regional offices that are “ready to rate.” A claim is ready to rate after all the needed evidence is collected. To meet the Secretary’s inventory goal, VBA plans to complete about 839,000 rating-related claims decisions in fiscal year 2002. Of these claims, the regional offices are expected to complete about 792,000, while VBA’s Tiger Team and Resource Centers are expected to complete the balance of 47,000 claims. This level of production is greater than VBA has achieved in any of the last 5 fiscal years—VBA’s peak production was about 702,000 claims in fiscal year 1997. However, VBA has significantly more rating staff now than it did in any of the previous 5 fiscal years. VBA’s rating staff has increased by about 50 percent since fiscal year 1997 to 1,753. To reach VBA’s fiscal year 2002 production goal, rating specialists will need to complete on average about 2.5 cases per day – a level VBA achieved in fiscal year 1999. VBA expects this production level to result in an end of year inventory of about 316,000 rating-related claims, which VBA believes would put the agency on track to meet the Secretary’s inventory goal of 250,000 cases by the end of fiscal year 2003.
To meet its production goal, in December 2001, VBA allocated its fiscal year 2002 national production target to its regional offices based on each regional office’s capacity to produce rating-related claims given each office’s number of rating staff and their experience levels. For example, an office with 5 percent of the national production capacity received 5 percent of the national production target. In February 2002, VBA revised how it allocated the monthly production targets to its regional offices based on input from regional offices regarding their current staffing levels. In allocating the target, VBA considered each regional office’s fiscal year 2001 claims receipt levels, production capacity, and actual production in the first quarter of fiscal year 2002. To hold regional office managers accountable, VBA incorporated specific regional office production goals into regional office performance standards. For fiscal year 2002, regional office directors are expected to meet their annual production target or their monthly targets in 9 out of 12 months. Generally, the combined monthly targets for the regional offices increase as the year progresses and as the many new rating specialists hired in previous years gain experience and become fully proficient claims processors. The Tiger Team, primarily made up of Cleveland Regional Office staff, was established to supplement regional office capacity. It identifies claims of veterans aged 70 and over as well as those pending for 1 year or more and then requests these claims from the regional offices. The Tiger Team’s 17 rating specialists and 18 veterans service representatives are expected to perform whatever additional development work is needed on the claims they receive and to make rating decisions on these claims. To help expedite development work, VBA has obtained priority access for the Tiger Team to obtain evidence from VA and other federal agencies. 
For example, VA and the National Archives and Records Administration completed a Memorandum of Understanding in October 2001 to expedite Tiger Team requests for service records at the National Personnel Records Center (NPRC) in St. Louis, Missouri. Also, VBA established procedures and timeframes for expediting Tiger Team requests for medical evidence and examinations. Veterans Health Administration (VHA) medical facilities were, in general, given 3 days to comply with requests for medical records and 10 days to provide reports of medical examinations. As of mid-April 2002, the Tiger Team has completed about 7,800 claims requested from 42 regional offices. From December 2001 through March 2002 the team’s production exceeded its goal of 1,328 decisions per month. According to Tiger Team officials, its experienced rating specialists were averaging about 4 completed ratings per day. Officials added that in the short term, completing old claims might increase VBA’s average time to complete decisions. Meanwhile, the Resource Centers also supplement regional offices’ rating capacity by making decisions on claims that were awaiting decisions at the regional offices. VBA officials noted that the rating specialists at the Resource Centers tend to be less experienced; thus, they are expected to produce fewer ratings per day than the Tiger Team. From October 2001 through March 2002, the Resource Centers had completed about 14,000 ratings. Although VBA has made some progress in increasing production and reducing inventory, achieving its fiscal year 2002 production and inventory goals will be challenging. VBA expects to increase production in the second half of the fiscal year. During the first 6 months of fiscal year 2002, VBA produced about 368,000 decisions – 61,000 per month. To meet its goal of producing 839,000 rating decisions for the fiscal year, VBA must increase its production to about 78,000 decisions a month for the second half of the fiscal year. 
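The monthly pace implied by these figures can be checked directly from the report's rounded numbers (a back-of-the-envelope sketch, not an official VBA calculation):

```python
# Back-of-the-envelope check of the production pace described above,
# using the report's rounded figures.
annual_goal = 839_000        # rating decisions targeted for fiscal year 2002
first_half_actual = 368_000  # decisions completed in the first 6 months

monthly_so_far = first_half_actual / 6                  # ~61,300 per month
monthly_needed = (annual_goal - first_half_actual) / 6  # 78,500 per month
```

That is, monthly output would need to rise by roughly 28 percent over the first-half pace.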
Meanwhile, the rating-related inventory declined by 2 percent during the first half of fiscal year 2002. To reach VBA’s inventory goal of 316,000 claims by the end of fiscal year 2002, the inventory must decline by another 23 percent over the next 6 months. Officials at some of the regional offices we visited said they were having difficulty reaching their production targets. Some offices were “cherry picking” – completing easier cases in order to meet production goals. Meanwhile, older claims were not being worked. While the Tiger Team is designed to resolve some of these older claims, regional offices will eventually have to handle this workload. Another issue raised by officials at one regional office was inadequate numbers of staff to develop claims for the rating specialists. While VBA has defined capacity based on the number and experience of rating specialists, regional offices also need sufficient veterans service representatives to perform this development work. VBA will likely have difficulty meeting the Secretary’s fiscal year 2003 timeliness goal, even if it meets its production and inventory goals. VBA will have to cut its average claims processing time by more than half – from an average of 224 days in the first half of fiscal year 2002 – to meet the 100-day goal. However, improving timeliness depends on more than just increasing production and reducing inventory. VBA also needs to address long-standing problems affecting timeliness. VBA needs to continue to make progress in reducing delays in obtaining evidence; ensuring that it will have enough experienced staff in the long term; and implementing information systems to help improve claims processing productivity. Furthermore, external factors beyond VBA’s control, such as decisions made by the CAVC and the filing behavior of veterans, will continue to affect VBA’s workload and its ability to make sustained improvements in performance. 
Much of the delay in completing claims is not related to the time a rating specialist spends on the claim. Rather, delays come in the development process – time waiting for evidence. The Tiger Team has been able to achieve high production levels, in part, through priority access to service and VHA medical records and expedited VHA medical examinations. However, not every regional office can benefit from such expedited access. VBA needs to continue its progress in reducing delays in general. VBA has initiatives to improve its access to evidence needed to decide claims. For example, VBA has established an office at the NPRC to expedite regional office requests for service records. Also, VBA has initiatives to obtain better and more timely medical information from VA medical facilities. VBA has access to VHA’s medical records database. Also, VBA and VHA have established a Joint Medical Examination Improvement Office to help identify ways to improve the quality and timeliness of VHA’s compensation and pension medical examinations. While these initiatives seem promising, the extent to which they will improve timeliness is unclear. VBA needs to ensure that it can maintain the necessary expertise to process claims as experienced claims decision makers retire over the next several years. To accomplish this, VBA needs to ensure that its new claims processing staff are receiving the necessary training and on-the-job experience to become proficient and that it retains these employees. VA plans to complete a workforce plan in 2002, which should address VBA’s succession planning needs. Also, VBA needs to continue its progress in implementing its training and performance support system for claims processing staff. Furthermore, VBA needs to overcome delays in implementing its information system improvements. We recently noted that, after 16 years, VBA is still experiencing delays in implementing its replacement benefit delivery system. 
Also, officials at some of the regional offices we have visited noted that the initial implementation of rating board automation (RBA) 2000 – the application designed to assist rating specialists in rating benefit claims – has reduced their rating production. These challenges affect not only VBA’s ability to meet its fiscal year 2003 goals, but also its ability to sustain the progress it makes in improving claims processing performance. To sustain its progress, VBA needs to be able to maintain increased production levels, so it can deal with future events that could significantly increase its workload. Recent history has shown how actions by VA, the Congress, and the CAVC can have significant impacts on VBA’s workload. For example, VA’s decision to provide compensation to Vietnam veterans with diabetes is having a significant impact on VBA’s workload. By the end of fiscal year 2003, VBA expects to have received 197,500 diabetes claims. VBA has cited the influx of diabetes claims as a factor in its fiscal year 2001 inventory increase. Also, the CAVC’s Morton decision, and the Congress’ reaction in passing the VCAA, show the impact of procedural changes on VBA’s workload. In fiscal year 2000, VBA reduced its rating-related inventory from about 250,000 to about 228,000 in part because regional offices denied more than 98,000 claims as not well-grounded under Morton. However, the overruling of Morton by the VCAA was a major factor in the increase in inventory for fiscal year 2001 and is expected to have a continuing impact on timeliness because of lengthened timeframes for obtaining evidence. VBA is working hard to meet the Administration’s commitment to improve its service to veterans by providing more timely decisions on their claims. VBA is better staffed to meet its claims workload than it has been in recent years. This, in turn, should translate into a more productive VBA workforce in the future. However, increasing staffing is not enough. 
VBA needs to address many of the same challenges to improving timeliness we reported in May 2000 – such as improving waiting times for evidence. VBA has a number of initiatives to improve its process, including the implementation of the Claims Processing Task Force’s recommendations. VBA needs to continue its progress, while also addressing its future succession planning and information technology needs. By addressing these challenges, VBA can better ensure that it will be able to sustain the performance improvements it makes in fiscal years 2002 and 2003. Mr. Chairman, this concludes my prepared remarks. I would be pleased to respond to any questions you or Members of the Subcommittee may have. For further contacts regarding this testimony, please call Cynthia A. Bascetta at (202) 512-7101. Others who made key contributions to this testimony are Irene Chu, Steve Morris, Martin Scire, and Greg Whitney.

Veterans’ Benefits: Improvements Needed in Processing Disability Claims. GAO/HRD-89-24. Washington, D.C.: June 22, 1989.

Veterans’ Compensation: Medical Reports Adequate for Initial Disability Ratings but Need to Be More Timely. GAO/HRD-90-115. Washington, D.C.: May 30, 1990.

Veterans’ Benefits: Status of Claims Processing Initiative in VA’s New York Regional Office. GAO/HEHS-94-183BR. Washington, D.C.: June 17, 1994.

Veterans’ Benefits: Lack of Timeliness, Poor Communication Cause Customer Dissatisfaction. GAO/HEHS-94-179. Washington, D.C.: September 20, 1994.

Veterans’ Benefits: Better Assessments Needed to Guide Claims Processing Improvements. GAO/HEHS-95-25. Washington, D.C.: January 13, 1995.

Veterans’ Benefits: Effective Interaction Needed Within VA to Address Appeals Backlog. GAO/HEHS-95-190. Washington, D.C.: September 27, 1995.

Veterans’ Benefits: Improvements Made to Persian Gulf Claims Processing. GAO/T-HEHS-98-89. Washington, D.C.: February 5, 1998.

Veterans’ Benefits Claims: Further Improvements Needed in Claims-Processing Accuracy. GAO/HEHS-99-35. Washington, D.C.: March 1, 1999.

Veterans Benefits Administration: Progress Encouraging, but Challenges Still Remain. GAO/T-HEHS-99-77. Washington, D.C.: March 25, 1999.

Veterans’ Benefits: Promising Claims-Processing Practices Need to Be Evaluated. GAO/HEHS-00-65. Washington, D.C.: April 7, 2000.

Veterans Benefits Administration: Problems and Challenges Facing Disability Claims Processing. GAO/T-HEHS/AIMD-00-146. Washington, D.C.: May 18, 2000.

Major Management Challenges and Program Risks: Department of Veterans Affairs. GAO-01-255. Washington, D.C.: January 2001.
The Department of Veterans Affairs (VA) will provide $25 billion in compensation and pension benefits in fiscal year 2002 to more than three million veterans, dependents, and survivors. For years, the compensation and pension claims process has been subject to long waits for decisions and large claims backlogs. VA's goal for fiscal year 2003 is to complete accurate decisions on rating-related claims in an average of 100 days. To achieve this, the Veterans Benefits Administration (VBA) is focusing on increasing production of rating decisions and reducing the inventory of claims to about 250,000. As of the end of March 2002, VBA was completing claims in an average of 224 days and had an inventory of about 412,000 claims. VBA is trying to significantly increase regional offices' rating decision production to reduce the inventory and, in turn, reduce the time required to complete decisions. VBA expects to increase production by hiring more staff and increasing the proficiency of new staff. Although VBA has recently increased its production and reduced its inventory, meeting its production, inventory reduction, and timeliness goals will be challenging.
The Internet is a worldwide network of networks composed of servers, routers, and backbone networks. Network addresses are used to help send information from one computer to another over the Internet by routing the information to its final destination. The protocol that enables the administration of these addresses is the Internet protocol (IP). The most widely deployed version of IP is version 4 (IPv4). The two basic functions of IP are (1) addressing and (2) fragmentation of data, so that information can move across networks. An IP address consists of a fixed sequence of numbers. IPv4 uses a 32-bit address format, which provides approximately 4.3 billion unique IP addresses. Figure 1 provides a conceptual illustration of an IPv4 address. By providing a numerical description of the location of networked computers, addresses distinguish one computer from another on the Internet. In some ways, an IP address is like a physical street address. For example, in the physical world, if a letter is going to be sent from one location to another, the contents of the letter must be placed in an envelope that contains addresses for the sender and receiver. Similarly, if data is going to be transmitted across the Internet from a source to a destination, IP addresses must be placed in an IP header. Figure 2 provides a simplified illustration of this concept. In addition to containing the addresses of sender and receiver, the header also contains a series of fields that provide information about what is being transmitted. The fields in the header are important to the protocol’s second main function: fragmentation of data. IP fragments information by breaking it into manageable parts. Each part has its own header that contains the sender’s address, destination address, and other information that guides it through the Internet to its intended destination. When the various packets arrive at the final destination, they are put back together into their original form. 
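The 32-bit format described above can be illustrated with a short sketch using Python's standard ipaddress module; the address shown is from a block reserved for documentation examples:

```python
import ipaddress

# An IPv4 address is a single 32-bit number, conventionally written as
# four decimal octets separated by dots.
addr = ipaddress.IPv4Address("192.0.2.1")

print(int(addr))    # the same address expressed as one 32-bit integer
print(2 ** 32)      # 4294967296 -- roughly 4.3 billion possible addresses
```

Each dotted octet simply names 8 of the address's 32 bits, which is why the total IPv4 address space tops out near 4.3 billion.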
Several key organizations play a role in coordinating protocol development and Internet management issues, including the following:

The Internet Corporation for Assigned Names and Numbers (ICANN) is a nonprofit corporation responsible for Internet address space allocation and management of the Internet domain name system.

Regional Internet Registries allocate Internet address blocks from ICANN in various parts of the world and engage in joint projects, liaison activities, and policy coordination. The registries include the African Network Information Center, Asia Pacific Network Information Centre, American Registry for Internet Numbers, Latin American and Caribbean Internet Addresses Registry, and Réseaux IP Européens Network Coordination Centre.

Competing companies known as registrars are able to assign domain names, the mnemonic devices used to represent the numerical IP addresses on the Internet (for example, www.google.com). More than 300 registrars have been accredited by ICANN and are authorized to register domain names ending in .biz, .com, .coop, .info, .name, .net, .org, or .pro. A complete listing is maintained on the InterNIC Web site.

The Internet Society is a large, international, professional organization that provides leadership in addressing issues that may affect the future of the Internet and assists the groups responsible for Internet infrastructure standards. The Internet Society also provides legal, financial, and administrative support to the Internet Engineering Task Force (IETF).

IETF is the principal body engaged in the development of Internet standards. It is composed of working groups that are organized by topic into several areas (e.g., routing, transport, security, etc.).

Limited IPv4 address space prompted organizations that need large amounts of IP addresses to implement technical solutions to compensate. For example, network administrators began to use one unique IP address to represent a large number of users. 
By employing network address translation, an enterprise such as a federal agency or a company could have large numbers of internal IP addresses, but still use a single unique address that can be reached from the Internet. In other words, all computers behind the network address translation router appear to have the same address to the outside world. Figure 3 depicts this type of network configuration. While network address translation has enabled organizations to compensate for the limited number of globally unique IP addresses available with IPv4, the resulting network structure has eliminated the original end-to-end communications model of the Internet. Network address translation complicates the delivery of real-time communications over the Internet. In 1994, IETF began reviewing proposals for a successor to IPv4 that would increase IP address space and simplify routing. IETF established a working group to be specifically responsible for developing the specifications for and standardization of IPv6. Over the past 10 years, IPv6 has evolved into a mature standard. A complete list of IPv6 documents can be found at the IETF Web site. Interest in IPv6 is gaining momentum around the world, particularly in parts of the world that have limited IPv4 address space to meet their industry and consumer communications needs. Regions that have limited IPv4 address space such as Asia and Europe have undertaken efforts to develop, test, and implement IPv6. As a region, Asia controls only about 9 percent of the allocated IPv4 addresses, and yet has more than half of the world’s population. As a result, the region is investing in IPv6 development, testing, and implementation. For example, the Japanese government’s e-Japan Priority Policy Program mandated the incorporation of IPv6 and set a deadline of 2005 to upgrade existing systems in both the public and private sector. 
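The network address translation mechanism described above can be sketched as a small table simulation. All addresses and port numbers below are illustrative documentation examples, not drawn from the report:

```python
# Sketch of a NAT router's translation table: many internal hosts share
# one globally unique address, and the router rewrites source ports so
# replies can be matched back to the right internal host.
PUBLIC_IP = "203.0.113.5"        # the single address visible to the Internet

nat_table = {}                   # (internal_ip, internal_port) -> public port
next_public_port = 40000

def translate_outbound(internal_ip, internal_port):
    """Rewrite an outgoing packet's source to the shared public address."""
    global next_public_port
    key = (internal_ip, internal_port)
    if key not in nat_table:
        nat_table[key] = next_public_port
        next_public_port += 1
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port):
    """Match a reply arriving at the public address to the internal host."""
    for key, port in nat_table.items():
        if port == public_port:
            return key
    return None

# Two internal hosts look identical to the outside world:
print(translate_outbound("10.0.0.2", 5000))   # ('203.0.113.5', 40000)
print(translate_outbound("10.0.0.3", 5000))   # ('203.0.113.5', 40001)
print(translate_inbound(40001))               # ('10.0.0.3', 5000)
```

Because every host behind the router presents the same public address, true end-to-end addressing is lost, which is the limitation noted above for real-time communications.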
The government has helped to support the establishment of the IPv6 Promotion Council to address development and deployment issues and to provide tax incentives to promote deployment. In addition, major Japanese corporations in the communications and consumer electronics sectors are also developing IPv6 networks and products. The Chinese government’s interest in IPv6 resulted in an effort by the China Education and Research Network Information Center to establish an IPv6 network linking 25 universities in 20 cities across China. In addition, China has reportedly set aside approximately $170 million to develop an IPv6-capable infrastructure. Taiwan has also started to work on developing IPv6 products and services. For example, the Taiwanese government announced that it would begin developing an IPv6-capable national information infrastructure project. The planned initiative is intended to deploy an infrastructure capable of supporting 6 million users by 2007. In September 2000, public and private entities in India established the Indian IPv6 Forum to help coordinate the country’s efforts to develop and implement IPv6 capabilities and services. The forum hosted an IPv6 summit in 2005. The European Commission initiated a task force in April 2001 to design an IPv6 Roadmap. The Roadmap serves as an update and plan of action for the development and future perspectives of IPv6. It also serves as a way to coordinate European efforts for developing, testing, and deploying IPv6. Europe currently has a task force that has the dual mandate of initiating country/regional IPv6 task forces across European states and seeking global cooperation around the world. Europe’s task force and the Japanese IPv6 Promotion Council forged an alliance to foster worldwide deployment. Latin America also has begun developing projects involving IPv6. 
Some of these projects include an IPv6 interconnection among all the 6Bone sites of Latin America and a Native IPv6 Network via Internet2. Also in Mexico, the National Autonomous University of Mexico has been conducting research. In 1999, the university acquired a block of address space to provide IPv6-enabled service to Mexico and Latin America. Established in 2001, the North American IPv6 Task Force promotes the use of IPv6 within industry and government and provides technical and business expertise for the deployment of IPv6 networks. The task force is composed of individual members from the United States and Canada who develop white papers and deployment guides, sponsor test and interoperability events, and collaborate with other task forces from around the world. Currently, the task force, the University of New Hampshire, and DOD are collaborating on a national IPv6 demonstration/test network. In 2003, the President’s National Strategy to Secure Cyberspace identified the development of secure and robust Internet mechanisms as important goals because of the nation’s growing dependence on cyberspace. The strategy stated that the United States must understand the merits of, and the obstacles to, moving to IPv6 and, based on that understanding, identify a process for moving to an IPv6-based infrastructure. To better understand these challenges, the Department of Commerce formed a task force to examine the deployment of IPv6 in the United States. As co-chairs of that task force, the Commerce Department’s National Institute of Standards and Technology (NIST) and the National Telecommunications and Information Administration invited interested parties to comment on a variety of IPv6-related issues, including: (1) the benefits and possible uses; (2) current domestic and international conditions regarding the deployment; (3) economic, technical, and other barriers to the deployment; and (4) the appropriate role for the U.S. government in the deployment. 
As part of the task force’s work, the Department of Commerce issued a draft report in July 2004, Technical and Economic Assessment of Internet Protocol Version 6, that was based on the response to their request for comment. Many organizations and individuals—such as private sector software, hardware, and communications firms, and technical experts—responded, providing their views on the benefits and challenges of adopting the new protocol. The key characteristics of IPv6 include a dramatic increase in IP address space, a simplified IP header for flexibility and functionality, improved routing of data, improved quality of service, and integrated Internet protocol security. These key characteristics of IPv6 offer various enhancements relative to IPv4 and are expected to increase Internet services and enable advanced Internet communications that could foster new software applications for federal agencies. IPv6 dramatically increases the amount of IP address space available from the approximately 4.3 billion addresses in IPv4 to approximately 3.4 × 10^38 addresses. IPv6 addresses are characterized by a network prefix that describes the location of an IPv6-capable device in a network and an interface ID that provides a unique identification number (ID) for the device. The network prefix will change based on the user’s location in a network, while the interface ID can remain static. The static interface ID allows a device with a unique address to maintain a consistent identity despite its location in a network. In IPv4, the limited address space has resulted in a plethora of network address translation devices, which severely limits the possibilities for end-to-end communications. In contrast, the massive address space available in IPv6 will allow virtually any device to be assigned a globally reachable address. 
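The size of that jump is easy to check: IPv6 addresses are 128 bits long versus 32 bits for IPv4, so the space grows from 2^32 to 2^128 addresses. A quick sketch:

```python
# The address space grows from 2**32 (IPv4) to 2**128 (IPv6).
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(ipv4_space)            # 4294967296, about 4.3 billion
print(f"{ipv6_space:.1e}")   # 3.4e+38, the figure quoted for IPv6
```

It is this expansion that makes a globally unique, reachable address for virtually every device practical.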
This change fosters greater end-to-end communication abilities between devices with unique IP addresses and can better support the delivery of data-rich content such as voice and video. Simplifying the IPv6 header promotes flexibility and functionality for two reasons. First, the header size is fixed in IPv6. In the previous version, header sizes could vary, which could slow routing of information. Second, the structure of the header itself has been simplified. While the IPv6 addresses are significantly larger than in IPv4, the header containing the address and other information about the data being transmitted has been simplified. The 14 header fields from IPv4 have been simplified to 8 fields in IPv6. Figure 5 illustrates the differences between the two IP headers, including the various data fields that were eliminated, renamed, or reorganized. Another benefit of the simplified header is its ability to accommodate new features, or extensions. For example, the next header field provides instructions to the routers transmitting the data across the Internet about how to manage the information. The improved routing, or movement of information from a source to a destination, is more efficient in IPv6 because it incorporates a hierarchical addressing structure and has a simplified header. The large amount of address space allows organizations with large numbers of employees to obtain blocks of contiguous address space. Contiguous address space allows organizations to aggregate addresses under one prefix for identification on the Internet. This structured approach to addressing reduces the amount of information Internet routers must maintain and store and promotes faster routing of data. In addition, as shown in figure 5, IPv6 has a simplified header because of the elimination of six fields from the IPv4 header. The simplified header also contributes to faster routing. 
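The prefix aggregation described above can be demonstrated with Python's standard ipaddress module. The 2001:db8::/32 prefix used here is reserved for documentation examples:

```python
import ipaddress

# Four contiguous /34 blocks carved out of one /32 can be summarized
# back under the single shorter prefix, so Internet routers carry one
# route instead of four.
subnets = [
    ipaddress.ip_network("2001:db8:0::/34"),
    ipaddress.ip_network("2001:db8:4000::/34"),
    ipaddress.ip_network("2001:db8:8000::/34"),
    ipaddress.ip_network("2001:db8:c000::/34"),
]
aggregated = list(ipaddress.collapse_addresses(subnets))
print(aggregated)   # [IPv6Network('2001:db8::/32')] -- one route, not four
```

Because an organization's blocks are contiguous, routers outside it need only the one summarized prefix, which is the routing-table reduction the report describes.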
IPv6 improves mobility features by allowing each device (wired or wireless) to have a unique IP address independent of its current point of attachment to the Internet. As previously discussed, the IPv6 address allows computers and other devices to have a static interface ID. The interface ID does not change as the device transitions among various networks. This enables mobile IPv6 users to move from network to network while keeping the same unique IP address. The ability to maintain a constant IP address while switching networks is cited as a key factor for the success of a number of evolving capabilities, such as evolving telephone technologies, personal digital assistants, laptop computers, and automobiles. IPv6 enhancements can ease difficult and time-consuming aspects of network administration tasks in today’s IPv4 networks. For example, two new configuration enhancements of IPv6 include automatic address configuration and neighbor discovery. These enhancements may reduce network administration burdens by providing the ability to more easily deploy and manage networks. IPv6 supports two types of automatic configuration: stateful and stateless. Stateful configuration uses the dynamic host configuration protocol. This stateful configuration requires another computer, such as a server, to reconfigure or assign numbers to network devices for routing of information, which is similar to how IPv4 handles renumbering. Stateless automatic configuration is a new feature in IPv6 and does not require a separate dynamic host configuration protocol server as in IPv4. Stateless configuration occurs automatically for routers and hosts. Another configuration feature—neighbor discovery—enables hosts and routers to determine the address of a neighbor or an adjacent computer or router. Together, automatic configuration and neighbor discovery help support a plug-and-play Internet deployment for many devices, such as cell phones, wireless devices, and home appliances. 
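One scheme the IPv6 standards define for forming that static interface ID, modified EUI-64, derives it from the device's 48-bit hardware (MAC) address. A sketch, using an arbitrary example MAC rather than any value from the report:

```python
def eui64_interface_id(mac: str) -> str:
    """Build a modified EUI-64 interface ID from a 48-bit MAC address:
    insert ff:fe in the middle and flip the universal/local bit."""
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                        # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Render as four 16-bit hex groups, IPv6 style (leading zeros dropped).
    return ":".join(f"{eui64[i] << 8 | eui64[i + 1]:x}" for i in range(0, 8, 2))

# With stateless autoconfiguration, a host can form its own link-local
# address (fe80::/64 prefix) with no DHCP server involved:
print("fe80::" + eui64_interface_id("00:1a:2b:3c:4d:5e"))
# fe80::21a:2bff:fe3c:4d5e
```

In practice many stacks now prefer randomized interface IDs for privacy, but the mechanics are the same: the prefix comes from the network, the interface ID from the host itself.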
These enhancements help reduce the administrative burdens of network administrators by allowing the IPv6- enabled devices to automatically assign themselves IP addresses and find compatible devices with which to communicate. IPv6’s enhanced quality of service feature can help prioritize the delivery of information. The flow label is a new field in the IPv6 header. This field can contain a label identifying or prioritizing a certain packet flow, such as a video stream or a videoconference, and allows devices on the same path to read the flow label and take appropriate action based on the label. For example, IP audio and video services can be enhanced by the data in the flow label because it ensures that all packets are sent to the appropriate destination without significant delay or disruption. IP Security—a means of authenticating the sender and encrypting the transmitted data—is better integrated into IPv6 than it was in IPv4. This improved integration, which helps make IP Security easier to use, can help support broader data protection efforts. IP Security consists of two header extensions that can be used together or separately to improve authentication and confidentiality of data being sent via the Internet. The authentication extension header provides the receiver with greater assurance of who sent the data. The encapsulating security header provides confidentiality to messages using encrypted security payload extension headers. IPv6’s increased address space, functionality, flexibility, and security help to support more advanced communications and software applications than are thought to be possible with the current version of IP. For example, the ability to assign an IP address to a wide range of devices beyond computers creates many new possibilities for direct communication. 
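The flow label mentioned above occupies 20 bits of the first 32-bit word of the fixed 40-byte IPv6 header, alongside the version and traffic class fields. A sketch of packing and unpacking that word with Python's struct module (the label value is an arbitrary illustration):

```python
import struct

# The first 32 bits of an IPv6 header hold: version (4 bits),
# traffic class (8 bits), and flow label (20 bits).
def parse_first_word(header: bytes):
    (word,) = struct.unpack("!I", header[:4])
    version = word >> 28
    traffic_class = (word >> 20) & 0xFF
    flow_label = word & 0xFFFFF
    return version, traffic_class, flow_label

# Build an illustrative first word: version 6, traffic class 0,
# flow label 0x12345 marking (say) one video stream.
sample = struct.pack("!I", (6 << 28) | (0 << 20) | 0x12345)
print(parse_first_word(sample))   # (6, 0, 74565)
```

Routers along the path can key on that 20-bit label to give every packet of a flow consistent treatment, which is the quality-of-service behavior described above.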
While applications that fully exploit IPv6 are still in development, industry experts have identified various federal functions that might benefit from IPv6-enabled applications:

Border security: could deploy wireless sensors with IPv6 to help provide situational awareness about movements on the nation’s borders.

First responders: could exploit the hierarchical addressing of IPv6 to promote interoperability and rapid network configuration in responding to emergencies.

Public health and safety: could exploit IPv6 end-to-end communications to deliver secure telemedicine applications and interactive diagnoses.

Information sharing: could benefit from various features of IPv6, including securing data in end-to-end communications, quality of service, and the extensibility of the header to accommodate new functions.

Key planning considerations for federal agencies include recognizing that an IPv6 transition is already under way because IPv6-capable software and equipment exist in agency networks. Other key considerations for federal agencies to address in an IPv6 transition include significant IT planning efforts and immediate actions to ensure the security of agency information and networks. Important planning considerations include developing inventories and assessing risks, creating business cases for an IPv6 transition, establishing policies and enforcement mechanisms, and identifying timelines and methods for the transition. Furthermore, specific security risks could result from not managing IPv6 software and equipment in federal agency networks. The transition to IPv6 is under way for many federal agencies because their networks already contain IPv6-capable software and equipment; for example, most major operating systems currently support IPv6, including Microsoft Windows, Apple OS X, Cisco IOS, mainframe software, and UNIX variants including Sun Solaris and Linux. 
In addition, many routers, printers, and other devices are now capable of being configured for IPv6 traffic. The transition to IPv6 is different from a software upgrade because the protocol’s capability is being integrated into the software and hardware. As a result, agencies do not have to make a concerted effort to acquire it because it will be built into agencies’ core communications infrastructure. However, as IPv6-capable software and hardware accumulates in agency networks, it can introduce risks that may not be immediately obvious to the network administrators or program officials. For example, agency employees might begin using certain IPv6 features that are not addressed in agency security programs and could therefore inadvertently place agency information at risk of disclosure. Developing an IPv6 inventory and risk assessment is an important action for agencies to consider in addressing IPv6 decision making. An inventory of equipment (software and hardware) provides management with an understanding of the scope of an IPv6 transition occurring at the agency and assists in focusing agency risk assessments. Risk assessments are essential steps in determining what controls are required to protect a network and what level of resources should be expended on controls. Moreover, risk assessments contribute to the development of effective security controls for information systems and much of the information needed for the agency’s system security plans. These assessments are even more important when transitioning to a new technology such as IPv6. Knowing what risks there are and how to mitigate them appropriately will lessen problems in the future. Creating a business case for transition to IPv6 is another important consideration for agency management officials to address. A business case usually identifies the organizational need for the system and provides a clear statement of the high-level system goals. 
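As one tiny building block of the inventory work described above, an administrator can check whether a given host's own stack is IPv6-capable. This sketch uses Python's standard socket module and reflects only local capability, not a full network inventory:

```python
import socket

def host_supports_ipv6() -> bool:
    """Report whether this host can open an IPv6 socket. has_ipv6 shows
    compile-time support; actually creating an AF_INET6 socket confirms
    the running system accepts it."""
    if not socket.has_ipv6:
        return False
    try:
        with socket.socket(socket.AF_INET6, socket.SOCK_STREAM):
            return True
    except OSError:
        return False

print(host_supports_ipv6())
```

A real inventory would extend this kind of check across routers, printers, and applications, which is why the report pairs the inventory with a risk assessment.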
Best practices for IT investment recommend that, prior to making any significant project investment, information about the benefits and costs of the investment should be analyzed and assessed in detail. One key aspect to consider while drafting the business case for IPv6 is to understand how many devices an agency wants to connect to the Internet. This will help in determining how much IPv6 address space is needed for the agency. Within the business case, it is crucial to include how the new technology will integrate with the agency’s existing enterprise architecture. Developing and establishing IPv6 transition policies and enforcement mechanisms are important considerations for ensuring an efficient and effective transition. For example, IPv6 policies can address agency management of the IPv6 transition, roles and responsibilities of key officials and program managers, guidance on planning and investment, authorization for using IPv6 features, and configuration management requirements and monitoring efforts. Further, because of the scope, complexities, and costs involved in an IPv6 transition, effective enforcement of agency IPv6 policies is an important consideration for management officials. Enforcement considerations could include collaboration among the chief information officer and senior contracting officials to ensure IPv6 issues are addressed in information technology acquisitions in accordance with agency policy; role definitions for the chief information officer, inspector general, and program officials, to review current IPv6 capabilities in agency systems and what, if any, future requirements might be needed; and policies for configuration management methods, to ensure that agency information and systems are not compromised because of improper management of information technology and systems. Without appropriate policies and effective enforcement mechanisms, federal agencies could incur significant cost and security risks. 
As we have previously reported, planning for system migration and security is often problematic in federal agencies. IPv6 planning efforts and security measures can be managed using the federal government’s existing framework, which includes enterprise architecture, investment management processes, and security policies, plans, and risk assessments. The potential scope of an IPv6 transition makes development of robust policies and enforcement mechanisms essential. Considering the costs of IPv6 and estimating the impact on agency IT investments can be challenging. Cost-benefit analyses and return-on-investment calculations are the normal methods used to justify investments. Initially, IPv6 may appear to have a minimal cost impact on an organization because IPv6 functionality is being built into operating systems and routers. However, the costs to upgrade existing software applications so they can benefit from IPv6 functionality could be significant. Additional costs to consider include human capital costs associated with training, operational costs of multiple IP environments, existing IT infrastructure, and timing of an IPv6 transition. These costs can be managed through a gradual, rather than an accelerated, transition process. For example, long-range planning can help to mitigate costs and position an agency to benefit from IPv6’s characteristics and applications. Early adopters of IPv6 have determined that transitioning can be coordinated with an organization’s ongoing technical refreshments or upgrades. Accordingly, agencies can ensure that IPv6 compatibility is integrated into their IT contracts and acquisition process. Officials from OMB’s Office of E-Government and Information Technology stated that they recognize the challenges associated with determining cost and are taking action. 
For example, OMB required federal agencies to submit the following items by January 31, 2005: updated enterprise architecture documentation and a revised Information Resource Management strategic plan to illustrate how IPv6 is being incorporated into the agency’s plans, and a joint memorandum from the agency’s chief information officer and chief procurement official describing how the agency will address the acquisition of technology with IPv6 as part of the life cycle of existing investments. During the year 2000 (Y2K) technology challenge, the federal government amended the Federal Acquisition Regulation (FAR) and mandated that all contracts for IT include a clause requiring the delivered systems or services to be ready for the Y2K date change. This helped prevent the federal government from procuring systems and services that might have been obsolete or that required costly upgrades. Similarly, proactive integration of IPv6 requirements into federal acquisition requirements can reduce the costs and complexity of the IPv6 transition for federal agencies and ensure that federal applications are able to operate in an IPv6 environment without costly upgrades. Identifying timelines and the various methods available to agencies for transitioning to IPv6 are important management considerations. The timeline can help keep transition efforts on schedule and can provide for status updates to upper management. Having a timeline and transition management strategy in place early is important to mitigating risks and ensuring a successful transition to IPv6. Such timelines and process management can help a federal agency determine when to authorize its various component organizations to allow IPv6 traffic and features. In a dual stack network, hosts and routers implement both IPv4 and IPv6. Figure 6 depicts how dual stack networks can support both IPv4 and IPv6 services and applications during the transition period. 
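On a dual stack host, a single IPv6 listening socket can also serve IPv4 clients, which appear to the application as IPv4-mapped IPv6 addresses (::ffff:a.b.c.d). The sketch below, using Python's standard ipaddress module with illustrative addresses, shows how traffic seen during the transition period can be classified:

```python
import ipaddress

def classify_peer(addr):
    """Classify a peer address as seen by a dual stack IPv6 socket."""
    ip = ipaddress.ip_address(addr)
    if ip.version == 4:
        return "native IPv4"
    if ip.ipv4_mapped is not None:
        # An IPv4 client reaching an IPv6 socket on a dual stack host.
        return f"IPv4 client via dual stack ({ip.ipv4_mapped})"
    return "native IPv6"
```

For instance, classify_peer("::ffff:192.0.2.7") identifies an IPv4 client arriving through the dual stack, while classify_peer("2001:db8::1") is native IPv6.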
Currently, dual stack networks are the preferred mechanism for transitioning to IPv6. Tunneling allows separate IPv6 networks to communicate via an IPv4 network. For example, for one type of tunneling method, IPv6 packets are encapsulated by a border router, sent across an IPv4 network, and decoded by a border router on the receiving IPv6 network. Figure 7 depicts the tunneling process of IPv6 data inside an IPv4 network. Translation allows networks using only IPv4 and networks using only IPv6 to communicate with each other by translating IPv6 packets to IPv4 packets. The use of a translator allows new systems to be deployed as IPv6 only, while older systems remain IPv4 only. While this method may result in bottlenecks while packets are being translated, it can provide a high level of interoperability. These transition methods represent a few of the common approaches for ensuring interoperability between IPv6 and IPv4 communications. They can be used alone or in concert to enable communication among IPv4 and IPv6 networks. However, while such techniques mitigate interoperability challenges, in some instances, they may result in increased security risks if not analyzed and managed. As IPv6-capable software and devices accumulate in agency networks, they could be abused by attackers if not managed properly. For example, IPv6 is included in most computer operating systems and, if not enabled by default, is easy for administrators to enable either intentionally or as an unintentional byproduct of running a program. We tested two IPv6 features—automatic configuration and tunneling—and found that, if not properly managed, they could present serious risks to federal agencies. Automatic configuration can facilitate attacks because a rogue or unauthorized router may reconfigure neighboring devices by assigning them new addresses and routes. 
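The encapsulation described above can be made concrete. In the common 6in4 method, the border router prepends an IPv4 header whose protocol field is 41, signaling that the payload is a complete IPv6 packet; a filter that inspects that field can spot the tunnel. A simplified sketch (header fields are reduced to the relevant ones; real routers also compute checksums and handle fragmentation):

```python
import struct

IPPROTO_IPV6 = 41  # "IPv6 encapsulated in IPv4" protocol number

def encapsulate_6in4(ipv6_packet, src_v4, dst_v4):
    """Prepend a minimal IPv4 header (no options, checksum omitted)
    carrying protocol 41, as a 6in4 border router would."""
    version_ihl = (4 << 4) | 5              # IPv4, 20-byte header
    total_len = 20 + len(ipv6_packet)
    header = struct.pack("!BBHHHBBH4s4s",
                         version_ihl, 0, total_len,
                         0, 0,                 # identification, flags/fragment
                         64, IPPROTO_IPV6, 0,  # TTL, protocol, checksum
                         src_v4, dst_v4)
    return header + ipv6_packet

def is_tunneled_ipv6(ipv4_packet):
    """What an IPv6-aware filter looks for: protocol field == 41."""
    return len(ipv4_packet) >= 20 and ipv4_packet[9] == IPPROTO_IPV6
```

An IPv4-only intrusion detection system that never checks for protocol 41 passes such packets through without examining the IPv6 payload, which is exactly the risk discussed below.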
Once IPv6 is enabled, almost all operating systems will automatically configure IPv6 addresses, and most will automatically configure additional IPv6 addresses (including global ones) and routes provided by IPv6 routers. For example, with IPv6 enabled, most systems we tested would automatically accept IPv6 router advertisements. This results in hosts automatically adding IPv6 addresses and routes. This can be mitigated by the signing of router renumbering updates with IP Security. We tested the security issues surrounding the automatic configuration and found that, if a computer on the internal network had turned IPv6 on, that computer could use IPv6 services on other systems using IPv6 locally. This activity would not be seen by a typical IPv4 network intrusion detection system, because it would only be looking for anomalous or inappropriate IPv4 behavior and would not detect the IPv6 activity. As previously discussed, tunneling is a transition mechanism that allows IPv6 packets to be sent between computers via IPv4 traffic. When IPv6 packets are tunneled through IPv4, they are invisible to typical network intrusion detection systems and firewalls that are configured for IPv4 traffic but not for IPv6 traffic. As a result, intrusion detection systems and firewalls configured for IPv4 may not identify or prevent tunneled traffic. Once tunnels are established, traffic can penetrate the network undetected. This can allow attackers to access agency information and resources that are protected only by IPv4 filters and tools. Even worse, if a computer on an internal network acted as an IPv6 router and was able to tunnel IPv6 to the IPv4 Internet, other nearby machines could be automatically configured with global IP addresses. As a result, internal agency computers—never intended to directly provide services to other computers on the Internet—are suddenly globally reachable and may lack the requisite security for Internet-accessible hosts. 
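Automatic configuration works because a host can derive an interface identifier from its own hardware (MAC) address and append it to whatever prefix a router advertisement supplies, which is exactly why a rogue router can hand out routable addresses. The standard modified EUI-64 derivation can be sketched as follows:

```python
import ipaddress

def eui64_interface_id(mac):
    """Modified EUI-64 interface identifier (RFC 4291, appendix A):
    insert 0xFFFE in the middle of the MAC and flip the universal/local bit."""
    octets = bytes(int(part, 16) for part in mac.split(":"))
    return bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:]

def slaac_address(prefix, mac):
    """Address a host would autoconfigure from an advertised /64 prefix,
    whether the advertising router is legitimate or rogue."""
    iid = int.from_bytes(eui64_interface_id(mac), "big")
    net = ipaddress.IPv6Network(prefix)
    return str(ipaddress.IPv6Address(int(net.network_address) | iid))
```

For example, a host with MAC 00:11:22:33:44:55 receiving an advertisement for 2001:db8::/64 configures itself as 2001:db8::211:22ff:fe33:4455 with no administrator involvement.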
Although new tools are being developed, the security considerations associated with an IPv6 transition make configuration management of federal systems extremely important. We determined that common IPv6 tunneling techniques could be controlled by implementing best practices for IPv4 security, specifically by tightening the firewalls to deny direct outbound connections and by requiring proxies for allowed protocols and ports. We also noted that tighter configuration management, including restricting user privileges, could help control IPv6 usage by end hosts and that network intrusion detection systems could be tuned to detect IPv6 traffic and common tunneling techniques. In April 2005, the United States Computer Emergency Readiness Team (US-CERT), located at the Department of Homeland Security, issued an IPv6 cyber security alert to federal agencies based on our testing and discussions with DHS officials. The alert warned federal agencies that unmanaged, or rogue, implementations of IPv6 present network management security risks. Specifically, the US-CERT notice informed agencies that some firewalls and network intrusion detection systems do not provide IPv6 detection or filtering capability, and malicious users might be able to tunnel IPv6 traffic through these security devices undetected. US-CERT provides agencies with a series of short-term solutions, including determining whether firewalls and intrusion detection systems support IPv6, implementing additional IPv6 security measures, and identifying IPv6 devices and disabling them if not necessary. Recognizing the importance of planning, DOD has made progress in developing a business case, policies, a timeline, and methods for transitioning to IPv6, but similar efforts at the majority of the other CFO agencies are lacking. Despite these efforts, Defense still faces major challenges in managing its transition to IPv6. 
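The mitigations described above reduce to filter rules that are aware of IPv6 and its tunneling signatures. The toy rule set below illustrates the "default deny outbound, proxy allowed ports" posture; the specific rules are an illustration, not US-CERT's guidance verbatim (Teredo, a UDP-based tunneling mechanism not discussed in the report, is included for completeness):

```python
# Protocol numbers and ports for common IPv6-over-IPv4 tunneling signatures.
PROTO_41 = 41        # 6in4 / 6to4: IPv6 encapsulated directly in IPv4
TCP, UDP = 6, 17
TEREDO_PORT = 3544   # Teredo tunnels IPv6 inside UDP on this port

def allow_outbound(protocol, dst_port=None):
    """Default-deny outbound policy that also blocks tunneling signatures."""
    if protocol == PROTO_41:
        return False                      # no direct 6in4 tunnels
    if protocol == UDP and dst_port == TEREDO_PORT:
        return False                      # no Teredo tunnels
    if protocol == TCP and dst_port in (80, 443):
        return True                       # web traffic, assumed proxied
    return False                          # everything else: deny by default
```

The design point is that tunnels ride on ordinary IPv4 protocols and ports, so an IPv4 filter can block them without understanding IPv6 at all.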
The majority of the other CFO agencies report they have not begun to address key transition planning issues, such as developing plans, business cases, and estimating costs. Defense’s transition to IPv6 is a key component of its business case to improve interoperability among many information and weapons systems, known as the Global Information Grid (GIG). The IPv6 component of GIG is to facilitate DOD’s goal of achieving network-centric operations by exploiting these key characteristics of IPv6: enhanced quality of service, and enhanced security features. The increased address space provides DOD with an opportunity to reconstitute its address space architecture to better address the future proliferation of numerous unmanned sensors and mobile assets. Using this architecture, the department plans to use IPv6 as part of the GIG. Although no final decisions have been made, DOD could use the increased address space to render a three-dimensional map of the globe, or theater of combat, using IP addresses as coordinates. This, along with other GIG components, would allow tracking movements of, and maintain detailed information on, military vehicles and individual soldiers in real time. Permitting devices to directly communicate on the move is essential, because DOD wants to use the enhanced mobility and automatic configuration to rapidly deploy networks across the globe. Further, Defense believes that the return to an end-to-end communications security model will allow it to provide greater information assurance by, among other things, providing for more secure peer-to-peer communications. Finally, Defense requires IPv6’s improved quality of service features to enhance many of its other initiatives, such as voice over IP. DOD’s efforts to develop policies, timelines, and methods for transitioning to IPv6 are progressing. 
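The report stresses that no final decisions have been made; purely as an illustration of how 64 host bits could carry coordinates, the hypothetical encoding below packs quantized latitude, longitude, and altitude into the lower half of an IPv6 address. The field widths and resolution are invented for the example:

```python
import ipaddress

def coords_to_address(prefix, lat, lon, alt_m):
    """Hypothetical encoding: lat/lon at 1e-5 degree resolution (25 and 26
    bits) and altitude in meters (13 bits) packed into the 64 host bits."""
    lat_q = round((lat + 90.0) * 100_000)     # 0..18,000,000 fits in 25 bits
    lon_q = round((lon + 180.0) * 100_000)    # 0..36,000,000 fits in 26 bits
    host = (lat_q << 39) | (lon_q << 13) | (alt_m & 0x1FFF)
    net = ipaddress.IPv6Network(prefix)
    return str(ipaddress.IPv6Address(int(net.network_address) | host))

def address_to_coords(addr):
    """Recover (lat, lon, altitude_m) from an address built above."""
    host = int(ipaddress.IPv6Address(addr)) & ((1 << 64) - 1)
    lat = (host >> 39) / 100_000 - 90.0
    lon = ((host >> 13) & ((1 << 26) - 1)) / 100_000 - 180.0
    return lat, lon, host & 0x1FFF
```

Under such a scheme an asset's address would itself locate it in the theater, which is the flavor of capability the GIG discussion envisions.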
Some of the department’s efforts to transition to IPv6 have been under way for approximately 10 years, including the following: In 1995, the Department of the Navy first began working with IPv6, and subsequently deployed IPv6 test beds in 2000 and 2001. In 1998, DOD began, along with our North Atlantic Treaty Organization partners, joint action on IPv6-related issues. In 2003, one of the Navy’s early test beds, the Defense Research and Engineering Network, was selected to be the overall DOD IPv6 pilot. In 2003, the Office of the DOD Chief Information Officer issued a mandate that, as of October 2003, all assets developed, procured, or acquired must be IPv6-capable and, in addition, the assets must maintain interoperability with IPv4 systems capabilities. In 2004, Defense established an IPv6 transition office to provide the overall coordination, common engineering solutions, and technical guidance across the department to support an integrated and coherent transition to IPv6. DOD’s Transition Office performs a central role in coordination of IPv6 planning, including developing detailed guidance and policies for implementing schedules and designs for DOD. This guidance includes deriving departmentwide requirements, technical guidance—including IPv6 addressing—transition techniques, network architecture guidance, and applications development guidance. While the Transition Office provides the overall planning framework, the accountability for the actual transition resides within each of the individual services and defense agencies. These DOD components are to use the core planning guidance, time frames, and metrics that the Transition Office develops within their respective transition models. 
The Transition Office, under the authority of the Defense Information Systems Agency, is in the early stages of its work and has developed an early set of work products, including a draft system engineering management plan, risk management planning documentation, budgetary documentation, requirements criteria, and a master schedule. The management schedule includes a set of implementation milestones that include DOD’s goal of transitioning to IPv6 by fiscal year 2008. A senior Transition Office official stated that the department plans to develop an end-to-end communications security model by fiscal year 2008 as well. In addition to its internal IPv6 coordination-related activities, the Transition Office has built relationships with other federal agencies, North Atlantic Treaty Organization partners and coalition allies, IETF, and academic institutions, and is currently working with the American Registry of Internet numbers to allocate the requisite IPv6 address space for the department. In parallel with the Transition Office’s efforts, the Office of the DOD Chief Information Officer has created a transition plan that includes sections on transition governance, acquisition and procurement, transition tasks and milestones, and program and budget. The Chief Information Officer has responsibility for ensuring a coherent and timely transition, establishing and maintaining the overall departmental transition plan, and is the final approval authority for any IPv6 transition waivers. Other key players in the department’s transition are the Defense Information Systems Agency, Joint Forces Command, the National Security Agency, and the Defense Intelligence Agency. 
Although DOD has made substantial progress in developing a planning framework for transitioning to IPv6, it still faces challenges, including developing an inventory of GIG systems that have IPv6-capable software, finalizing its IPv6 transition plans, monitoring its operational networks for unauthorized IPv6 traffic, and developing a comprehensive enforcement strategy, including leveraging its existing budgetary and acquisition review process. According to DOD officials, the department recognizes the need to monitor IPv6 traffic and has taken steps to minimize this risk. For example, it has established policies addressing IPv6 use in an operational environment. Unlike DOD, the majority of other federal agencies reporting have not yet initiated transition planning efforts for IPv6. For example, of the 22 agencies that responded, only 4 agencies reported having established a date or goal for transitioning to IPv6. The majority of agencies have not addressed key planning considerations (see table 1). For example, 22 agencies report not having developed a business case, 21 agencies report not having plans, 19 agencies report not having inventoried their IPv6-capable equipment, and 22 agencies report not having estimated costs. The increase in IPv6 address space and the other new features of the protocol are designed to promote flexibility, functionality, and security in networks. IPv6 can facilitate the development of a variety of new applications that take advantage of the end-to-end communications it provides. Through the use of IPv6 and associated new applications, federal agencies can have new ways of delivering business services and conducting operations. Nevertheless, transitioning to IPv6 presents federal agencies with challenges, including addressing key planning considerations and taking immediate actions to ensure the security of agency information and networks. 
By recognizing that an IPv6 transition is under way, agencies can begin developing risk assessments, business cases, policies, cost estimates, timelines, and methods for the transition. If agencies do not address these key planning issues and seek to understand the potential scope and complexities of IPv6 issues—whether agencies plan to transition immediately or not—they will face potentially increased costs and security risks. For example, if federal contracts for IT systems and services do not require IPv6 compatibility, agencies may need to make costly upgrades. Finally, if not managed, existing IPv6 features in agency networks can be abused by attackers who have access to federal information and resources without being detected. Undetected penetrations of federal networks can have far-reaching impacts on the security of both information and the operations it supports. Transitioning to IPv6 is a pervasive challenge for federal agencies that could result in significant benefits to agency services. But such benefits may not be realized if action is not taken to ensure that agencies are addressing the attendant challenges. Recognizing the importance of planning, DOD has made progress addressing some key planning considerations, but still faces challenges. However, the vast majority of federal agencies have not yet started this process. If their respective progress is not monitored closely, it could result in significant costs for the federal government. We recommend that the Director of OMB take the following two actions: 1. Instruct federal agencies to begin addressing key IPv6 planning considerations, including developing inventories and assessing risks, creating business cases for the IPv6 transition, establishing policies and enforcement mechanisms, identifying timelines and methods for transition, as appropriate. 2. 
Amend the Federal Acquisition Regulation with specific language that requires that all information technology systems and applications purchased by the federal government be able to operate in an IPv6 environment. Because of the immediate risk that poorly configured and unmanaged IPv6 capabilities present to federal agency networks, we are recommending that agency heads take immediate actions to address the near-term security risks, including determining what IPv6 capabilities they may have, and initiate steps to ensure that they can control and monitor IPv6 traffic. We provided a draft of this report to DOD, Commerce, and OMB for review and comment. In providing oral comments, officials from DOD’s IPv6 Transition Office, Commerce’s National Institute of Standards and Technology, and OMB’s Offices of Information and Regulatory Affairs and General Counsel generally agreed with the contents of the report and provided technical corrections, which we incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees; the Director, Office of Management and Budget; and the heads of all major departments and agencies. Copies of this report will be made available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact David Powner at (202) 512-9286, or [email protected]; Keith Rhodes at (202) 512-6412, or [email protected]; or J. Paul Nicholas at (202) 512-4457, or [email protected]. Major contributors to this report are listed in appendix II. 
The objectives of our review were to describe the key characteristics of Internet Protocol version 6 (IPv6); identify the key planning considerations for federal agencies in transitioning to IPv6; and determine the progress made by the Department of Defense (DOD) and other major federal agencies to transition to IPv6. For our first two objectives, the scope included the Department of Commerce, the Office of Management and Budget, and various federal and nonfederal technical experts. For our third objective, we focused on DOD and the other 23 major federal departments and agencies. To describe the key characteristics of IPv6 and identify the key considerations for the federal agencies in transitioning to IPv6, we researched and analyzed technical documents and gathered data from IPv6 experts in government and industry. Specifically, we reviewed a number of key documents and text, including IPv6-related documents from the Internet Engineering Task Force, technical papers on IPv6 capabilities and security issues, the President’s National Strategy to Secure Cyberspace, and responses to the Department of Commerce’s request for comment on the IPv6 transition. In addition, we documented IPv6 characteristics and transition considerations with officials from the National Institute of Standards and Technology, the National Telecommunication and Information Administration, the chief technical officer of the IPv6 Forum, a co-author of the TCP/IP protocol suite, key members of the telecommunications industry, members of the Internet Engineering Task Force and Internet Society, and officials from major software and hardware vendors. Further, we conducted computer security tests using our lab to identify potential IPv6 security challenges, including testing stateful packet filtering firewalls, network intrusion detection systems, and hosts representing a variety of operating systems, including Windows XP/2003, Sun Solaris, Linux variants, and IBM z/OS. 
We used IPv4 firewall rules that “default deny all” inbound and “default permit all” outbound, and network intrusion detection systems with default signatures. To determine the progress made by DOD and other relevant federal agencies to transition to IPv6, we analyzed DOD’s IPv6 transition plans, guidelines, and transition schedule. In addition, we met with the Office of the DOD Chief Information Officer, members of the DOD IPv6 Transition Office, and the Defense Information Systems Agency, and reviewed transition challenges and approaches being undertaken by DOD. We also surveyed the other 23 chief financial officer agencies to determine the extent to which they had established a transition date for converting to IPv6; developed IPv6 business cases or transition plans; estimated costs or allocated money for the transition; and identified resource challenges. We performed our work from August 2004 through April 2005 in accordance with generally accepted government auditing standards. Camille Chaires, West Coile, Jamey Collins, John Dale, Neil Doherty, Nancy Glover, Richard Hung, Hal Lewis, Harold Podell, David Plocher, and Eric Winter made key contributions to this report.
The Internet protocol (IP) provides the addressing mechanism that defines how and where information such as text, voice, and video move across interconnected networks. Internet protocol version 4 (IPv4), which is widely used today, may not be able to accommodate the increasing number of global users and devices that are connecting to the Internet. As a result, IP version 6 (IPv6) was developed to increase the amount of available IP address space. It is gaining momentum globally from regions with limited address space. GAO was asked to (1) describe the key characteristics of IPv6; (2) identify the key planning considerations for federal agencies in transitioning to IPv6; and (3) determine the progress made by the Department of Defense (DOD) and other major agencies to transition to IPv6. The key characteristics of IPv6 are designed to increase address space, promote flexibility and functionality, and enhance security. For example, by using 128-bit addresses rather than 32-bit addresses, IPv6 dramatically increases the available Internet address space from approximately 4.3 billion addresses in IPv4 to approximately 3.4 x 10^38 in IPv6. Key planning considerations for federal agencies include recognizing that the transition is already under way, because IPv6-capable software and equipment already exists in agency networks. Other important agency planning considerations include developing inventories and assessing risks; creating business cases that identify organizational needs and goals; establishing policies and enforcement mechanisms; determining costs; and identifying timelines and methods for transition. In addition, managing the security aspects of an IPv6 transition is another consideration since IPv6 can introduce additional security risks to agency information. For example, attackers of federal networks could abuse IPv6 features to allow unauthorized traffic or make agency computers directly accessible from the Internet. 
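The address-space comparison above follows directly from the address widths, 32 bits for IPv4 and 128 bits for IPv6; a quick check of the arithmetic:

```python
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"{ipv4_addresses:,}")      # 4,294,967,296 (about 4.3 billion)
print(f"{ipv6_addresses:.1e}")    # 3.4e+38
```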
DOD has made progress in developing a business case, policies, timelines, and processes for transitioning to IPv6. Despite these efforts, challenges remain, including finalizing plans, enforcing policy, and monitoring for unauthorized IPv6 traffic. Unlike DOD, the majority of other major federal agencies reported not yet having initiated key planning efforts for IPv6. For example, 22 agencies lack business cases; 21 lack transition plans; 19 have not inventoried IPv6 software and equipment; and none had developed cost estimates.
Since September 11, 2001, the federal government has emphasized the need for a coordinated response to maritime threats. In December 2004, the White House issued National Security Presidential Directive 41 (NSPD-41)/Homeland Security Presidential Directive 13 (HSPD-13), Maritime Security Policy, defining maritime domain awareness as the effective understanding of anything associated with the global maritime domain that could impact the security, safety, economy, or environment of the United States. NSPD-41/HSPD-13 also directed the Secretaries of Defense and of Homeland Security to jointly lead an interagency effort to prepare a National Strategy for Maritime Security to align all federal government maritime security programs and initiatives into a comprehensive and cohesive national effort involving appropriate federal, state, local, and private sector entities. Interagency coordination for maritime domain awareness is primarily exercised within the Maritime Security Interagency Policy Committee, which reports to the National Security Council Deputies Committee. A Maritime Domain Awareness Stakeholders Board, consisting of representatives from all departments and the intelligence community, advises the Maritime Security Interagency Policy Committee through its Executive Steering Committee. DOD, the Department of Homeland Security, and the Department of Transportation have all appointed executive agents for maritime domain awareness who, together with a representative of the intelligence community, constitute the Maritime Domain Awareness Stakeholder Board Executive Steering Committee. DOD Directive 2005.02E establishes policy and roles and responsibilities for maritime domain awareness within DOD. 
This directive designated the Under Secretary of Defense for Policy as Office of the Secretary of Defense Principal Staff Assistant to oversee the activities of the DOD Executive Agent for Maritime Domain Awareness and designated the Secretary of the Navy as the DOD Executive Agent for Maritime Domain Awareness. In addition, the directive establishes several management functions that the Executive Agent is required to conduct for maritime domain awareness, including: Overseeing the execution of maritime domain awareness initiatives within DOD and coordinating maritime domain awareness policy with the Under Secretary of Defense (Policy); Developing and distributing goals, objectives, and desired effects for maritime domain awareness, in coordination with the Under Secretary of Defense (Policy) and the Under Secretary of Defense (Intelligence); Identifying and updating maritime domain awareness requirements and resources for the effective performance of DOD missions; and Recommending DOD-wide maritime domain awareness planning and programming guidance to the Under Secretary of Defense (Policy) and the Director of Programming, Analysis, and Evaluation (now the Office of Cost Assessment and Program Evaluation). The Secretary of the Navy issued an instruction in January 2009 that assigned the Chief of Naval Operations with responsibility for achieving maritime domain awareness within the Navy. This responsibility includes aligning Navy guidance with DOD policy guidance and coordinating with the Joint Staff to ensure that combatant commands have the necessary Navy resources to support their respective maritime domain awareness requirements. In May 2009, the DOD Executive Agent for Maritime Domain Awareness requested that the Joint Staff solicit maritime domain awareness annual plans from the military services, combatant commands, and defense intelligence components, as required by DOD Directive 2005.02E. 
In December 2009, the DOD Executive Agent completed an assessment of DOD components’ annual maritime domain awareness plans. The effort was intended to provide the Executive Agent with a “horizontal look” at maritime domain awareness concerns across DOD. The Executive Agent used information from the plans to: (1) gather program and project priorities, (2) formulate and update overarching DOD maritime domain awareness goals and objectives, (3) craft programming and planning recommendations, and (4) synchronize and align combatant command and component efforts and resources. The DOD Executive Agent is currently conducting an assessment of 2010 component plans. DOD relies on organizations both within and outside of the department to achieve maritime domain awareness. The Office of Naval Intelligence is a core element of Global Maritime Intelligence Integration, whose goal is complete maritime domain awareness; the office’s primary mission is to produce meaningful maritime intelligence. The Office of Naval Intelligence produces a Common Operating Picture and a Common Intelligence Picture, both of which are compiled from multiple sources of intelligence. The Office of Naval Intelligence, together with the Coast Guard’s Intelligence Coordination Center, compiles and provides a list of vessels of interest to DOD and Department of Homeland Security (DHS) components. In addition, the National Maritime Intelligence Center, created by the Director of National Intelligence, serves as the integration point for maritime information and intelligence collection and analysis in support of national policy and decision makers, maritime domain awareness objectives, and interagency operations at all levels. DOD, combatant commands, and joint task forces leverage numerous capabilities to enhance maritime domain awareness, including intelligence, surveillance, and reconnaissance collection platforms; intelligence fusion and analysis; and information sharing and dissemination. 
These capabilities assist DOD in responding to the range of maritime challenges, some of which are identified in figure 1. A range of platforms, such as sensors on naval vessels and aircraft, provide intelligence, surveillance, and reconnaissance collection capabilities. Once maritime domain awareness related data is collected, fusion and analysis capabilities assist DOD combatant commands and joint task forces to combine data from a variety of sources to provide information that may include the location, course, destination, cargo, crew, and passengers of a given vessel. In addition, DOD uses a number of capabilities to promote the sharing and dissemination of maritime domain awareness information. For example, the Maritime Safety and Security Information System uses an existing, worldwide vessel safety system—the Automatic Identification System—to produce an unclassified, Internet-based, password-protected ship tracking system. Currently, more than 50 nations participate in the Maritime Safety and Security Information System. In addition, DOD is working with other international partners to set up more advanced networks to share information. To validate joint warfighting requirements, including those associated with maritime domain awareness, DOD uses its Joint Capabilities Integration and Development System. The primary objective of the system is to ensure that the capabilities required by the joint warfighter are identified with their associated operational performance criteria in order to successfully execute assigned missions. The Joint Requirements Oversight Council oversees this system, and Functional Capabilities Boards, headed by a general, admiral, or government civilian equivalent, support the council by evaluating capability needs, recommending enhancements, examining joint priorities, and minimizing duplication of effort across the department. 
There are eight Functional Capabilities Boards: Battlespace Awareness, Building Partnerships, Command and Control, Force Application, Force Support, Logistics, Net-Centric, and Protection. DOD has articulated a broad strategy for maritime domain awareness and identified numerous maritime capability gaps through various documents. However, DOD does not have a departmentwide strategy that adequately defines roles and responsibilities for addressing gaps, aligns objectives with national strategy, and includes measures to guide the implementation of maritime domain awareness efforts, measure progress, and assess and manage risk associated with capability gaps. We previously reported that it is standard practice to have a strategy that lays out goals and objectives, identifies actions for addressing those objectives, allocates resources, identifies roles and responsibilities, and measures performance against objectives. The federal government, DOD, and its components have developed a number of documents that incorporate some of these key elements of an overall strategy for maritime domain awareness. Examples include the following: The National Strategy for Maritime Security broadly identifies threats to maritime security and strategic objectives and actions needed to achieve maritime security. The National Plan to Achieve Maritime Domain Awareness is intended to guide the execution of the security plans tasked in NSPD-41/HSPD-13. It supports the National Strategy for Maritime Security by outlining broad goals, objectives, threats, and priorities in order to coordinate maritime domain awareness efforts at the federal level. U.S. Northern Command and U.S. 
Pacific Command worked with the Joint Staff to develop DOD's Maritime Domain Awareness Joint Integrating Concept to, among other things, provide a common vision for the future of maritime domain awareness-related operations within DOD, identify maritime domain awareness capabilities and tasks and conditions for each capability, and inform future capability analyses. DOD's Executive Agent for Maritime Domain Awareness completed an annual assessment of maritime domain awareness plans prepared by several DOD commands, military services, and defense intelligence components. The assessment organized information from the plans into three critical areas where it determined that DOD must focus and expand its efforts: increased information sharing, enhanced situational awareness, and enhanced data on vessels, cargo, and people. We found that these documents and others DOD and the Navy have developed demonstrate a considerable amount of effort toward defining and organizing DOD's maritime domain awareness efforts, but we determined that they do not have several key elements that a strategy should contain. DOD's Maritime Domain Awareness Joint Integrating Concept and the Assessment of U.S. Defense Components Annual Maritime Domain Awareness Plans are two of the key documents used to guide current maritime domain awareness efforts and execute the national strategies. Table 1 summarizes the desirable characteristics of a strategy and compares them with the elements contained in DOD's Maritime Domain Awareness Joint Integrating Concept and the DOD Executive Agent's Assessment of the U.S. Defense Components Annual Maritime Domain Awareness Plans 2009. DOD and its components have completed or are developing additional efforts that may assist the department in organizing its maritime domain awareness efforts. 
The Department of the Navy developed a strategy for maritime domain awareness in response to a congressional committee report requirement, as well as several draft maritime domain awareness roadmaps to guide the Navy's implementation of maritime domain awareness. Additionally, as of November 2010, the Chief of Naval Operations' Information Dominance Office was developing a Navy Intelligence, Surveillance, and Reconnaissance Roadmap that outlines the Navy's vision for the capabilities needed to fulfill its missions and priorities, including maritime domain awareness. As of November 2010, U.S. Pacific Command was in the process of drafting a maritime domain awareness concept of operations. This concept of operations is intended to provide a common understanding of intelligence support to maritime domain awareness throughout the combatant command. In June 2010, an interagency working group issued the Current State Report, a reference document that identifies maritime domain awareness tasks, capability gaps, and ongoing efforts related to each gap. Finally, in July 2010, the DOD Executive Agent for Maritime Domain Awareness developed maritime domain awareness planning and programming recommendations, which were based, among other things, on the 2009 annual maritime domain awareness plans submitted by DOD components to the Executive Agent. While these efforts may help the individual components work toward more effective maritime domain awareness, developing a departmentwide strategy that clearly outlines objectives and roles and responsibilities will better position DOD to align more detailed objectives with national strategies and coordinate the results of ongoing and future efforts across the department. 
As part of the overall framework for successful strategies, prior GAO work has also emphasized the importance of allocating resources, measuring performance, and monitoring progress as sound management practices critical for decision making and achieving results in specified time frames. While DOD, its interagency partners, and other DOD components have identified numerous capability gaps, DOD does not have a risk-based approach for assessing its maritime capabilities and gaps. Although some interagency-level and DOD component-level documents have prioritized maritime domain awareness capability gaps in comparison to other maritime gaps, the identified gaps have not been allocated resources within DOD. Additionally, DOD does not measure performance or monitor progress in implementing maritime domain awareness and addressing these gaps. We assessed a number of DOD and interagency documents to determine the extent to which resource allocation and performance measurement were incorporated and found mixed results. Examples include the following: National Maritime Domain Awareness Interagency Investment Strategy. DOD representatives collaborated with interagency stakeholders to develop a document that identified critical tasks and recommended lead and supporting federal agency stakeholders to coordinate interagency activities to address these tasks. However, the Interagency Investment Strategy is not what is traditionally considered an investment strategy, with developed cost estimates or proposed dollar amounts for each agency to invest. Instead, it identifies critical capability gaps and makes recommendations on areas for interagency efforts. For example, it recommended that DOD work with DHS and the Office of the Director of National Intelligence to establish national data standards for maritime domain awareness. Interagency Solutions Analysis Current State Report. 
The Current State Report provides the status of maritime domain awareness capability gaps, solutions, and tools in use to address those gaps and the effectiveness of those solutions in mitigating the gaps. This document is an output of the Interagency Solutions Analysis Working Group, a group of interagency subject matter experts who are comparing current capabilities against scenarios that require, among other things, information sharing and other capabilities in the maritime domain. The DOD Executive Agent for Maritime Domain Awareness, the Department of the Navy, and the Office of Naval Intelligence participated in this process. However, this document does not identify resources to address identified gaps. Additionally, this document does not provide metrics to assess performance or monitor progress in addressing identified gaps. Department of Defense Maritime Domain Awareness Joint Integrating Concept. This document identifies required capabilities, associated tasks, and the DOD joint capability area for each required capability and each associated task. However, it does not identify how resources should be targeted to address the capabilities and tasks, nor does it assign specific components within DOD to address each capability and task. Additionally, this document does not contain milestones or describe how progress in addressing the capability gaps and tasks will be measured. Assessment of the U.S. Defense Components Annual Maritime Domain Awareness Plans 2009. The DOD Executive Agent solicited maritime domain awareness annual plans from DOD combatant commands, military services, and defense intelligence components. The plans outlined each component's planned maritime domain awareness capabilities and described current gaps. The Executive Agent assessed the plans and listed critical areas for expanded focus and efforts. However, several DOD components did not submit plans, so the assessment may not include departmentwide data. 
Also, as identified in table 1, this assessment does not incorporate several key elements that would help guide DOD's implementation of maritime domain awareness, including an allocation of resources and investments, performance measures, and a mechanism to monitor progress. Department of the Navy Initial Capabilities Document for Data Fusion and Analysis Functions of Navy Maritime Domain Awareness. This 2009 Navy document summarized a capabilities-based assessment that identified capability shortfalls and recommended approaches to improve the Navy's overall maritime domain awareness capability. According to some DOD officials, this initial capabilities document reflects the Navy's view, but not necessarily the views of other DOD components and interagency stakeholders. For example, many Navy maritime domain awareness documents are Navy-centric, and it is unclear how they align with interagency efforts. Lastly, the Navy initial capabilities document does not allocate resources to the identified gaps. These documents articulated broad strategic goals for maritime domain awareness and identified several critical capability gaps; however, DOD has not allocated resources to these efforts. Additionally, the gaps identified in the Department of the Navy initial capabilities document, DOD's Maritime Domain Awareness Joint Integrating Concept, and the National Maritime Domain Awareness Working Group Interagency Investment Strategy were separately approved by DOD's Joint Requirements Oversight Council, but DOD has not developed a departmentwide capability gap assessment for approval by the council. We also previously reported that the requirements determination process is more focused on the needs of the military services than on those of the joint warfighter, and that combatant command and defense intelligence agency needs are often not incorporated into this process. 
A departmentwide strategy, including a capability gap assessment, would assist DOD in assessing and prioritizing maritime domain awareness capability gaps that have already been identified through various service and interagency efforts in order to integrate them into its corporate processes—such as the Joint Capabilities Integration and Development System—for determining requirements and allocating resources. Interagency maritime domain awareness documents identified maritime capability gaps and designated DOD as the lead agency to address some of these gaps. For example, in October 2005, the National Plan to Achieve Maritime Domain Awareness identified numerous near- and long-term maritime domain awareness priorities relating to maritime capabilities, and listed DOD as the lead agency for 22 of these priorities. In May 2007, the National Maritime Domain Awareness Requirements and Capabilities Working Group developed the National Maritime Domain Awareness Study Interagency Investment Strategy, which prioritized capability gaps. The Interagency Investment Strategy listed DOD as the lead or co-lead agency to address a majority of the prioritized gaps. The Maritime Domain Awareness Executive Steering Committee approved an execution plan for a maritime domain awareness Interagency Solutions Analysis, which would develop a coordinated, interagency approach for addressing previously identified gaps. In April 2010, the Interagency Solutions Analysis Working Group decided to focus the interagency group's initial efforts on closing existing gaps related to information in three areas: people, cargo, and vessels. In addition to interagency efforts, DOD and Navy documents have identified maritime domain awareness capability gaps related to the department's ability to collect, analyze, and share information on maritime vessels. 
For example, DOD’s Maritime Domain Awareness Joint Integrating Concept identified required capabilities that the joint forces will need to address in order to conduct future operations to develop and maintain awareness of the maritime domain. In addition, DOD is conducting a Maritime Domain Awareness Joint Integrating Concept capabilities-based assessment that is considering current and programmed capabilities through 2012 in addition to projections of future programs. An initial capabilities document for this assessment was approved on November 29, 2010. This capabilities-based assessment is also intended to validate the Maritime Domain Awareness Joint Integrating Concept and provide a baseline of maritime domain awareness elements to inform interagency efforts. Key themes have emerged through the identification of capability gaps in several national, interagency, and department documents that DOD may need to address to support maritime domain awareness. DOD components have also identified maritime domain awareness capability gaps. While initial capability assessments share common themes, there has not been a departmentwide prioritization of these capability gaps. As DOD components start developing solutions for these gaps and allocating resources, the absence of a departmentwide prioritization may result in unnecessary duplication of efforts or redundancy in addressing shared capability gaps. A departmentwide prioritization, determined by a comprehensive, risk-based approach would assist decision makers in more effectively allocating resources to the joint forces departmentwide and contribute to interagency efforts to prioritize maritime capability gaps. DOD has not assessed the risk associated with its maritime capability gaps, in addition to not prioritizing these gaps. As we have previously reported, an agency’s strategic plan should, among other things, address risk-related issues that are central to the agency’s mission. 
To provide a basis for analyzing these risk management strategies, we have developed a framework based on industry best practices and other criteria. This framework, shown in figure 2, divides risk management into five major phases: (1) setting strategic goals and objectives, and determining constraints; (2) assessing risks; (3) evaluating alternatives for addressing these risks; (4) selecting the appropriate alternatives; and (5) implementing the alternatives and monitoring the progress made and results achieved. Even though DOD, its interagency partners, and its components have made efforts to identify and start prioritizing capability gaps, DOD does not have a departmentwide risk assessment to address high priority capability gaps. DOD Directive 2005.02E, which establishes the department’s policy for maritime domain awareness, states that the department will determine its resource priorities and awareness levels needed to persistently monitor the maritime domain. The 2010 Quadrennial Defense Review states that risk management is central to effective decision-making. As shown in table 1, we have previously reported that risk assessment and risk management are desirable characteristics of national strategies. We have described risk assessments as including an analysis of threats to, and vulnerabilities of, critical assets and operations. The results of risk assessments may be used to define and prioritize related resource and operational requirements. Currently, maritime domain awareness is prioritized through various mechanisms across DOD, instead of through a departmentwide approach. For example, DOD’s combatant commands and components prioritize maritime domain awareness differently based upon their respective missions. Additionally, when prioritizing capabilities across DOD, maritime domain awareness falls into multiple capability areas. 
For example, according to DOD documents and DOD officials, maritime domain awareness capabilities are assessed under multiple joint capability areas and functional capability boards through the Joint Capabilities Integration and Development System process. Figure 3 illustrates this. The various interagency and DOD views on capability gaps and priorities may not provide a full assessment of the risks associated with these gaps at a departmentwide level. Table 2 illustrates that current DOD-wide documents do not meet all of GAO's criteria for a risk assessment. Prior GAO work has noted that while principles of risk management acknowledge that risk generally cannot be eliminated altogether, enhancing protection from known or potential threats can significantly reduce risk. Efforts such as the Maritime Domain Awareness Joint Integrating Concept and the Assessment of U.S. Defense Components Annual Maritime Domain Awareness Plans have demonstrated DOD's progress in identifying capability gaps related to maritime domain awareness, but have not been included in a larger, departmentwide maritime domain awareness risk assessment. As a result, DOD may lack the insight needed to actively manage the risk associated with identified capability gaps. Additionally, because maritime domain awareness is a broad interagency effort, DOD may be unable to effectively coordinate with its interagency partners in the absence of a clear departmentwide strategy for maritime domain awareness. Consolidating these component efforts to prioritize capability gaps into a comprehensive departmentwide approach to risk management may facilitate developing solutions for each gap. 
A strategy that includes a comprehensive, risk-based approach to managing maritime domain awareness, including a departmentwide assessment of the critical capabilities, may also provide better information to decision makers about the potential implications of policy and resourcing decisions both within DOD and across the interagency. Our prior work has shown that a strategy including goals, roles, and responsibilities; resource allocation; and performance measures can help ensure that agencies are supporting national and interagency objectives. Achieving maritime domain awareness requires cooperation across a range of agencies at the federal, state, and local levels. DOD has a lead role in maritime domain awareness both because it serves as a key enabler for its own maritime activities and because it is positioned to provide many of the resources that assist other agencies in meeting their respective maritime domain awareness needs. It is important that DOD components' efforts be consolidated and aligned with one another to ensure that departmentwide maritime domain awareness needs are met and appropriate contributions are made to the efforts of DOD's interagency partners. In the absence of a departmentwide strategy for maritime domain awareness, including the prioritized allocation of resources to maritime domain awareness, measures of performance in meeting the goals and objectives, monitoring of progress in addressing capability gaps, and assessment of risk, DOD may not be effectively managing its maritime domain awareness efforts. Efforts on the part of DOD combatant commands, military services, the DOD Executive Agent for Maritime Domain Awareness, and interagency working groups resulted in the identification of several capability gaps, some identified by multiple components. The next step in achieving effective departmentwide maritime domain awareness would be a departmentwide strategy and risk assessment that incorporates these efforts. 
As DOD and the rest of government face increasing demand and competition for resources, policymakers will confront difficult decisions on funding priorities. Threats to the maritime domain are numerous and include the use of large merchant vessels to transport weapons of mass destruction; explosive-laden suicide boats as weapons; and vessels to smuggle people, drugs, weapons, and other contraband. The importance and vulnerabilities of the maritime domain require that efforts be made to reduce the risk of maritime threats and challenges, such as a terrorist attack or acts of piracy. Additionally, a comprehensive, risk-based approach would help DOD capitalize on the considerable effort it and its components have already devoted to maritime domain awareness, make the best use of resources in a fiscally constrained environment, and contribute to interagency efforts to address maritime threats. A strategic, risk-based approach is particularly important in light of emerging threats in the maritime domain and an increased strain on government resources. Such a departmentwide approach will provide DOD with important tools that can assist in confronting the myriad policy and fiscal challenges the department faces. 
To improve DOD’s ability to manage the implementation of maritime domain awareness across DOD we recommend that the Secretary of Defense direct the Secretary of the Navy, as DOD’s Executive Agent, to take the following two actions: Develop and implement a departmentwide strategy for maritime domain awareness that, at a minimum Identifies DOD objectives and roles and responsibilities within DOD for achieving maritime domain awareness, and aligns efforts and objectives with DOD’s corporate process for determining requirements and allocating resources; and Identifies responsibilities for resourcing capability areas and includes performance measures for assessing progress of the overall strategy that will assist in the implementation of maritime domain awareness efforts. In collaboration with other maritime interagency stakeholders, such as the Coast Guard and the National Maritime Intelligence Center, perform a comprehensive risk-based analysis to include consideration of threats, vulnerabilities, and criticalities relating to the management of maritime domain awareness in order to prioritize and address DOD’s critical maritime capability gaps and guide future investments. In written comments on a draft of the prior, sensitive report, DOD concurred with our recommendations and discussed actions they are taking—or plan to take—to address them. DOD’s written comments are reprinted in their entirety in appendix II. DOD also provided technical comments, which we have incorporated into the report where appropriate. In concurring with the first recommendation, DOD stated that they have completed the initial policy, goals, and objectives for maritime domain awareness and promulgated it in a document to all DOD components. DOD also stated their intent to identify responsibilities for resourcing capability gaps and performance measures for assessing progress in achieving maritime domain awareness. 
DOD identified further steps it is taking to establish objectives for maritime domain awareness, assign appropriate roles and responsibilities, and conduct a second assessment of annual maritime domain awareness plans to inform DOD's overall effort to develop a departmentwide strategy. We believe these actions will address the intent of our recommendation and better enable DOD to address maritime capability gaps. DOD also concurred with our second recommendation. DOD stated that it will collaborate with the other principal members of the National Maritime Domain Awareness Coordination Office to develop a comprehensive, risk-based approach for maritime domain awareness. The DOD Executive Agent is also requesting that DOD components include risk assessments in their annual maritime domain awareness plans. We believe these actions will address the intent of our recommendation and help DOD prioritize its maritime capability gaps and guide future investment decisions. We are distributing this report to the Secretary of Defense, the Secretary of the Navy, and other relevant DOD officials. We are also sending copies of this report to interested congressional committees. The report is also available on our Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. 
We were initially asked to look at four questions: (1) to what extent has the Department of Defense (DOD) developed the capability to perform intelligence, surveillance, and reconnaissance (ISR) activities in the maritime domain; (2) to what extent has DOD integrated the maritime domain awareness investment strategy into its overall ISR capability investment strategy; (3) to what extent does DOD have operational plans and planning and coordination structures in place to meet maritime domain awareness and maritime homeland defense requirements; and (4) what gaps, if any, exist in DOD's ability to identify maritime threats, achieve fusion of information sources from interagency and international partners, coordinate a response, and deploy forces to address identified threats at all relevant distances from the United States. We agreed with the requesters to respond to this request with two reports. The first report focuses on maritime capabilities and the second report will focus on maritime homeland defense. However, we considered the homeland defense perspective when we determined our site visits so we could gather relevant data on maritime homeland defense where possible and feasible to support the second report. As a result, we focused the scope of this audit on geographic combatant commands that had both a maritime focus and a homeland defense mission set. We determined that U.S. Northern Command, U.S. Southern Command, and U.S. Pacific Command met these criteria, and we conducted site visits to facilities, such as operations centers, engaged in both maritime domain awareness and homeland defense that support these combatant commands. 
To determine what capabilities DOD currently uses to support maritime domain awareness, what gaps still exist, and how these gaps are prioritized, we assessed capability needs established in national guidance such as the National Plan to Achieve Maritime Domain Awareness and DOD guidance such as the Joint Integrating Concept and DOD Directive 2005.02E, which establishes DOD policy for maritime domain awareness. We compared this information with current capabilities and gaps described by combatant command, military service, and supporting intelligence agency officials during interviews and site visits. For example, we visited several combatant command and joint operations centers to observe what capabilities were used at maritime operations centers. In addition, we evaluated DOD's efforts to prioritize capability gaps against established DOD acquisition processes such as the Joint Capabilities Integration and Development System. We reviewed prior GAO work on risk management and compared it to existing DOD maritime domain awareness capability documents to determine the extent to which DOD applies a risk-based approach to managing capabilities and identified gaps related to maritime domain awareness. To determine the extent to which DOD developed a strategy to address maritime domain awareness capability gaps, we reviewed prior GAO work on strategic planning, including GAO's work on assessing specific components of national strategies. Given that there is no established set of requirements for strategies, we relied on GAO assessments of national strategies and the criteria that were applied to assess these strategies. We identified six desirable characteristics that national or departmentwide strategies should contain. We assessed these criteria against existing DOD and component-level documents such as the Joint Integrating Concept, the DOD Executive Agent's Assessment of the U.S. 
Defense Components Annual Maritime Domain Awareness Plans 2009, and the Department of the Navy's capability assessment and roadmaps to determine the extent to which these documents contain the elements of a departmentwide strategy. We specifically focused our assessment on the two departmentwide efforts to identify a maritime domain awareness strategy: DOD's Maritime Domain Awareness Joint Integrating Concept and the DOD Executive Agent for Maritime Domain Awareness's Assessment of U.S. Defense Components Annual Maritime Domain Awareness Plans 2009. To determine the extent to which DOD has allocated resources, measured performance, and monitored progress in addressing identified capability gaps, we reviewed the same documents noted above to see if identified gaps were resourced within DOD, and if implementation and monitoring programs were discussed in relation to these gaps. We also assessed the information described in these documents against information obtained from combatant command, military service, and supporting intelligence agency officials during interviews and site visits. To evaluate our reporting objectives, we obtained relevant national, interagency, and DOD-level documentation and interviewed officials from the following DOD components and interagency partners:
 Under Secretary of Defense (Intelligence)
 Office of the Assistant Secretary of Defense for Homeland Defense and Americas' Security Affairs
 Defense Intelligence Agency
 Defense Intelligence Operations Coordination Center
 National Geospatial-Intelligence Agency
 Under Secretary of Defense (Acquisition, Technology and Logistics)
 Joint Chiefs of Staff
 Department of the Navy: Executive Agent for Maritime Domain Awareness; Office of the Chief of Naval Operations (N3/N5); Office of the Chief of Naval Operations, Information Dominance Division (N2/N6); Office of the Chief Information Officer; Office of Naval Intelligence; Office of Naval Research; U.S. Navy Pacific Fleet; U.S. Navy Third Fleet; Naval Air Systems Command; Space and Naval Warfare Systems Command
 Combatant Commands: Headquarters, U.S. Pacific Command; Headquarters, U.S. Northern Command; Headquarters, North American Aerospace Defense Command; Headquarters, U.S. Southern Command; Headquarters, Fleet Forces Command; Joint Forces Component Command for Intelligence, Surveillance and Reconnaissance, U.S. Strategic Command
 The United States Coast Guard: Headquarters; District Five, Sector Hampton Roads; District Eleven, Sector San Diego; Intelligence Coordination Center; Maritime Intelligence Fusion Center (Atlantic Area); Maritime Intelligence Fusion Center (Pacific Area); Joint Harbor Operations Center, Port of San Diego
 The Office of Global Maritime Situational Awareness / National Maritime Domain Awareness Coordination Office
We conducted this performance audit primarily from June 2009 through November 2010 in accordance with generally accepted government auditing standards, and coordinated with DOD from January to June 2011 to produce this public version of the prior, sensitive report issued in November 2010. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Davi M. D'Agostino, (202) 512-5431 or [email protected]. In addition to the contact named above, Joseph Kirschbaum (Assistant Director), Alisa Beyninson, Christy Bilardo, Stephen Caldwell, Gina Flacco, Brent Helt, Greg Marchand, Timothy Persons, Steven Putansu, Amie Steele, and Cheryl Weissman made key contributions to this report.
Maritime security threats to the United States are broad, including the naval forces of potential adversary nations, terrorism, and piracy. The attacks on the USS Cole in 2000, in Mumbai in 2008, and on the Maersk Alabama in 2009 highlight these very real threats. The Department of Defense (DOD) considers maritime domain awareness--that is, identifying threats and providing commanders with sufficient awareness to make timely decisions--a means for facilitating effective action in the maritime domain and critical to its homeland defense mission. GAO was asked to examine the extent to which DOD has developed a strategy to manage its maritime domain awareness efforts and uses a risk-based approach. GAO analyzed national and DOD documents; interviewed DOD and interagency maritime domain awareness officials; and conducted site visits to select facilities engaged in maritime-related activities. This report is a public version of a previous, sensitive report.

DOD has identified numerous maritime capability gaps and developed documents that articulate a broad strategy for maritime domain awareness. However, DOD does not have a departmentwide strategy that adequately defines roles and responsibilities for addressing gaps, aligns objectives with national strategy, and includes measures to guide the implementation of maritime domain awareness efforts and to assess and manage risk associated with capability gaps. GAO has previously reported that it is standard practice to have a strategy that lays out goals and objectives, suggests actions for addressing those objectives, allocates resources, identifies roles and responsibilities, and measures performance against objectives. DOD and its components have developed a number of documents that incorporate some of these key elements of an overall strategy for maritime domain awareness, such as a definition of the problem. Collectively, however, these documents lack several key elements a strategy should contain.
For example, neither DOD's Maritime Domain Awareness Joint Integrating Concept nor DOD's Executive Agent's Assessment of U.S. Defense Components Annual Maritime Domain Awareness Plans fully addresses organizational roles and responsibilities, resources, investments, performance measures, and risk management. Additionally, DOD leverages numerous capabilities to collect, fuse, and share maritime information to respond to global maritime challenges. DOD components have identified and started prioritizing capability gaps; however, DOD does not have a departmentwide risk assessment to address high-priority capability gaps. DOD combatant commands and components prioritize maritime domain awareness differently based upon their respective missions, and these component-level views may not provide a full view of the risks associated with these gaps at a departmentwide level. Prior GAO work has emphasized the importance of using a comprehensive risk assessment process. A strategy that includes a comprehensive, risk-based approach to managing maritime domain awareness may provide better information to decision makers about the potential implications of policy and resourcing decisions both within DOD and across the interagency. In the absence of a departmentwide strategy, DOD may not be effectively managing its maritime domain awareness efforts.

GAO recommends that DOD (1) develop and implement a strategy that establishes objectives, roles, and responsibilities for maritime domain awareness; aligns with DOD's corporate process; identifies capability resourcing responsibilities; and includes performance measures; and (2) perform a comprehensive risk-based analysis, including prioritized capability gaps and future investments. DOD agreed with the recommendations.
The Workforce Investment Act created a new, comprehensive workforce investment system designed to change the way employment and training services are delivered. When WIA was enacted in 1998, it replaced the Job Training Partnership Act (JTPA) with three new programs—Adult, Dislocated Worker, and Youth—that allow for a broader range of services to the general public, no longer using income to determine eligibility for all program services. These new programs no longer focused exclusively on training, but provided for three tiers, or levels, of service for adults and dislocated workers: core, intensive, and training. Core services include basic services such as job searches and labor market information. These activities may be self-service or require some staff assistance. Intensive services include such activities as comprehensive assessment and case management, as well as classes in literacy, conflict resolution, work skills, and those leading to general equivalency diploma (GED)—activities that require greater staff involvement. Training services include such activities as occupational skills or on-the-job training. These tiers of WIA-funded services are provided sequentially. That is, in order to receive intensive services, job seekers must first demonstrate that core services alone will not lead to getting a job that will provide self-sufficiency. Similarly, to receive training services, a job seeker must show that core and intensive services will not lead to such a job. Unlike prior systems, WIA requires that individuals eligible for training under the Adult and Dislocated Worker Programs receive vouchers—called Individual Training Accounts—which they can use for the training provider and course offering of their choice, within certain limitations. 
In addition to establishing the three new programs, WIA requires that services for these programs, along with those of a number of other employment and training programs, be provided through a single service delivery system—the one-stop system. States were required to implement these changes by July 1, 2000. Sixteen categories of programs from four separate federal agencies must provide services through the system. (See table 1.) Each local area must have at least one comprehensive one-stop center where core services for all mandatory programs are accessible. WIA allows flexibility in the way these mandatory partners provide services through the one-stop system, allowing colocation, electronic linkages, or referrals to off-site partner programs. While WIA requires these mandatory partners to participate, it does not provide additional funds to operate one-stop systems and support one-stop partnerships. As a result, mandatory partners are expected to share the costs of developing and operating one-stop centers. In addition to mandatory partners, one-stop centers have the flexibility to include other partners in the one-stop system to better meet specific state and local workforce development needs. Services may also be provided at affiliated sites, defined as designated locations that provide access to at least one employment and training program. About $3.3 billion was appropriated in fiscal year 2006 for the three WIA programs—Adult, Dislocated Worker, and Youth. The formulas for distributing these funds to the states were left largely unchanged from those used to distribute funds under the predecessor program, JTPA, and are based on such factors as unemployment rates and the relative number of low-income adults and youth in the population. In order to receive their full funding allocations, states must report on the performance of their three WIA programs.
WIA requires that performance measures gauge program results in the areas of job placement, retention, earnings, skill attainment, and customer satisfaction, largely through the use of Unemployment Insurance (UI) wage records. Labor's guidance requires that job seekers be tracked for outcomes when they begin receiving core services that require significant staff assistance. States are held accountable by Labor for their performance and may receive incentive funds or suffer financial sanctions based on whether they meet performance levels. WIA requires states to negotiate with Labor to establish expected performance levels for each measure. While WIA established performance measures for the three WIA-funded programs, it did not establish any comprehensive measures to assess the overall performance of the one-stop system. Seven years after the implementation of the workforce investment system under WIA, the system's infrastructure continues to evolve. Nationwide, the number of comprehensive one-stop centers has decreased somewhat, but not uniformly across states. States generally reported increased availability of services for some of the mandatory programs at comprehensive one-stop centers. But despite WIA's requirement that all mandatory partners provide services through the one-stop system, some states have maintained a completely separate system for delivering services for Wagner-Peyser-funded Employment Services (ES). Adults and dislocated workers receive a wide range of services through the one-stop system, but states and local areas have generally focused their youth services on in-school youth, finding it difficult to recruit and retain out-of-school youth. Most medium and large employers are aware of and use the system and are quite satisfied with its services, but they generally use one-stop centers to fill their needs for low-skilled workers. WIA's service delivery infrastructure has continued to evolve since we last reviewed it in 2001.
Over the 6-year period, nationwide, the number of one-stop centers—both comprehensive and satellite—has declined, a fact that states most often attributed to a decrease in funding. The number of comprehensive centers declined from a high of 1,756 in 2001 to 1,637 in 2007. However, this trend is not uniform across states. Ten states reported an increase in comprehensive centers over the last 4 years. For example, Montana reported a 600 percent increase in centers as part of a statewide restructuring of its one-stop delivery system that involved converting former satellite and affiliated sites into comprehensive one-stop centers. States that reported an increase in the number of comprehensive one-stop centers often cited a rise in demand for services as the reason for the increase. As of 2007, services for mandatory programs were increasingly available through the one-stop system, though not always on-site. States continue to have services for two key programs—WIA Adult and Dislocated Workers—available on-site at the majority of the one-stop centers. In addition, 30 states reported that TANF services were generally available on-site at a typical comprehensive one-stop center, and 3 more states reported they were typically on-site at satellites. The on-site availability of some other programs—such as Job Corps, Migrant and Seasonal Farmworkers, the Senior Community Service Employment Program, and Adult Education and Literacy—declined slightly between 2001 and 2007. However, the overall availability of these programs' services increased, largely because of substantial increases in access through electronic linkages and referrals. Despite the increased availability of some programs at one-stop centers, some states have not fully integrated all of their Wagner-Peyser-funded Employment Service into the system. Six states reported in our 2007 survey that they operate stand-alone Employment Service offices, all completely outside the one-stop system.
Another four states reported having at least some stand-alone offices outside the system (see fig. 1). At the same time, states that operate stand-alone offices also report providing services on-site at the majority of their one-stops. Labor has expressed concern that stand-alone Employment Service offices cause confusion for individuals and employers and promote duplication of effort and inefficient use of resources. Given the concern over resources, we asked states to estimate the share of their total Employment Service allotment used to support the infrastructure of the stand-alone offices. Only six states could provide estimates, and the overall average was about 5 percent. However, the state with the most stand-alone ES offices reported that it had not used any of its ES allotment to support the infrastructure of these offices. Instead, this state financed the infrastructure costs of its 30 stand-alone offices with state general funds. Despite their concerns, Labor officials say that they lack the authority to prohibit stand-alone ES offices. While most states used multiple program funds to finance the operation of their one-stops, WIA and ES continue to be the two programs most often cited as funding sources used to cover one-stop infrastructure—or nonpersonnel—costs. In program year 2005, the most recent year for which data are available, 23 states reported that WIA was the top funding source used to support infrastructure, while 19 states identified the Employment Service. Of the eight states remaining, three cited TANF as the top funding source, two cited Unemployment Insurance, one cited WIA state funds, and two states could not provide this information. States reported less reliance on other programs to fund the one-stop infrastructure in 2005 than in the past (see table 2). For example, the number of states that reported using TANF funds at all to cover infrastructure costs declined from 36 to 27.
WIA provides the flexibility to states and local areas to develop approaches for serving job seekers and employers that best meet local needs. In our work we have found some broad trends in services, but there continues to be wide variation across the country in the mix of services and how they are provided. Local areas use a substantial portion of their WIA funds to provide training to adults and dislocated workers, but use even more to provide the services that go beyond training, including case management, assessment, and supportive services. However, serving youth, particularly out-of-school youth, has proven challenging. WIA increased the focus on the employer as customer, and we found that most medium and large employers are aware of and use the one-stop system. However, employers look to the one-stop system mostly to help fill their needs for low-skilled workers, in part because they assume that most workers available through the system are low-skilled. Services to adults and dislocated workers involve more than training. Despite early concerns about the extent of training, we found that substantial WIA funds were being used to fund training. Local boards used about 40 percent of the approximately $2.4 billion in WIA funds they had available in program year 2003 to provide training services to an estimated 416,000 WIA participants, primarily in occupational skills. However, the vast majority of job seekers receive self-assisted core services, not training. Not everyone needs or wants additional training. And even when they do, job seekers need help deciding what type of training would best match their skill level while also meeting local labor market needs—help that includes information on job openings, comprehensive assessments, individual counseling, and supportive services, such as transportation and child care.
Of the funds available in program year 2003, 60 percent was used to pay for these other program costs, as well as to cover the cost of administering the program. Providing services to youth has been challenging for local areas. Local areas often focus their WIA youth resources on serving in-school youth, often using a range of approaches to prevent academic failure and school dropouts. Out-of-school youth are viewed as difficult to serve, in part because they are difficult to locate in the community and they face particularly difficult barriers to employment and education, including low levels of academic attainment, limited work experience, and a scarcity of jobs in the community. The 5-year Youth Opportunity Grants program, authorized under WIA, was designed, in part, to enhance the local infrastructure of youth services, particularly in high-poverty areas. Grantees offered participants a range of youth services—education, occupational skills training, leadership development, and support services. They set up centers that varied widely. To reach the hard-to-serve target population, grantees used a variety of recruiting techniques, ranging from the conventional to the innovative. For example, some grantees conducted community walking campaigns using staff to saturate shopping malls and other areas where youth congregate. Conditions in the communities, such as violence and lack of jobs, presented a challenge to most grantees, but they took advantage of the local discretion built into the program to develop strategies to address them. Grantees and others reported that the participants and their communities made progress toward the education and employment goals of the program. However, a formal assessment of the program's impact, while under way, has not yet been released by Labor. Although Labor originally planned to continue to add grantees, funding for the program was eliminated in the budget for fiscal year 2004.
Employers mostly use one-stop centers to fill their needs for low-skilled workers. Most medium and large employers are aware of and use the system and are satisfied with its services (see fig. 2). Regardless of size, just over 70 percent of employers responding to our 2006 survey reported that they hired a small percentage of their employees—about 9 percent—through one-stops. Two-thirds of those they hired were low-skilled workers, in part because employers thought the labor available from the one-stops was mostly low-skilled. Employers told us they would hire more job seekers from the one-stop labor pools if the job seekers had the skills for which they were looking. Most employers used the centers' job posting service; fewer made use of the one-stops' physical space or job applicant screening services. Still, when employers did take advantage of services, they generally reported that they were satisfied with the services and found them useful because they produced positive results and saved them time and money. When employers did not use a particular one-stop service, in most cases they said they either were not aware that the one-stop provided the service, obtained it elsewhere, or handled the task on their own. Despite the successes state and local officials have had since WIA's implementation, some aspects of the law and other factors have hampered their efforts. First, funding issues continue to stymie the system. For example, the formulas in WIA that are used to allocate funds to states do not reflect current program design and have caused wide fluctuations in funding levels from year to year. In addition, Labor's focus on expenditures without including obligations overestimates the amount of funds available to provide services at the local level. Second, the performance measurement system is flawed and little is known about what WIA has achieved.
Labor has taken some steps to improve guidance and communication, but does not involve key stakeholders in the development of some major initiatives and provides too little time for states and local areas to implement them. As states and localities have implemented WIA, they have been hampered by funding issues, including statutory funding formulas that are flawed. As a result, states’ funding levels may not always be consistent with the actual demand for services. In previous work, we identified several issues associated with the current funding formulas. First, formula factors used to allocate funds are not aligned with the target populations for these programs. Second, allocations may not reflect current labor market conditions because there are time lags between when the data are collected and when the allocations become available to states. Third, the formula for the Dislocated Worker program is especially problematic, because it causes funding levels to suffer from excessive and unwarranted volatility unrelated to a state’s actual layoff activity. Several aspects of the Dislocated Worker formula contribute to funding volatility and to the seeming lack of consistency between dislocation and funding. The excess unemployment factor has a threshold effect—states may or may not qualify for the one-third of funds allocated under this factor in a given year, based on whether or not they meet the threshold condition of having at least 4.5 percent unemployment statewide. In a study we conducted in 2003, we compared dislocation activity and funding levels for several states. In one example, funding decreased in one year while dislocation activity increased by over 40 percent (see fig. 3). This volatility could be mitigated by provisions such as “hold harmless” and “stop gain” constraints that limit changes in funding to within a particular range of each state’s prior year allocation. 
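The mechanics of hold-harmless and stop-gain constraints can be sketched in a few lines of Python. This is a hypothetical illustration with made-up dollar figures, not the statutory formula; the actual formula must also rescale the bounded amounts so that state allocations sum to the national appropriation, which this sketch ignores.

```python
def constrained_allocation(formula_amount, prior_year_amount,
                           hold_harmless=0.90, stop_gain=1.30):
    """Clamp a state's formula-driven allocation to a band around its
    prior-year allocation (the fraction defaults are illustrative)."""
    floor = hold_harmless * prior_year_amount    # hold-harmless floor
    ceiling = stop_gain * prior_year_amount      # stop-gain ceiling
    return max(floor, min(ceiling, formula_amount))

# For a state that received $100 million last year (amounts in millions):
print(constrained_allocation(60.0, 100.0))   # a sharp formula drop is cushioned by the floor
print(constrained_allocation(150.0, 100.0))  # a sharp rise is capped by the ceiling
print(constrained_allocation(110.0, 100.0))  # in-band amounts pass through unchanged
```

Without the clamp, a state's allocation would track the volatile formula factors directly; with it, year-to-year swings are confined to a band around the prior-year allocation.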
The Adult formula includes such constraints, setting the hold harmless at 90 percent and the stop gain at 130 percent. In addition to issues related to funding allocation, the process used to determine states' available funds considers only expenditures and does not take into account the role of obligations in the current program structure. Our analysis of Labor's data from program year 2003 and beyond indicates that states are spending their WIA funds within the authorized 3-year period. Nationwide, states spent over 66 percent of their program year 2003 WIA funds in the first year—an increase from the 55 percent we reported in 2002. In fact, almost all program funds allocated in program year 2003 were spent by states within 2 years. By contrast, Labor's estimate of expenditure rates suggests that states are not spending their funds as quickly because the estimate is based on all funds states currently have available—from older funds carried in from prior program years to those only recently distributed. Moreover, many of the remaining funds carried over may have already been obligated—or committed through contracts for goods and services for which a payment has not yet been made. When we examined recent national data on the amount of WIA funds states are carrying in from previous program years, we found that, overall, the amount of carryover funds is decreasing—from $1.4 billion into program year 2003 to $1.1 billion into program year 2005. One explanation for the decline may be that obligations are being converted to expenditures. In our 2002 report, we also noted that Labor's data lacked consistent information on obligations because states were not all using the same definition for obligations in what they reported to Labor. Labor's guidance was unclear and did not specify whether obligations made at the local level—the point at which services are delivered—should be included.
We recommended that Labor clarify the guidance to standardize the reporting of obligations and use this guidance when estimating states’ available funds. Labor issued revised guidance in 2002, but continues to rely on expenditure data in establishing its estimates. In so doing, it overestimates the funds states have available to spend and ignores the role of obligations in the current workforce investment system. Labor’s Office of the Inspector General (OIG) recently concurred, noting that obligations provide a more useful measure for assessing states’ WIA funding status if obligations accurately reflect legally committed funds and are consistently reported. We have little information at a national level about what the workforce investment system under WIA achieves. Outcome data do not provide a complete picture of WIA services. The data reflect only a small portion of those who receive WIA services and contain no information on services to employers. Furthermore, WIA performance data are not comparable across states and localities, in part because of inconsistent policies in tracking participants for outcomes. In addition, the use of wage records to calculate outcomes is no longer consistent across states. Labor and states have made progress in measuring WIA performance in a number of areas, including Labor’s data validation initiative and the move to common measures. Labor’s proposed integrated data system holds promise in improving data reporting, but it is unclear whether it will be implemented as currently proposed. Furthermore, Labor has not yet conducted an impact evaluation, as required by WIA. WIA performance data do not include information on all customers receiving services. Currently Labor has only limited information on certain job seekers—those who use only self-services—and on employers. WIA excludes job seekers who receive core services that are self-service or informational in nature from being included in the performance information. 
Thus, only a small proportion of the job seeker population who receive services at one-stops are actually reflected in WIA outcome data, making it difficult to know what the overall program is achieving. Customers who use self-services are estimated to be the largest portion of those served under WIA. In a 2004 study, we reported that some estimates show only about 5.5 percent of the individuals who walked into a one-stop were actually registered for WIA and tracked for outcomes. Furthermore, Labor has limited information about employer involvement in the one-stop system. Although Labor measures employers’ satisfaction, this measure does not provide information on how employers use the system. Labor officials told us that they do not rely on this information for any purpose, and the information is too general for states and local areas to use. WIA performance data are not comparable across states and localities. Because not all job seekers are included in WIA’s outcome measures, states and local areas must decide when to begin tracking participants for outcomes—a decision that has led to outcome data that are not comparable across states and local areas. The guidance available to states at the time WIA was first implemented was open to interpretation in some key areas. For example, the guidance told states to register and track for outcomes all adults and dislocated workers who receive core services that require significant staff assistance, but states could decide what constituted significant staff assistance. As a result, states and local areas have differed on whom they track and for how long—sometimes beginning the process when participants receive core services, and at other times not until they receive more intensive services. We have recommended that Labor determine a standard point of registration and monitor states to ensure they comply. Labor has taken some actions, but registration remains an issue. 
Furthermore, data are not comparable because the availability of wage records to calculate outcomes is no longer consistent across states. UI wage records—the primary data source for tracking WIA performance— provide a fairly consistent national view of WIA performance. At the same time, UI wage records cannot be readily used to track job seekers who get jobs in other states unless states share data. The Wage Record Interchange System (WRIS) was developed to allow states to share UI wage records and account for job seekers who participate in one state’s employment programs but get jobs in another state. In recent years, all states but one participated in WRIS while it was operated by the nonprofit National Association of State Workforce Agencies. However, in July 2006, Labor assumed responsibility for administering WRIS, and many states have withdrawn, in part because of a perceived conflict of interest between Labor’s role in enforcing federal law and the states’ role in protecting the confidentiality of their data. As of March 2007, only 30 states were participating in the program, and it is unknown if and when the other states will enter the data-sharing agreement. As a result, performance information in almost half the states may not include employment outcomes for job seekers who found jobs outside the states in which they received services. Labor has taken steps to address issues related to the quality of WIA performance data, but further action is needed. Both Labor’s OIG and our early studies of WIA raised issues on the quality of the performance data, and Labor has taken steps aimed at addressing these issues. In October 2004, Labor began requiring states to implement new data validation procedures for WIA performance data. 
This process requires states to conduct two types of validation: (1) data element validation—reviewing samples of WIA participant files, and (2) report validation—assessing whether states’ software accurately calculated performance outcomes. While it is too soon to fully assess whether Labor’s efforts have improved data quality, officials in most states have reported that Labor’s new requirements have helped increase awareness of data accuracy and reliability at both the state and local levels. In addition, in 2005, in response to an Office of Management and Budget (OMB) initiative, Labor began requiring states to implement a common set of performance measures for its employment and training programs, including WIA. These measures include an entered employment rate, an employment retention rate, and an average earnings measure. Moving to the common measures has increased the comparability of outcome information across programs and made it easier for states and local areas to collect and report performance information across the full range of programs that provide services in the one-stop system. In addition, as part of the implementation of the common measures, states are for the first time required to collect and report a count of all WIA participants who use one-stop centers. This may help provide a more complete picture of the one-stop system. The shift to common measures could also affect services to some groups of job seekers. Historically, certain WIA performance measures—primarily the earnings measure—have driven localities to serve only those customers who will help meet performance levels. For example, program providers have reported that the earnings measure provides a disincentive to enroll older workers in the program because of employment characteristics that may negatively affect program performance. 
In several local areas we visited for our study of older worker services, officials said they considered performance measures a barrier to enrolling older workers seeking part-time jobs because they would have lower earnings and therefore reduce measured program performance. Labor's shift from earnings gain to average earnings under the common measures may help reduce the extent to which the measures are a disincentive to serve certain populations. It remains unclear, however, how the new measure will affect the delivery of services to some groups, such as older workers, who are more likely to work part-time and have lower overall wages. Further action may be needed to help reduce the incentive to serve only those who will help meet performance levels. One approach that could help would be to systematically adjust expected performance levels to account for different populations and local economic conditions when negotiating performance. We have made such a recommendation to Labor, but little action has been taken. Since 2004, Labor has been planning to implement the Workforce Investment Streamlined Performance Reporting System (WISPR), an integrated data-reporting system that could greatly enhance the understanding of job seeker services and outcomes. WISPR represents a promising step forward in integrating and expanding program reporting, but it is unclear whether implementation will occur as proposed. If implemented, the system would integrate data reporting by using standardized reporting requirements across the Employment Service, WIA, veterans' state grant, and Trade Adjustment Assistance programs, and ultimately replace their existing reporting systems with a single reporting structure. Its integrated design would, for the first time, allow Labor and states to track an individual's progress through the one-stop system.
In addition, the system would expand data collection and reporting in two key areas: the services provided to employers and estimates of the number of people who access the one-stop system but ultimately receive limited or no services from one-stop staff. On the basis of our preliminary review, WISPR appears to address many of the issues we have raised regarding the system’s current performance data. However, concerns have been raised about challenges in implementing the new system, and at present, the timeline for WISPR’s implementation remains unclear. Given the rapidly approaching July 1, 2007, implementation date, it appears likely that implementation will be delayed. No information exists on what works and for whom. Although Labor has improved its outcome data on job seekers who participate in its programs, these data alone cannot measure whether outcomes are a direct result of program participation, rather than external factors. For example, local labor market conditions may affect an individual’s ability to find a job as much as or more than participation in an employment and training program. To measure the effects of a program, it is necessary to conduct an impact evaluation that would seek to assess whether the program itself led to participant outcomes. Since the full implementation of WIA in 2000—in which the one-stop system became the required means to provide most employment and training services—Labor has not made evaluating the impact of those services a research priority. While WIA required such an evaluation by 2005, Labor has declined to fund one in prior budgets. In 2004, we recommended that Labor comply with the requirements of WIA and conduct an impact evaluation of WIA services to better understand what services are most effective for improving outcomes. In response, Labor cited the need for program stability and proposed delaying an impact evaluation of WIA until after reauthorization.
In its 2008 budget proposal, Labor identified an assessment of WIA’s impact on employment, retention, and earnings outcomes for participants as an effort the agency would begin. As of May 2007, according to Labor officials, the agency had not yet begun to design the study. Labor has implemented some initiatives, such as national performance and reporting summits, to better communicate with states on changes in processes and procedures. However, guidance on policy changes has often come too late for states to be able to implement them. For example, in implementing common measures, states had very little time to make the necessary changes before they had to begin data collection and reporting using the new requirements. While Labor publicized its plans to adopt the common measures, states were notified only in late February 2005 that Labor planned to implement changes on July 1, 2005, and final guidance was not issued until April 15, 2005. This gave states 3 months or less to interpret federal guidance, coordinate with partners, modify information technology systems, issue new guidance, and train local area staff. In our 2005 report, we commented that rushed implementation could negatively affect data quality and compromise the potential benefits of the proposed changes. In addition to underestimating the cost, time, and effort required of states to make such changes, Labor has failed to solicit adequate stakeholder input when introducing some major new initiatives. For example, Labor’s efforts to implement an integrated reporting system have been hampered by a lack of stakeholder input. In 2004, Labor first proposed a single, streamlined reporting system, known as the ETA Management Information and Longitudinal Evaluation system (EMILE) that would have replaced reporting systems for several Labor programs. While many states supported streamlined reporting, 36 states indicated that implementing the EMILE system, as proposed, would be very burdensome. 
Labor developed the system with only limited consultation with key stakeholders, including state officials, and as a result underestimated the magnitude and type of changes EMILE would require and the resources states would need in order to implement it. In response, Labor substantially modified this system’s design. The modified system, now called WISPR, was set to be implemented on July 1, 2007. As with EMILE, however, concerns have been raised about challenges in implementing the new system, particularly the early implementation date. Some comments to OMB expressed the view that Labor had again underestimated the time states would need to revise policy, reprogram systems, and retrain staff. Given the rapidly approaching deadline and questions about states’ readiness to implement this system, this important initiative will likely be delayed again. In 2005, we recommended that Labor consider alternative approaches that involve ongoing consultation with key stakeholders as the agency seeks to implement its new initiatives. In the 7 years since most states fully implemented WIA, much progress has been made in developing and implementing a universal system. With notable exceptions, services for partner programs are becoming increasingly available through the one-stop system. States and local areas have used the flexibility under WIA to tailor services to local conditions and the populations they serve. As the Congress moves toward reauthorizing WIA, consideration should be given to maintaining that state and local flexibility, which fosters innovation and system ownership. However, some aspects of WIA could be improved through legislative action. Our findings highlight two key areas: Improving the data on people who use the system: Requiring all job seekers who receive WIA-funded services to be included in the performance management system would improve understanding of who gets served and eliminate the ambiguity about who should be tracked and for how long.
Improving funding stability: If Congress chooses not to make broader funding formula changes, reducing the volatility in the Dislocated Worker allocation by requiring the use of hold harmless and stop gain provisions in the formula would help stabilize funding and better foster sound financial practices. Furthermore, we have made a number of recommendations to Labor to improve aspects of the current program. While Labor has implemented many of them, several key concerns remain unaddressed. Labor has not taken steps to more accurately estimate states’ available funds by considering obligations as well as expenditures; establish suitable performance levels for states to achieve by developing and implementing a systematic approach for adjusting expected performance to account for different populations and local economic conditions; maximize the likelihood that new initiatives will be adopted in an achievable time frame by using a collaborative approach that engages all key stakeholders; or improve policymakers’ understanding of what employment and training programs achieve by conducting important program evaluations, including an impact study on WIA, and releasing those findings in a timely way. In the absence of action by Labor on these issues, the Congress may wish to address them legislatively. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other members of the committee may have at this time. For information regarding this testimony, please contact Sigurd R. Nilsen, Director, Education, Workforce, and Income Security Issues, at (202) 512-7215. Individuals who made key contributions to this testimony include Dianne Blank, Rebecca Woiwode, and Thomas McCabe. Veterans’ Employment and Training Service: Labor Could Improve Information on Reemployment Services, Outcomes, and Program Impact. GAO-07-594. Washington, D.C.: May 24, 2007.
Workforce Investment Act: Employers Found One-Stop Centers Useful in Hiring Low-Skilled Workers; Performance Information Could Help Gauge Employer Involvement. GAO-07-167. Washington, D.C.: December 22, 2006. National Emergency Grants: Labor Has Improved Its Grant Award Timeliness and Data Collection, but Further Steps Can Improve Process. GAO-06-870. Washington, D.C.: September 5, 2006. Trade Adjustment Assistance: Most Workers in Five Layoffs Received Services, but Better Outreach Needed on New Benefits. GAO-06-43. Washington, D.C.: January 31, 2006. Youth Opportunity Grants: Lessons Can Be Learned from Program, but Labor Needs to Make Data Available. GAO-06-53. Washington, D.C.: December 9, 2005. Workforce Investment Act: Labor and States Have Taken Actions to Improve Data Quality, but Additional Steps Are Needed. GAO-06-82. Washington, D.C.: November 14, 2005. Workforce Investment Act: Substantial Funds Are Used for Training, but Little Is Known about Training Outcomes. GAO-05-650. Washington, D.C.: June 29, 2005. Unemployment Insurance: Better Data Needed to Assess Reemployment Services to Claimants. GAO-05-413. Washington, D.C.: June 24, 2005. Workforce Investment Act: Labor Should Consider Alternative Approaches to Implement New Performance and Reporting Requirements. GAO-05-539. Washington, D.C.: May 27, 2005. Workforce Investment Act: Employers Are Aware of, Using, and Satisfied with One-Stop Services, but More Data Could Help Labor Better Address Employers’ Needs. GAO-05-259. Washington, D.C.: February 18, 2005. Workforce Investment Act: Labor Has Taken Several Actions to Facilitate Access to One-Stops for Persons with Disabilities, but These Efforts May Not Be Sufficient. GAO-05-54. Washington, D.C.: December 14, 2004. Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004. 
National Emergency Grants: Labor Is Instituting Changes to Improve Award Process, but Further Actions Are Required to Expedite Grant Awards and Improve Data. GAO-04-496. Washington, D.C.: April 16, 2004. Workforce Investment Act: Labor Actions Can Help States Improve Quality of Performance Outcome Data and Delivery of Youth Services. GAO-04-308. Washington, D.C.: February 23, 2004. Workforce Training: Almost Half of States Fund Worker Training and Employment through Employer Taxes and Most Coordinate with Federally Funded Programs. GAO-04-282. Washington, D.C.: February 13, 2004. Workforce Investment Act: Potential Effects of Alternative Formulas on State Allocations. GAO-03-1043. Washington, D.C.: August 28, 2003. Workforce Investment Act: Exemplary One-Stops Devised Strategies to Strengthen Services, but Challenges Remain for Reauthorization. GAO-03-884T. Washington, D.C.: June 18, 2003. Workforce Investment Act: One-Stop Centers Implemented Strategies to Strengthen Services and Partnerships, but More Research and Information Sharing Is Needed. GAO-03-725. Washington, D.C.: June 18, 2003. Workforce Investment Act: Issues Related to Allocation Formulas for Youth, Adults, and Dislocated Workers. GAO-03-636. Washington, D.C.: April 25, 2003. Workforce Training: Employed Worker Programs Focus on Business Needs, but Revised Performance Measures Could Improve Access for Some Workers. GAO-03-353. Washington, D.C.: February 14, 2003. Older Workers: Employment Assistance Focuses on Subsidized Jobs and Job Search, but Revised Performance Measures Could Improve Access to Other Services. GAO-03-350. Washington, D.C.: January 24, 2003. Workforce Investment Act: States’ Spending Is on Track, but Better Guidance Would Improve Financial Reporting. GAO-03-239. Washington, D.C.: November 22, 2002. Workforce Investment Act: States and Localities Increasingly Coordinate Services for TANF Clients, but Better Information Needed on Effective Approaches. GAO-02-696. 
Washington, D.C.: July 3, 2002. Workforce Investment Act: Youth Provisions Promote New Service Strategies, but Additional Guidance Would Enhance Program Development. GAO-02-413. Washington, D.C.: April 5, 2002. Workforce Investment Act: Better Guidance and Revised Funding Formula Would Enhance Dislocated Worker Program. GAO-02-274. Washington, D.C.: February 11, 2002. Workforce Investment Act: Improvements Needed in Performance Measures to Provide a More Accurate Picture of WIA’s Effectiveness. GAO-02-275. Washington, D.C.: February 1, 2002. Workforce Investment Act: Better Guidance Needed to Address Concerns over New Requirements. GAO-02-72. Washington, D.C.: October 4, 2001. Also testimony GAO-02-94T. Workforce Investment Act: Implementation Status and the Integration of TANF Services. GAO/T-HEHS-00-145. Washington, D.C.: June 29, 2000. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since the Workforce Investment Act's (WIA) enactment in 1998, GAO has issued numerous reports that included recommendations regarding many aspects of WIA, including performance measures and accountability, funding formulas and spending, one-stop centers, and training, as well as services provided to specific populations, such as dislocated workers, youth, and employers. Collectively, these studies employed an array of data collection techniques, including surveys of state and local workforce officials and private sector employers; site visits; interviews with local, state, and Labor officials; and analysis of Labor data and documents. This testimony draws upon the results of these reports, issued between 2000 and 2007, as well as GAO's ongoing work on one-stop infrastructure, and discusses issues raised and recommendations made. Specifically, the testimony addresses (1) progress made by federal, state, and local officials in implementing key provisions of WIA; and (2) challenges that remain in implementing an integrated employment and training system. Seven years after implementing the workforce investment system under WIA, the system's infrastructure continues to evolve. Nationwide, the number of comprehensive one-stop centers has decreased somewhat, but not uniformly across states. States generally reported increased availability of services for some of the mandatory programs at comprehensive one-stop centers. However, despite WIA's requirement that all mandatory partners provide services through the one-stop system, some states have maintained a completely separate system for delivering services for Wagner-Peyser-funded Employment Services. Adults and dislocated workers receive a wide range of services through the one-stop system. Local areas used about 40 percent of their WIA funds in 2003 to provide training services to an estimated 416,000 participants, but the vast majority of job seekers receive services other than training.
States and local areas have generally focused their youth services on in-school youth and have found it difficult to recruit and retain out-of-school youth. Most medium and large employers are aware of and use the system and are quite satisfied with its services, but they generally use one-stop centers to fill their needs for low-skilled workers. Despite the successes state and local officials have had since WIA's implementation, some aspects of the law and other factors have hampered their efforts. Funding issues continue to hamper the system. WIA's formulas that are used to allocate funds to states do not reflect current program design and have caused wide fluctuations in funding levels from year to year that do not reflect actual layoff activity. In addition, Labor's focus on expenditures without including obligations overestimates the amount of funds available to provide services at the local level. Moreover, little is known about what the system is achieving because only a small minority of participants are captured in the performance measures, and Labor has not conducted an impact study to assess the effectiveness of the one-stop system, as required under WIA. Labor has taken some steps to improve guidance and communication, but does not involve key stakeholders in the development of some major initiatives and provides too little time for states and local areas to implement them. We are suggesting that Congress consider taking steps to improve the stability of the funding and enhance the data available on people who use the system. In addition, in our past work, we have recommended that Labor use obligations when estimating states' available funds, that it comply with the requirements of WIA and conduct an impact evaluation, and that it consider alternative approaches in implementing new initiatives that involve ongoing consultation with key stakeholders. Labor has taken little action on these recommendations.
Federal operations and facilities have been disrupted by a range of events, including the terrorist attacks on September 11, 2001; the Oklahoma City bombing; localized shutdowns due to severe weather conditions, such as the closure of federal offices in Denver for 3 days in March 2003 due to snow; and building-level events, such as asbestos contamination at the Department of the Interior’s headquarters. Such disruptions, particularly if prolonged, can lead to interruptions in essential government services. Prudent management, therefore, requires that federal agencies develop plans for dealing with emergency situations, including maintaining services, ensuring proper authority for government actions, and protecting vital assets. Until relatively recently, continuity planning was generally the responsibility of individual agencies. In October 1998, PDD 67 identified FEMA—which is responsible for responding to, planning for, recovering from, and mitigating against disasters—as the executive agent for federal COOP planning across the federal executive branch. FEMA was an independent agency until March 2003, when it became part of the Department of Homeland Security, reporting to the Under Secretary for Emergency Preparedness and Response. PDD 67 is a Top Secret document controlled by the National Security Council. FPC 65 states that PDD 67 made FEMA, as executive agent for COOP, responsible for formulating guidance for agencies to use in developing viable plans; coordinating interagency exercises and facilitating interagency coordination, as appropriate; and overseeing and assessing the status of COOP capabilities across the executive branch. According to FEMA officials, PDD 67 also required that agencies have COOP plans in place by October 1999. In July 1999, FEMA issued FPC 65 to assist agencies in meeting the October 1999 deadline. 
FPC 65 states that COOP planning should address any emergency or situation that could disrupt normal operations, including localized emergencies. FPC 65 also established that COOP planning is based first on the identification of essential functions—that is, those functions that enable agencies to provide vital services, exercise civil authority, maintain safety, and sustain the economy during an emergency. FPC 65 gives no criteria for identifying essential functions beyond this definition; a logical starting point for this process, however, would be to consider programs that had been previously identified as important. For example, in March 1999, as part of the efforts to address the Y2K computer problem, the Director of OMB identified 42 programs with a high impact on the public. Of these 42 programs, 38 were the responsibility of the 23 major departments and agencies that we reviewed. (App. III provides a list of these 38 high-impact programs and the component agencies that are responsible for them.) Of these 23 major departments and agencies, 16 were responsible for at least one high-impact program; several were responsible for more than one. Programs that were identified included weather service, disease monitoring and warnings, public housing, air traffic control, food stamps, and Social Security benefits. These programs, as well as the others listed in appendix III, continue to perform important functions for the public. The Y2K planning to support these high-impact programs included COOP planning and specifically addressed interdependencies.
Planning included identifying partners integral to program delivery, testing data exchanges across partners, developing complementary business continuity and contingency plans, sharing key information on readiness with other partners and the public, and taking other steps to ensure that the agency’s high-impact program would work in the event of an emergency. Although the identification of essential functions was established as the first step in COOP planning, FPC 65 also identified seven other planning topics that make up a viable COOP capability. The guidance provided a general definition of each of the eight topics and identified several actions that should be completed to address each topic. Table 1 lists the eight topic areas covered in FPC 65 and provides an example of an action under each. The identification of essential functions is a prerequisite for COOP planning because it establishes the planning parameters that drive the agency’s efforts in all other planning topics. For example, FPC 65 directs agencies to identify alternative facilities, staff, and resources necessary to support continuation of their essential functions. The effectiveness of the plan as a whole and the implementation of all other elements depend on the performance of this step. Of the 34 agency COOP plans we reviewed, 29 plans included at least one function that was identified as essential. These agency-identified essential functions varied in number and scope. The number of functions identified in each plan ranged from 3 to 399. In addition, the apparent importance of the functions was not consistent.
For example, a number of essential functions were of clear importance, such as “ensuring uninterrupted command, control, and leadership of the agency”; “protecting critical facilities, systems, equipment and records”; and “continuing to pay the government’s obligations.” Other identified functions appeared vague or of questionable importance: “provide speeches and articles for the Secretary and Deputy Secretary”; “schedule all activities of the Secretary”; and “review fiscal and programmatic integrity and efficiency of Departmental activities.” In contrast to the examples just given, agencies did not list among their essential functions 20 of the 38 “high-impact” programs identified during the Y2K effort at the agencies we reviewed. Another important consideration in identifying essential functions is the assessment of interdependencies among functions and organizations. As we have previously reported, many agency functions rely on the availability of resources or functions controlled by another organization, including other agencies, state and local governments, and private entities. (For example, the Department of the Treasury’s Financial Management Service receives and makes payments for most federal agencies.) The identification of such interdependencies continues to be essential to the related areas of information security and critical infrastructure protection. Although FPC 65 does not use the term “interdependencies,” it directs agencies to “integrate supporting activities to ensure that essential functions can be performed.” Of the 34 plans we reviewed, 19 showed no evidence of an effort to identify interdependencies and link them to essential functions, which is a prerequisite to developing plans and procedures to support these functions and all other elements of COOP planning.
Nine plans identified some key partners, but appeared to have excluded others: for instance, six agencies either make or collect payments, but did not mention the role of the Treasury Department in their COOP plans. The high level of generality in FEMA’s guidance on essential functions contributed to the inconsistencies in agencies’ identification of these functions. In its initial guidance, FPC 65, FEMA provided minimal criteria for agencies to make these identifications, giving a brief definition only. According to FEMA officials, the agency is currently developing revised COOP guidance that will provide more specific direction on identifying essential functions. According to these officials, FEMA expects to release the revised guidance in March 2004. Further, although FEMA conducted several assessments of agency COOP planning between 1995 and 2001, none of these addressed the identification of essential functions. In addition, FEMA has begun development of a system to collect data from agencies on the readiness of their COOP plans, but FEMA officials told us that they will not use the system to validate the essential functions identified by each agency or their interdependencies. According to FEMA officials, the agencies are better able to make those determinations. However, especially in view of the wide variance in number and importance of functions identified, as well as omissions of high-impact programs, the lack of FEMA review lowers the level of assurance that the essential functions that have been identified are appropriate. Additionally, in its oversight role, FEMA had the opportunity to help agencies refine their essential functions through an interagency COOP test or exercise. According to FPC 65, FEMA is responsible for coordinating such exercises. FEMA is developing a test and training program for COOP activities, but it has not yet conducted an interagency exercise to test the feasibility of these planned activities. 
FEMA had planned a governmentwide exercise in 2002, but the exercise was cancelled after the September 11 attacks. FEMA is currently preparing to conduct a governmentwide exercise in mid-May 2004. Improper identification of essential functions can have a negative impact on the entire COOP plan, because other aspects of the COOP plan are designed around supporting these functions. If an agency fails to identify a function as essential, it will not make the necessary arrangements to perform that function. If it identifies too many functions as essential, it risks being unable to adequately address all of them. In either case, the agency increases the risk that it will not be able to perform its essential functions in an emergency. As of October 1, 2002, almost 3 years after the planning deadline established by PDD 67, 3 of the agencies we reviewed had not developed and documented a COOP plan. The remaining 20 major federal civilian agencies had COOP plans in place, and the 15 components that we reviewed also had plans. (App. IV identifies the 15 components and the high-impact programs for which they are responsible.) However, none of these plans addressed all the guidance in FPC 65. Of the eight topic areas identified in FPC 65, these 34 COOP plans generally complied with the guidance in one area (developing plans and procedures); generally did not comply in one area (tests, training, and exercises); and showed mixed compliance in the other six areas. The following sections present the results of our analysis for each of the eight planning topics outlined in FPC 65. In analyzing each plan, we looked for the answers to a series of questions regarding each planning topic. We present the compiled results for each topic in the form of a table showing the answers to these questions. Appendix I provides more detail on our analysis and methods. 
Although most agency plans identified at least one essential function, less than half the COOP plans fully addressed other FPC 65 guidance related to essential functions, such as prioritizing the functions or identifying interdependencies among them (see table 2). If agencies do not prioritize their essential functions and identify the resources that are necessary to accomplish them, their COOP plans will not be effective, since the other seven topics of the COOP plan are designed around supporting these functions. FPC 65 calls for COOP plans to be developed and documented that provide for the performance of essential functions under all circumstances. Most agency COOP documents included the basic information outlined in FPC 65 (see table 3). However, in those cases where plans and procedures are not adequately documented, agency personnel may not know what to do in an emergency. Orders of succession ensure continuity by identifying individuals who are authorized to act for agency officials in case those officials are unavailable. Although most agency COOP documents adequately described the order of succession to the agency head and described orders of succession by position or title, fewer addressed other succession planning procedures outlined in FPC 65 (see table 4). If orders of succession are not clearly established, agency personnel may not know who has authority and responsibility if agency leadership is incapacitated in an emergency. To provide for rapid response to emergencies, FPC 65 calls for agencies to delegate authorities in advance for making policy determinations at all levels. Generally, these delegations define what actions those individuals identified in the orders of succession can take in emergencies. Few agency COOP documents adequately described the agency’s delegations of authority (see table 5). If delegations of authority are not clearly established, agency personnel may not know who has authority to make key decisions in an emergency. 
Alternate facilities provide a physical location from which to conduct essential functions if the agency’s existing facilities are unavailable. Most agency COOP plans document the acquisition of at least one alternate facility for use in emergencies, but few of those plans demonstrate that the facilities are capable of meeting the agencies’ emergency operating requirements (see table 6). If alternate facilities are not provided or are inadequate, agency operations may not be able to continue in an emergency. The success of agency operations at an alternate facility depends on available and redundant communications with internal organizations, other agencies, critical customers, and the public. Most COOP documents identified some redundant emergency communications capabilities, but few included contact information that would be necessary to use those capabilities in an emergency (see table 7). If communications fail in an emergency, essential agency operations may not be possible. FPC 65 states that agency personnel must have access to and be able to use the electronic and hard-copy records and information systems that are needed to perform their essential functions. About 24 percent of the COOP plans fully identified agencies’ vital paper and electronic records, while fewer documented the procedures for protecting or updating them (see table 8). If agency personnel cannot access and use up-to-date vital records, they may be unable to carry out essential functions. Tests, training, and exercises of COOP capabilities are essential to demonstrate and improve agencies’ abilities to execute their plans. Few agencies have documented that they have conducted tests, training, and exercises at the recommended frequency (see table 9). If emergency procedures are not tested and staff is not trained in their use, planned responses to an emergency may not be adequate to continue essential functions. 
The lack of compliance shown by many COOP plans can be largely attributed to FEMA’s limited guidance and oversight of executive branch COOP planning. First, FEMA has issued little guidance to assist agencies in developing plans that address the goals of FPC 65. Following FPC 65, FEMA issued more detailed guidance in April 2001 on two of FPC 65’s eight topic areas: FPC 66 provides guidance on developing viable test, training, and exercise programs, and FPC 67 provides guidance for acquiring alternate facilities. However, FEMA did not produce any detailed guidance on the other six topic areas. In October 2003, FEMA began working with several members of the interagency COOP working group to revise FPC 65. FEMA officials expect this revised guidance, which was still under development as of January 2004, to incorporate the guidance from the previous FPCs and to address more specifically what agencies need to do to comply with the guidance. Second, as part of FEMA’s oversight responsibilities, its Office of National Security Coordination is tasked with conducting comprehensive assessments of the federal executive branch COOP programs. With the assistance of contractors, the office has performed assessments, on an irregular schedule, of federal agencies’ emergency planning capabilities: In 1995, FEMA performed a survey of agency officials (this assessment predated FPC 65). In 1999, FEMA assessed compliance with the elements of FPC 65 through a self-reported survey of agency COOP officials, supplemented by interviews. In 2001, FEMA surveyed agency officials to ask, among other things, about actions that agencies took on and immediately after September 11, 2001. Of these three assessments, only the 1999 assessment evaluated compliance with the elements of FPC 65. Following this assessment, FEMA gave agencies feedback on ways to improve their respective COOP plans, and it made general recommendations, not specific to individual agencies, that addressed programwide problems. 
However, FEMA did not then follow up to determine whether individual agencies made improvements in response to its feedback and general recommendations. Besides inquiring about actions in response to the September 2001 attacks, the 2001 assessment was designed to provide an update on programwide problems that had been identified in the assessments of 1995 and 1999. It did not address whether individual agency COOP plans had been revised to correct previously identified deficiencies, nor did FEMA provide specific feedback to individual agencies. According to FEMA officials, the system it is developing to collect agency-reported data on COOP plan readiness will improve FEMA’s oversight. The system is based on a database of information provided by agencies for the purpose of determining if they are prepared to exercise their COOP plans, in part by assessing compliance with FPC 65. However, according to FEMA officials, while they recognize the need for some type of verification, FEMA has not yet determined a method of verifying these data. Without regular assessments of COOP plans that evaluate individual plans for adequacy, FEMA will not be able to provide information to help agencies improve their COOP plans. Further, if FEMA does not verify the data provided by the agencies or follow up to determine whether agencies have improved their plans in response to such assessments, it will have no assurance that agencies’ emergency procedures are appropriate. FEMA officials attributed the limited level of oversight that we found to two factors. First, they stated that before its transition to the Department of Homeland Security, the agency did not have the legal or budgetary authority to conduct more active oversight of the COOP activities of other agencies. However, FPC 65 states that PDD 67 made the agency responsible for guidance, coordination, and oversight in this area, in addition to requiring agencies to develop COOP plans.
Accordingly, although it cannot determine how agencies budget resources for such planning, it does have the authority to oversee this planning. Second, according to these officials, until last year, the agency devoted roughly 13 staff to COOP guidance, coordination, and oversight, as well as the development of FEMA’s own COOP plan. According to the official responsible for COOP oversight, the agency now has 42 positions authorized for COOP activities, 31 of which were filled as of December 31, 2003. The agency expects to fill another 4 positions in fiscal year 2004. While most of the federal agencies we reviewed had developed COOP plans, three agencies did not have documented plans as of October 2002. Those plans that were in place exhibited weaknesses in the form of widely varying determinations about what functions are essential and inconsistent compliance with guidance that defines a viable COOP capability. The weaknesses that we identified could cause the agencies to experience difficulties in delivering key services to citizens in the aftermath of an emergency. A significant factor contributing to this condition is FEMA’s limited efforts to fulfill its responsibilities first by providing guidance to help agencies develop effective plans and then by assessing those plans. Further, FEMA has done very little to help agencies identify those functions that are truly essential or to identify and plan for interdependencies among agency functions. FEMA has begun taking steps to improve its oversight, by developing more specific guidance and a system to track agency-provided COOP readiness information, and it is planning a governmentwide exercise. 
However, although the proposed guidance and exercise may help agencies improve their plans, the system that FEMA is developing to collect data on COOP readiness is weakened by a lack of planning to verify agency-submitted data, validate agency-identified essential functions, or identify interdependencies with other activities. Without this level of active oversight, continuity planning efforts will continue to fall short, increasing the risk that the public will not be able to rely upon the continued delivery of essential government programs and services following an emergency. We are making three recommendations to enhance the ability of the executive branch to continue to provide essential services during emergencies. To ensure that agencies can continue operations in emergencies and are prepared for the governmentwide exercise planned for May 2004, we recommend that the Secretary of Homeland Security direct the Under Secretary for Emergency Preparedness and Response to take steps to ensure that agencies that do not have COOP plans develop them by May 1, 2004. We further recommend that the Secretary direct the Under Secretary to take steps to improve the oversight of COOP planning by ensuring that agencies correct the deficiencies in individual COOP plans identified here, as well as those identified in previous assessments, and by conducting assessments of agency continuity plans that include independent verification of agency-provided information, as well as an assessment of the essential functions identified and their interdependencies with other activities. In written comments on a draft of this report, which are reprinted in appendix V, the Under Secretary for Emergency Preparedness and Response agreed that better COOP planning is needed to ensure delivery of essential services, and that FEMA could do more to improve COOP planning.
He added that the agency has begun to correct the identified deficiencies and stated that the federal government is currently poised to provide services in an emergency. The Under Secretary’s commitment to improve FEMA’s oversight of COOP planning can be instrumental in ensuring that agencies prepare adequate plans. Specifically, once FEMA ensures that each agency has a COOP plan, ensures that agencies correct the identified deficiencies in existing plans, and conducts independent verification and assessments of those plans, it will be in a position to effectively demonstrate the readiness of federal agencies to respond to emergencies. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this report. At that time, we will send copies to the Chairmen and Ranking Minority Members of the Subcommittee on Homeland Security, House Committee on Appropriations; Subcommittee on National Security, Emerging Threats, and International Relations, House Committee on Government Reform; and the Subcommittee on Oversight of Government Management, the Federal Workforce, and the District of Columbia, Senate Committee on Governmental Affairs. We are also sending copies to the Secretary of Homeland Security. We will also make copies available on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you have any questions on matters contained in this report, please contact me at (202) 512-6240 or by e-mail at [email protected]. Other key contributors to this report were Barbara Collier, Mirko Dolak, Neela Lakhmani, Susan Sato, James R. Sweetman, Jr., Jessie Thomas, and Marcia Washington. To accomplish our objectives, we obtained and evaluated headquarters contingency plans that were in place as of October 1, 2002, from 20 of the 23 largest civilian departments and agencies (listed in app. II). 
We also obtained and evaluated 14 plans covering 15 components of civilian cabinet-level departments, selected because these components were responsible for a program previously deemed high impact by the Office of Management and Budget (OMB). (App. III lists these components and the high-impact programs.) We also interviewed agency officials who were responsible for developing each of the 34 continuity of operations (COOP) plans (comprising the 20 plans for the largest civilian departments and agencies and the 14 plans covering components with high-impact programs); obtained and analyzed COOP guidance issued by the Federal Emergency Management Agency (FEMA) and documents describing its efforts to provide oversight and assessments of federal COOP planning efforts; and conducted interviews with FEMA officials to clarify the activities described in these documents. To assess the adequacy of agency-identified essential functions, we analyzed the COOP plans from agencies that were responsible for programs that OMB designated as having high impact to determine whether the plans described how those programs would continue to function during an emergency, and we assessed COOP documentation for evidence of agency efforts to identify interdependencies between their essential functions and functions or resources controlled by others. For example, for those agencies responsible for processing incoming or outgoing payments, we looked for evidence that the agency had identified services provided by the Department of the Treasury as necessary to the continuation of its functions. To assess how well agency plans followed Federal Preparedness Circular (FPC) 65, we analyzed the guidance and identified 34 yes/no questions, grouped by the eight topic areas identified in FPC 65. Each topic area included two to eight questions. 
On the basis of the agency contingency planning documents, we used content analysis to assign an answer of “yes” (compliant), “no” (not compliant), or “partially” to these 34 questions. Documents were reviewed and compared independently by several of our analysts. The analysts then met to compare their assessments and reach a consensus assessment. We shared these initial assessments with each agency during structured interviews, giving agency officials the opportunity to provide additional documentation to demonstrate compliance. Any supplemental information provided by the agencies was again reviewed by multiple analysts, first independently and then jointly. From this analysis, we created the summary tables that appear in this report (tables 2 to 9) to compare answers across agencies. We requested that the National Security Council provide a copy of Presidential Decision Directive (PDD) 67, which lays out the policy guidance for executive branch contingency planning and describes the authority granted to FEMA and other agencies. To date, we have not received a copy. Instead, we relied on the characterization of PDD 67 in FPC 65 and on statements from FEMA officials on the requirements within PDD 67. Without a copy of PDD 67, we were unable to verify the responsibilities or scope of authority of the various executive branch entities responsible for contingency planning. We conducted our review between April 2002 and January 2004, in accordance with generally accepted government auditing standards. The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. 
GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.
To ensure that essential government services are available in emergencies--such as terrorist attacks, severe weather, or building-level emergencies--federal agencies are required to develop continuity of operations (COOP) plans. Responsibility for formulating guidance on these plans and for assessing executive branch COOP capabilities lies with the Federal Emergency Management Agency (FEMA), under the Department of Homeland Security. FEMA guidance, Federal Preparedness Circular (FPC) 65 (July 1999), provides elements of a viable COOP capability, including the requirement that agencies identify their essential functions. GAO was asked to determine the extent to which (1) major civilian executive branch agencies have identified their essential functions and (2) these agencies' COOP plans follow FEMA guidance. From an assessment of 34 COOP plans against FEMA guidance, GAO found that most agencies' plans identified at least one function as essential. However, the functions identified in each plan varied widely in number--ranging from 3 to 399--and included functions that appeared to be of secondary importance, while at the same time omitting programs that had been previously defined as high-impact programs. (Examples of these high-impact programs are Medicare, food stamps, and border inspections.) For example, one department included "provide speeches and articles for the Secretary and Deputy Secretary" among its essential functions, but did not include 9 of 10 high-impact programs for which it is responsible. Several factors contributed to these shortcomings: FPC 65 did not provide specific criteria for identifying essential functions; FEMA did not review the essential functions identified when it assessed COOP planning; and it did not conduct tests or exercises to confirm that the essential functions were correctly identified.
Unless agencies' essential functions are correctly and completely identified, their COOP plans may not effectively ensure that the most vital government services can be maintained in an emergency. Although all but three of the agencies reviewed had developed and documented some of the elements of a viable COOP plan, none of the agencies could demonstrate that they were following all the guidance in FPC 65. There is a wide variation in the number of agencies that addressed various elements identified in the guidance. A contributing cause for the deficiencies in agency COOP plans is the level of FEMA oversight. In 1999, FEMA conducted an assessment of agency compliance with FPC 65, but it has not conducted oversight that is sufficiently regular and extensive to ensure that agencies correct the deficiencies identified. Because the resulting COOP plans do not include all the elements of a viable plan as defined by FPC 65, agency efforts to provide services during an emergency could be impaired.
In December 2014, we reported on the progress the departments that coordinate federal emergency support functions (ESF) have made in conducting a range of coordination, planning, and capability assessment activities. For example, all 10 ESF coordinators identified at least one nonemergency activity through which they coordinate with their ESFs’ primary and support agencies. Further, all 10 ESF coordinators identified at least one planning document—in addition to the information contained in the NRF’s ESF annexes—that they had developed for their ESFs to further define the roles, responsibilities, policies, and procedures for their ESFs’ coordination and execution. We found, however, that the ESF Leadership Group and FEMA, as the group’s chair, had not worked with other federal departments to issue supplemental guidance detailing expectations for the minimum standards for activities and product deliverables necessary to demonstrate ESF preparedness. In the absence of such guidance, we found that ESF coordinators are inconsistently carrying out their emergency response preparedness activities. We also found that, while federal departments have identified emergency response capability gaps through national-level exercises, real-world incidents such as Hurricane Sandy, and other assessments, the status of federal interagency implementation of these actions is not comprehensively collected by or reported to DHS or FEMA and, as a result, DHS’s and FEMA’s ability to assess and report on the nation’s overall preparedness is hampered. Further, we found that FEMA’s plan to lead interagency actions to identify and address capability gaps in the nation’s preparedness to respond to improvised nuclear device (IND) attacks did not contain detailed program management information—such as specific timeframes, milestones, and estimated resources required to close any given capability gap—which is needed to better enable ongoing management oversight of gap closure efforts.
In our December 2014 report, we recommended that FEMA—in collaboration with other federal agencies—(1) issue supplemental guidance to ESF coordinators detailing minimum standards for activities and product deliverables necessary to demonstrate ESF preparedness, (2) develop and issue detailed program management information to better enable management oversight of the DHS IND Strategy’s recommended actions, and (3) regularly report on the status of corrective actions identified through prior national-level exercises and real-world disasters. DHS concurred with our recommendations and FEMA has taken actions in response. For example, in June 2015, FEMA issued guidance for ESF coordinators that details minimum standards for activities and product deliverables necessary to demonstrate ESF preparedness. The ESF Leadership Group established a set of preparedness performance metrics to guide ESF coordination, planning, and capabilities assessment efforts. The ESF Leadership Group-generated metrics set standardized performance targets and preparedness actions across the ESFs. Collectively, the metrics and reporting of these metrics should provide an opportunity to better measure preparedness efforts by assessing whether ESF coordination and planning is sufficient and whether required ESF capabilities are available for disaster response. In addition, FEMA developed a detailed program plan for its IND Program that provides a quantitative analysis of current work addressing existing capability gaps, linked to a project management tracking system that identifies specific dates for past, present, and upcoming milestones. We believe that FEMA’s actions in these areas have fully met the intent of these two recommendations.
FEMA officials also collected information on the status of National Level Exercise Corrective Actions from 2007 through 2014, an important step toward addressing our other recommendation. We are continuing to monitor FEMA’s efforts in this area; however, FEMA has not provided a timeframe for completing this effort. In September 2015, we reported on FEMA’s progress in working with its federal partners to implement the National Response Framework (NRF) Emergency Support Function #7 (ESF 7) Logistics Annex. We found that FEMA’s efforts reflect leading practices for interagency collaboration and enhance ESF 7 preparedness. For example, FEMA’s Logistics Management Directorate (LMD) has facilitated meetings and established interagency agreements with ESF 7 partners such as the Department of Defense and the General Services Administration, and identified needed quantities of disaster response commodities, such as food, water, and blankets. Additionally, FEMA tracks the percentage of disaster response commodities delivered by agreed-upon dates and available through FEMA and its ESF 7 partners. Regarding FEMA’s support of its state and local stakeholders, we found that FEMA could strengthen the implementation of its Logistics Capability Assessment Tool (LCAT). For example, FEMA—through LMD and its regional offices—has made progress in offering training and exercises for state and local stakeholders, developing the LCAT, and establishing an implementation program to help state and local stakeholders use the tool to determine their readiness to respond to disasters. However, we found that, while feedback from states that have used the LCAT has generally been positive, implementation of the program by FEMA’s regional offices has been inconsistent; 3 of 10 regional offices no longer promote or support LCAT assessments.
Further, LMD did not identify staff resources needed to implement the program, and did not develop program goals, milestones, or measures to assess the effectiveness of implementation efforts. In our September 2015 report, we recommended that FEMA identify the LMD and regional resources needed to implement the LCAT, and establish and use goals, milestones, and performance measures to report on the LCAT program implementation. DHS concurred with the recommendations and is taking actions to address them. For example, FEMA officials said they intend to work closely with regional staff to identify resources and develop a plan to monitor LCAT performance. In April 2015, we also reported on the status of FEMA’s development of the Logistics Supply Chain Management System (LSCMS) as part of a broader look at 22 acquisition programs at DHS. We reported that, according to FEMA officials, LSCMS can identify when a shipment leaves a warehouse and the location of a shipment after it reaches a FEMA staging area near a disaster location. At the time of our review, LSCMS could not track partner organizations’ shipments en route to a FEMA staging area, and lacked automated interfaces with its partners’ information systems. We also reported that DHS leadership had not yet approved a baseline establishing the program’s cost, schedule, and performance parameters. According to FEMA officials, FEMA’s partners and vendors can now receive orders directly from LSCMS and manually input their shipment data directly into a vendor portal, providing FEMA with the ability to track orders and shipments from time and date of shipment to the estimated time of arrival, but not the in-transit real-time location of shipments. They also said that the program baseline was still under consideration by DHS leadership at the time of our review. In addition, DHS’s Office of the Inspector General (OIG) issued a report on LSCMS in September 2014.
The DHS OIG made 11 recommendations designed to address operational deficiencies, such as identifying resources to ensure effective program management and developing a training program for staff, and FEMA concurred with them. As of July 2015, FEMA officials report that 5 of the OIG’s recommendations have been implemented, and the agency is taking steps to address the remaining 6 recommendations. In addition to these completed reviews of preparedness efforts, we currently have work underway for this committee assessing how FEMA’s regional coordination efforts support national preparedness. Specifically, we plan to assess and report on FEMA’s management of preparedness grants, implementation of the National Incident Management System, and interactions with regional advisory councils later this year. In September 2012, we reported on FEMA’s processes for determining whether to recommend major disaster declarations. We found that FEMA primarily relied on a single criterion, the per capita damage indicator, to determine whether to recommend to the President that a jurisdiction receive Public Assistance (PA) funding. However, because FEMA’s per capita indicator at the time of our report, set at $1 in 1986, did not reflect the rise in (1) per capita personal income since it was created or (2) inflation from 1986 to 1999, the indicator was artificially low. Further, the per capita indicator did not accurately reflect a jurisdiction’s capability to respond to or recover from a disaster without federal assistance. We identified other measures of fiscal capacity, such as total taxable resources, that could be more useful in determining a jurisdiction’s ability to pay for damages to public structures. We also reported that FEMA can recommend increasing the usual proportion (75 percent) of costs the federal government pays (the federal share) for PA (to 90 percent) when costs reach a certain threshold.
However, FEMA had no specific criteria for assessing requests to raise the federal share for emergency work to 100 percent, but instead relied on its professional judgment. In our September 2012 report, we recommended, among other things, that FEMA develop a methodology to more accurately assess a jurisdiction’s capability to respond to and recover from a disaster without federal assistance, develop criteria for 100 percent cost adjustments, and implement goals for and monitor administrative costs. FEMA concurred with the first two recommendations, but partially concurred with the third, saying it would conduct a review before taking additional action. Since that time, FEMA has submitted a report to Congress outlining various options that the agency could take to assess a jurisdiction’s capability to respond to and recover from a disaster. We met with FEMA in April 2015 to discuss its report to Congress. FEMA officials told us that the agency would need to undertake the rulemaking process to implement a new methodology that provides a more comprehensive assessment of a jurisdiction’s capability to respond to and recover from a disaster without federal assistance. They said that they identified three potential options, which, taken individually or in some combination, would implement our recommendation by (1) adjusting the PA per capita indicator to better reflect current national and state-specific economic conditions; (2) developing an improved methodology for considering factors in addition to the PA per capita indicator; or (3) implementing a state-specific deductible for states to qualify for PA. Although FEMA initially concurred with our recommendation to develop criteria for 100 percent cost adjustments, it has concluded that it will not establish specific criteria or factors to use when evaluating requests for cost share adjustments.
FEMA conducted a historical review of the circumstances that previously led to these cost share adjustments, and determined that each circumstance was unique in nature and could not be used to develop criteria or factors for future decision making. Based on FEMA’s review and its clarification of its intent to use cost share adjustments only during rare catastrophic events, we agreed that its decision could lead to better stewardship of federal dollars. In December 2014, we reported on FEMA’s progress in improving its ability to detect improper and potentially fraudulent payments. Specifically, while safeguards were generally not effective after Hurricanes Katrina and Rita, the controls FEMA implemented since then, designed to improve its capacity to verify applicants’ eligibility for assistance, have improved the agency’s ability to prevent improper or potentially fraudulent Individuals and Households Program (IHP) payments. We reported that as of August 2014, FEMA stated that it had provided over $1.4 billion in Hurricane Sandy assistance through its IHP—which provides financial awards for home repairs, rental assistance, and other needs—to almost 183,000 survivors. We identified $39 million, or 2.7 percent of that total, that was at risk of being improper or fraudulent, compared to 10 to 22 percent of similar assistance provided for Hurricanes Katrina and Rita. However, in December 2014, we identified continued challenges in the agency’s response to Hurricane Sandy, including weaknesses in the agency’s validation of Social Security numbers, among other things. Although FEMA hired contractors to inspect damaged homes to verify the identity and residency of applicants and that reported damage was a result of Hurricane Sandy, we found 2,610 recipients with potentially invalid identifying information who received $21 million of the $39 million we calculated as potentially improper or fraudulent.
Our analysis included data from the Social Security Administration (SSA) that FEMA does not use, such as SSA’s most-complete death records. We also found that FEMA and state governments faced challenges in obtaining the data necessary to help prevent duplicative payments from overlapping sources. In addition, FEMA relied on self-reported data from applicants regarding private home insurance—a factor the agency uses in determining benefits, as federal law prohibits FEMA from providing assistance for damage covered by private insurance; however, that data can be unreliable. In our December 2014 report, we recommended, among other things, that FEMA collaborate with SSA to obtain additional data, collect data to detect duplicative assistance, and implement an approach to verify whether recipients have private insurance. FEMA concurred with the report’s five recommendations and has taken actions to address them. For example, in response to our recommendations, FEMA started working with SSA to determine the feasibility and cost effectiveness of incorporating SSA’s identity verification tools and full death file data into its registration process, and expects to make its determination by the end of 2015. FEMA indicated that, depending on the determination, one option would be to enter into a Computer Matching Agreement with SSA. FEMA has also approved plans to improve the standardization, quality, and accessibility of data across its own disaster assistance programs, including efforts to enhance data sharing with state and local partners, which should allow it to more readily identify potentially duplicative assistance. Also, after reviewing various options, FEMA has decided to add an additional question to its application to help confirm self-reported information on whether applicants have private insurance. We are reviewing these actions to determine if they reflect sufficient steps to consider our recommendations fully implemented.
In July 2015, we reported that during the Hurricane Sandy recovery, five federal programs—FEMA’s Public Assistance (PA) and Hazard Mitigation Grant Program (HMGP), the Federal Transit Administration’s Public Transportation Emergency Relief Program, the Department of Housing and Urban Development’s Community Development Block Grant-Disaster Recovery, and the U.S. Army Corps of Engineers’ Hurricane Sandy program—helped enhance disaster resilience—the ability to prepare and plan for, absorb, recover from, and more successfully adapt to disasters. We found that these programs funded a number of disaster-resilience measures, for example, acquiring and demolishing at-risk properties, elevating flood-prone structures, and erecting physical flood barriers. State and local officials from all 12 states, the District of Columbia, and New York City in the Sandy-affected region reported that they were able to effectively leverage federal programs to enhance disaster resilience, but also experienced challenges. The challenges included implementation challenges within PA and HMGP, limitations on comprehensive risk reduction approaches in a post-disaster environment, and local ability and willingness to participate in mitigation activities. We found there was no comprehensive, strategic approach to identifying, prioritizing, and implementing investments for disaster resilience, which increased the risk that the federal government and nonfederal partners will experience lower returns on investments or lost opportunities to strengthen key critical infrastructure and lifelines. Most federal funding for hazard mitigation is available after a disaster, and there are benefits to investing in resilience post disaster. Individuals and communities affected by a disaster may be more likely to invest their own resources while recovering.
However, we concluded that the emphasis on the post-disaster environment can create a reactionary and fragmented approach in which disasters determine when and for what purpose the federal government invests in disaster resilience. In our July 2015 report, we recommended that (1) FEMA assess the challenges state and local officials reported and implement corrective actions as needed and (2) the Mitigation Framework Leadership Group (MitFLG) establish an investment strategy to identify, prioritize, and implement federal investments in disaster resilience. DHS agreed with both recommendations. With respect to the challenges reported by state and local officials, FEMA officials said the agency would seek input from federal, tribal, state, and local stakeholders as part of its efforts to reengineer the PA program, which it believes will address many of the issues raised in the report. In addition, DHS said that FEMA, through its leadership role in the MitFLG, would take action to complete an investment strategy by August 2017. We currently have work underway for this committee assessing several of FEMA’s disaster response and recovery programs. For example, we are reviewing FEMA’s urban search and rescue program, incident management assistance teams, and evacuation planning, as well as national disaster assistance programs for children and special needs populations. In addition, we are reviewing DHS’s national emergency communications programs and efforts to implement the National Disaster Recovery Framework. In December 2014, we reported on FEMA’s progress in taking steps to reduce and better control administrative costs—the costs of providing and managing disaster assistance. For example, FEMA issued guidelines intended to better control its administrative costs in November 2010. 
In addition, FEMA recognized that administrative costs have increased, and it has taken steps such as setting a goal in its recent strategic plan to lower these costs and creating administrative cost targets. Specifically, FEMA established a goal in its Strategic Plan for 2014-2018 to reduce its average annual percentage of administrative costs, as compared with total program costs, by 5 percentage points by the end of 2018. To achieve this goal, FEMA officials developed administrative cost goals for small, medium, and large disasters, and are monitoring performance against these goals. However, FEMA does not require that these targets be met, and we found that had FEMA met its targets, administrative costs could have been reduced by hundreds of millions of dollars. We found that FEMA continued to face challenges in tracking and reducing these costs. FEMA’s average administrative cost percentage for major disasters during the 10 fiscal years 2004 to 2013 was double the average during the 10 fiscal years 1989 to 1998. Further, we found that FEMA did not track administrative costs by major disaster program, such as Individual or Public Assistance, and had not assessed the costs versus the benefits of tracking such information. In our December 2014 report, we recommended that FEMA (1) develop an integrated plan to better control and reduce its administrative costs for major disasters, (2) assess the costs versus the benefits of tracking FEMA administrative costs by the Disaster Relief Fund program, and (3) clarify the agency’s guidance and minimum documentation requirements for direct administrative costs. FEMA agreed with the report and its recommendations. As of August 2015, FEMA told us it is developing an integrated plan to control and reduce administrative costs for major disaster declarations. 
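The target-tracking logic described above—an administrative cost percentage computed against total program costs and compared with a size-based target—can be sketched as follows. The actual dollar thresholds FEMA uses to classify disasters and the target percentages themselves are not given in the statement, so the values below are illustrative assumptions only.

```python
# Hypothetical sketch of size-based administrative cost targets.
# Thresholds and target percentages are assumed, not FEMA's actual values.

TARGETS = {"small": 0.12, "medium": 0.10, "large": 0.08}  # assumed targets

def size_category(total_obligations):
    """Classify a disaster by total obligations (assumed cutoffs)."""
    if total_obligations < 50e6:
        return "small"
    if total_obligations < 500e6:
        return "medium"
    return "large"

def admin_cost_gap(admin_costs, total_obligations):
    """Return (actual %, target %, dollars over target) for one disaster."""
    actual = admin_costs / total_obligations
    target = TARGETS[size_category(total_obligations)]
    over = max(0.0, (actual - target) * total_obligations)
    return actual, target, over

# Example: $90M in administrative costs on a $600M (large) disaster.
actual, target, over = admin_cost_gap(90e6, 600e6)
print(f"actual {actual:.0%}, target {target:.0%}, ${over / 1e6:.0f}M over")
# actual 15%, target 8%, $42M over
```

Summed across hundreds of major disasters, even a few percentage points of gap per disaster is how the report's "hundreds of millions of dollars" of potential reductions arises.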
According to FEMA officials, their Disaster Administrative Cost Integrated Project Team has been working over the past several months to analyze FEMA’s historical administrative costs, identify cost drivers, document and evaluate the delivery of disaster assistance, and set an improved framework to standardize the way FEMA does business. FEMA officials previously told us that the plan will describe the steps the agency plans to take to reduce administrative costs, milestones for accomplishing the reduction, and clear roles and responsibilities, including the assignment of senior officials and offices responsible for monitoring and measuring performance. FEMA also continues to assess the costs versus the benefits of tracking administrative costs by program. According to FEMA officials, this project requires connecting multiple disparate data sources. FEMA has identified some, but not all, of the data that must be integrated to track administrative costs by program area. FEMA is also evaluating its direct administrative costs pilot program, which applies a standard fixed percentage toward administrative costs. According to FEMA, if successful, results from this program could inform the development of additional guidance or regulatory modification, and similar approaches could be applied in future disasters. For current and other past disasters, FEMA told us it plans to provide clarifying guidance. According to FEMA, this information will be incorporated into the Public Assistance unified guidance document that is scheduled to be issued in January 2016. In July 2015, we reported on FEMA’s progress in addressing various long-standing workforce management challenges, including completing and integrating the strategic workforce planning efforts we have identified as needed since 2007. We found that FEMA had not yet resolved these challenges and fully addressed our prior workforce-related recommendations. 
However, according to agency officials, they plan to do so through efforts to develop (1) a new incident workforce planning model—pending final approval—that will determine the optimal mix of workforce components to include in FEMA’s disaster workforce, (2) a new Human Capital Strategic Plan—which was to have been finalized in September 2015—that will help ensure it has the optimal workforce to carry out its mission, and (3) an executive-level steering committee to help ensure that these workforce planning efforts are completed and integrated. In addition, we discussed FEMA’s continuing, long-standing challenges in implementing an employee credentialing system and addressing employee morale issues. We also reported that FEMA faces challenges in implementing and managing its two new workforce components, the Surge Capacity Force and the FEMA Corps. (The Surge Capacity Force consists of employees of DHS components who volunteer to deploy to provide support to FEMA in the event of a disaster. The FEMA Corps consists of temporary national service participants of the National Civilian Community Corps who complete FEMA service projects to complement its disaster-related efforts.) For example, as of January 2015, the Surge Capacity Force was at 26 percent of its staffing target of 15,400 personnel, and FEMA did not have a plan for how it will increase the number of volunteers to meet its goals. We also found that FEMA did not collect full cost information, including the costs of FEMA Corps background investigations and the costs of the salaries and benefits of Surge Capacity Force volunteers who are paid by DHS components while they are deployed. Further, we concluded that FEMA did not assess all aspects of program performance because it did not have performance measures that correspond to all program goals, and that doing so would better enable FEMA to assess whether it was meeting its program goals. 
In our July 2015 report, we recommended, among other things, that FEMA develop a plan to increase Surge Capacity Force volunteer recruitment and collect additional cost and performance information for its new workforce components. DHS concurred with the five recommendations in the report and identified related actions the department is taking to address them, primarily focusing on FEMA’s plans to issue a new strategic workforce plan. However, FEMA did not meet its September milestone for issuing the plan and told us it expects to issue the plan on October 30, 2015. We reported in September 2015 on FEMA’s progress in building and managing its contracting workforce and structure to support disasters since enactment of the Post-Katrina Act. We found that the size of FEMA’s contracting officer workforce at the end of fiscal year 2014 was more than triple its size at the time of Hurricane Katrina, growing from a total of 45 contracting officers in 2005 to 163 contracting officers at the end of fiscal year 2014. FEMA’s workforce increases are due in part to the creation of a headquarters staff in 2010 charged with supporting disasters, known as the Disaster Acquisition Response Team (DART). DART has gradually assumed responsibility for administering the majority of FEMA’s disaster contract spending, but FEMA does not have a process for how the team will prioritize its work when it is deployed during a busy disaster period. During this period of growth in the size of its contracting officer workforce, FEMA has struggled with attrition at times. We found that this turnover in FEMA’s contracting officer workforce has had a particular impact on smaller regional offices, which, with only one or two contracting officers, face gaps in continuity. Further, we found that FEMA’s 2011 agreement that establishes headquarters and regional responsibilities for overseeing regional contracting staff poses challenges for FEMA to cohesively manage its contracting workforce. 
For example, regional contracting officers have a dual reporting chain to both regional supervisors and headquarters supervisors, which heightens the potential for competing interests for the regional contracting officers. Furthermore, FEMA has not updated the agreement to incorporate lessons learned since creating DART, even though the agreement states it will be revisited each year. We also found that FEMA has not fully implemented the four Post-Katrina Act contracting requirements we examined, due in part to incomplete guidance and that inconsistent contract management practices during disaster deployments—such as incomplete contract files and reviews—create oversight challenges. In our September 2015 report, we made eight recommendations to the FEMA Administrator and one recommendation to DHS to help ensure FEMA is prepared to manage the contract administration and oversight requirements of several simultaneous large-scale disasters or a catastrophic event, to improve coordination and communication between headquarters and regional offices with respect to managing and overseeing regional contracting officers, and to improve the implementation of contracting provisions under the Post-Katrina Act. DHS concurred with our recommendations and identified steps FEMA plans to take to address them within the next year. Specifically, FEMA plans to update relevant guidance and policies related to headquarters and regional office roles and responsibilities for managing regional contracting officers and disaster contracting requirements. We currently have work underway for this committee assessing additional FEMA management areas, including assessing FEMA’s management of information technology systems that support disaster response and recovery programs. We plan to report on that work early next year. Chairman McSally, Ranking Member Payne and members of the subcommittee, this completes my prepared statement. 
I would be pleased to respond to any questions that you may have at this time. If you or your staff members have any questions about this testimony, please contact me at (404) 679-1875 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Christopher Keisling, Assistant Director; Aditi Archer, Tracey King, and David Alexander made contributions to this testimony. Emergency Management: FEMA Collaborates Effectively with Logistics Partners but Could Strengthen Implementation of Its Capabilities Assessment Tool. GAO-15-781. Washington, D.C.: September 10, 2015. Emergency Preparedness: Opportunities Exist to Strengthen Interagency Assessments and Accountability for Closing Capability Gaps. GAO-15-20. Washington, D.C.: December 4, 2014. National Preparedness: Actions Taken by FEMA to Implement Select Provisions of the Post-Katrina Emergency Management Reform Act of 2006. GAO-14-99R. Washington, D.C.: November 26, 2013. National Preparedness: FEMA Has Made Progress in Improving Grant Management and Assessing Capabilities, but Challenges Remain. GAO-13-456T. Washington, D.C.: March 19, 2013. Extreme Weather Events: Limiting Federal Fiscal Exposure and Increasing the Nation’s Resilience. GAO-14-364T. Washington, D.C.: February 12, 2014. National Preparedness: FEMA Has Made Progress, but Additional Steps Are Needed to Improve Grant Management and Assess Capabilities. GAO-13-637T. Washington, D.C.: June 25, 2013. Managing Preparedness Grants and Assessing National Capabilities: Continuing Challenges Impede FEMA’s Progress. GAO-12-526T. Washington, D.C.: March 20, 2012. FEMA Has Made Limited Progress in Efforts to Develop and Implement a System to Assess National Preparedness Capabilities. GAO-11-51R. Washington, D.C.: October 29, 2010. Emergency Preparedness: FEMA Faces Challenges Integrating Community Preparedness Programs into Its Strategic Approach. GAO-10-193. 
Washington, D.C.: January 29, 2010. National Preparedness: FEMA Has Made Progress, but Needs to Complete and Integrate Planning, Exercise, and Assessment Efforts. GAO-09-369. Washington, D.C.: April 30, 2009. Hurricane Sandy: An Investment Strategy Could Help the Federal Government Enhance National Resilience for Future Disasters. GAO-15-515. Washington, D.C.: July 30, 2015. Budgeting for Disasters: Approaches for Budgeting for Disasters in Selected States. GAO-15-424. Washington, D.C.: March 26, 2015. Hurricane Sandy: FEMA Has Improved Disaster Aid Verification but Could Act to Further Limit Improper Assistance. GAO-15-15. Washington, D.C.: December 12, 2014. Disaster Resilience: Actions Are Underway, but Federal Fiscal Exposure Highlights the Need for Continued Attention to Longstanding Challenges. GAO-14-603T. Washington, D.C.: May 14, 2014. Federal Disaster Assistance: Improved Criteria Needed to Assess a Jurisdiction’s Capability to Respond and Recover on Its Own. GAO-12-838. Washington, D.C.: September 12, 2012. Disaster Recovery: FEMA’s Long-term Assistance Was Helpful to State and Local Governments but Had Some Limitations. GAO-10-404. Washington, D.C.: March 30, 2010. Disaster Housing: FEMA Needs More Detailed Guidance and Performance Measures to Help Ensure Effective Assistance after Major Disasters. GAO-09-796. Washington, D.C.: August 28, 2009. Hurricanes Gustav and Ike Disaster Assistance: FEMA Strengthened Its Fraud Prevention Controls, but Customer Service Needs Improvement. GAO-09-671. Washington, D.C.: June 19, 2009. Disaster Recovery: FEMA’s Public Assistance Grant Program Experienced Challenges with Gulf Coast Rebuilding. GAO-09-129. Washington, D.C.: December 18, 2008. Federal Emergency Management Agency: Additional Planning and Data Collection Could Help Improve Workforce Management Efforts. GAO-15-437. Washington, D.C.: July 9, 2015. Federal Emergency Management Agency: Opportunities Exist to Strengthen Oversight of Administrative Costs for Major Disasters. GAO-15-65. 
Washington, D.C.: December 17, 2014. Federal Emergency Management Agency: Opportunities to Achieve Efficiencies and Strengthen Operations. GAO-14-687T. Washington, D.C.: July 24, 2014. High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013. FEMA Reservists: Training Could Benefit from Examination of Practices at Other Agencies. GAO-13-250R. Washington, D.C.: March 22, 2013. Disaster Assistance Workforce: FEMA Could Enhance Human Capital Management and Training. GAO-12-538. Washington, D.C.: May 25, 2012. Federal Emergency Management Agency: Workforce Planning and Training Could Be Enhanced by Incorporating Strategic Management Principles. GAO-12-487. Washington, D.C.: April 26, 2012. FEMA Has Made Progress in Managing Regionalization of Preparedness Grants. GAO-11-732R. Washington, D.C.: July 29, 2011. Government Operations: Actions Taken to Implement the Post-Katrina Emergency Management Reform Act of 2006. GAO-09-59R. Washington, D.C.: November 21, 2008. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
A little more than 10 years ago, Hurricane Katrina caused an estimated $108 billion in damage, making it the largest, most destructive natural disaster in our nation's history. Following the federal response to Hurricane Katrina in 2005, Congress passed the Post-Katrina Emergency Management Reform Act of 2006 (Post-Katrina Act). The act contained over 300 provisions that are intended to enhance national preparedness, emergency response and recovery, and the management of select disaster programs. In October 2012, another catastrophic hurricane—Hurricane Sandy—caused $65 billion in damage and once again tested the nation's preparedness and emergency response and recovery functions. GAO has issued multiple reports that discuss a wide variety of emergency management issues reflecting the federal government and FEMA's efforts to implement provisions of the Post-Katrina Act and address various aspects of emergency management. This statement discusses GAO's work on the progress FEMA has made and challenges that it still faces in three areas: (1) national preparedness, (2) disaster response and recovery, and (3) selected FEMA management areas. This statement is based on previously issued GAO reports from 2012 to 2015. GAO's recent work highlights both the progress and challenges in the Federal Emergency Management Agency's (FEMA) efforts to lead national preparedness efforts, particularly efforts to assess emergency support capabilities and enhance logistics capabilities. Assessing capabilities is critical to ensure that they will be available when needed in emergencies. For example, GAO found in December 2014 that federal departments have identified emergency response capability gaps through national-level exercises and real-world incidents, but the status of agency actions to address these gaps is not collected by or reported to Department of Homeland Security or FEMA. 
GAO recommended that FEMA—in collaboration with other federal agencies—regularly report on the status of corrective actions. FEMA agreed with GAO's recommendation and is taking action to address it but has not established a timeframe for completion. GAO's recent work on disaster response and recovery programs also identified progress and challenges in a number of areas. From fiscal years 2004 through 2013, FEMA obligated over $95 billion in federal disaster assistance for the 650 major disasters declared during this timeframe. With the growing cost of disasters, it is vital for the federal government to address its fiscal exposure and ensure that response and recovery programs are as efficient and effective as possible. For example, in December 2014, GAO found that FEMA demonstrated progress in controlling potentially fraudulent payments to individuals during Hurricane Sandy as compared to Hurricanes Katrina and Rita. However, GAO reported continued challenges, including weaknesses in the validation of Social Security numbers, and made recommendations to strengthen these processes. Further, in July 2015, GAO reported that states and localities affected by Hurricane Sandy were able to effectively leverage federal programs to enhance resilience during their recovery. However, states experienced continued challenges in implementing certain FEMA recovery programs, such as Public Assistance. GAO also found that there was no comprehensive, strategic approach to identifying, prioritizing, and implementing investments for disaster resilience. GAO made recommendations to address these continued challenges, and FEMA is taking a range of actions to address them. FEMA has also taken steps to strengthen a number of its management areas, but GAO reported that additional progress is needed in several areas. 
Specifically, in December 2014, GAO found that FEMA had taken steps to control its administrative costs—the costs of providing and managing disaster assistance—by issuing guidelines and reduction targets. However, GAO reported that FEMA does not require the targets to be met and continued to face challenges tracking the costs. Among other things, GAO recommended that FEMA develop an integrated plan to better control and reduce its administrative costs for major disasters. Further, in July 2015, GAO reported that FEMA had taken action to address various long-standing workforce management challenges but faced multiple challenges, including implementing and managing its temporary workforces and completing strategic workforce planning efforts. FEMA agreed with GAO's recommendations and is taking action to address them. GAO has made numerous recommendations in its prior reports to FEMA designed to address the challenges discussed in this statement. FEMA has taken actions to address many of these recommendations.
The federal acquisition process involves a number of steps that are common to all government agencies such as solicitation, evaluation, and contract award. Agencies are increasingly leveraging electronic data systems to streamline acquisitions and reduce costs. According to GSA officials, as these systems gained greater use within the government, some agencies developed their own unique data systems to support acquisition activities. These systems served specific roles in the acquisition process, such as contractor registration or performance tracking. There was little coordination in data systems across the government. Agencies created their own systems based on different standards which meant that information could not be readily shared. These stove-piped systems resulted in higher costs to the government; created inefficiencies; and made it confusing for government workers, vendors, and the public to use the systems. IAE was initiated to integrate, standardize, and streamline some of the many different acquisition data systems used throughout the government. The program was charged with identifying how information systems could be used to integrate the acquisition functions common to different agencies and to implement governmentwide data systems. Common acquisition functions include, for example, posting contract opportunities, registering contractors who are interested in doing business with the government, assessing contractor past performance, and tracking and reporting contract actions. Bringing disparate data systems together and providing a shared services resource to enter and retrieve acquisition information should help to eliminate unnecessary and repetitive steps in the acquisition process and reduce information technology costs. When IAE began, OMB directed GSA to execute and manage the initiative. 
GSA officials said that they worked with other government agencies that would use IAE’s systems and established a collaborative governance structure that would allow agency users to set the initiative’s priorities and budget. The Acquisition Committee for E-Gov (ACE), a subcommittee of the Chief Acquisition Officers Council, provides overall governance for IAE. The ACE has several responsibilities, including providing strategic direction for IAE, approving IAE’s annual budget and work plan, and ensuring IAE investments align with E-Gov business goals. The ACE is currently co-chaired by representatives from the Departments of Defense and the Interior. IAE has developed in two stages using different acquisition strategies. Initially, GSA focused on establishing a portfolio of standardized governmentwide systems through an acquisition strategy known as “adopt, adapt, acquire.” Using this strategy, GSA adopted or adapted existing agency-specific systems for governmentwide use. If there was no viable system that could be adapted or adopted to meet an identified need, GSA acquired a new system. GSA also established an IAE funding strategy that consisted of contributions from agencies that use IAE systems. In 2008, to further eliminate redundancy, reduce costs, and improve efficiency, GSA began consolidating its portfolio of systems into one integrated system called the System for Award Management (SAM). Unlike the existing systems (sometimes called “legacy” systems), each of which was designed, developed, and operated by a single contractor, IAE relies on multiple vendors to perform these same tasks for SAM. The intent of this approach is to enhance competition and innovation and for the government to own the software associated with the system. SAM will be developed in phases. In each phase, capabilities from selected IAE systems will be added to SAM and those legacy systems will then be shut down. 
During the first IAE stage, GSA worked to create a portfolio of governmentwide systems through an acquisition strategy known as “adopt, adapt, acquire.” GSA and OMB officials surveyed various government stakeholders to develop an inventory of existing data systems and to identify additional data-related needs of the government. Using this information, the ACE directed GSA to adopt or adapt existing agency-specific systems for governmentwide use. For example, the Central Contractor Registration (CCR) database, where contractors register certain business information prior to being considered for contract awards, was a Department of Defense (DOD) system that IAE adopted for governmentwide use in 2003. GSA officials believed DOD’s system met the government’s requirements, and adopting it was a better alternative than developing a new system. The Federal Procurement Data System – Next Generation (FPDS-NG) is an example of a system that IAE adapted. FPDS, the FPDS-NG predecessor, was initially implemented in 1978, and in 2003 GSA hired a vendor to modernize the system. When no existing systems could be adopted or adapted for governmentwide use, IAE’s strategy was to acquire new systems from software developers. For example, GSA contracted with IBM in 2004 to develop and operate the Online Representations and Certifications Application (ORCA) database for firms to submit certifications on matters such as firm size and ownership status. Table 1 identifies the portfolio of systems that were included in the first stage of IAE up through 2008 and whether each system was adopted, adapted, or acquired. Shortly after IAE was established, GSA and the ACE created a funding structure in which agencies contribute to the program based on their level of contracting activity. 
GSA negotiated memorandums of understanding (MOUs) with the 24 departments and agencies covered by the Chief Financial Officers Act to collect funding contributions, which pay for the development, operations, and maintenance of IAE’s portfolio. When developing its annual budget, GSA estimates what the cost of operating IAE will be and then determines each agency’s contribution based on its contracting activity (number and value of contracts) in the prior year. For example, DOD is the largest agency in terms of the number and value of contracts awarded and therefore contributes the most, 65 percent of the total. The Federal Funding Accountability and Transparency Act of 2006 (Transparency Act) created new reporting requirements for federal loan and grant recipients that increased the use of certain IAE systems. For example, to comply with the Transparency Act, OMB required grant recipients to register in CCR. In 2008, GSA negotiated separate MOUs with 22 departments and agencies for additional contributions to fund the higher costs associated with providing greater support to grant and loan recipients. Overall, since 2002, about $396 million has been allocated to IAE, as shown in table 2. Another key component that supports the IAE systems is GSA’s contract with Dun & Bradstreet for the use of the Data Universal Numbering System (DUNS) and other services to verify and standardize information on contract, grant, and loan recipients. GSA uses the DUNS numbers as unique identifiers for organizing and tracking these entities, including making linkages between parent and subsidiary businesses, within and across the IAE systems. The federal government has used DUNS numbers since 1978, and the Federal Acquisition Regulation (FAR) has required all prospective government contractors to obtain DUNS numbers since 1998. Since 2003, OMB has also required prospective grant recipients to obtain DUNS numbers. 
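The activity-based contribution structure described above—each agency's share of the annual IAE budget proportional to its prior-year contracting activity—can be sketched as follows. The report does not specify exactly how GSA weights contract counts against dollar value, so this sketch uses dollar value alone as the activity measure, and the agency obligation figures are illustrative (chosen so that DOD's share comes out to the 65 percent cited in the text), not real data.

```python
# Sketch of activity-based cost allocation. How GSA actually combines
# contract counts and dollar values is not specified; values are assumed.

def allocate_contributions(annual_budget, activity_by_agency):
    """Split annual_budget across agencies in proportion to activity."""
    total = sum(activity_by_agency.values())
    return {agency: annual_budget * value / total
            for agency, value in activity_by_agency.items()}

# Illustrative prior-year contract obligations (not real figures),
# constructed so DOD's share is 65 percent of the total:
activity = {"DOD": 325e9, "DOE": 75e9, "HHS": 60e9, "Others": 40e9}
shares = allocate_contributions(30e6, activity)
print({agency: round(v / 1e6, 2) for agency, v in shares.items()})
# {'DOD': 19.5, 'DOE': 4.5, 'HHS': 3.6, 'Others': 2.4}  ($ millions)
```

A proportional formula like this keeps the allocation self-adjusting: as an agency's contracting activity grows or shrinks year over year, its contribution moves with it without renegotiating the MOUs.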
IAE’s contract with Dun & Bradstreet also supports the use of DUNS numbers for other governmentwide information systems, such as USASpending.gov. GSA’s current contract with Dun & Bradstreet was awarded in 2010 and is valued at over $135 million for up to 8 years. The Dun & Bradstreet contract is the largest IAE contract. In December 2008, the ACE approved a proposal to aggregate the IAE data systems into a new System for Award Management (SAM). GSA officials said that while the existing IAE systems had provided benefits, additional efficiencies could be achieved. For example, the systems contained overlapping data, had separate sign-on procedures, and each system had different hardware, software, and helpdesks. Consolidating the IAE portfolio was intended to reduce costs by eliminating redundancy, streamlining acquisition processes, and consolidating infrastructure. GSA is relying on an acquisition strategy to develop SAM that is different from the one it used in the past, when GSA turned to a single contractor to develop, operate, and support each IAE system. SAM will be split into multiple components with separate contractors responsible for (1) system design and operations, (2) software development, (3) hosting services, and (4) help desk support (see table 3). The new approach to developing SAM is intended to address lessons learned from past IAE systems. Unlike the legacy systems, the government will own the SAM software as open-source code, the system architecture, and all supporting hardware. IAE officials believe that an open-source approach to software and development will result in lower costs to the government because IAE will be able to avoid sole-source modifications to the system and competitively award future enhancement contracts. GSA officials said that in the past, system enhancements were expensive, in part because the incumbent contractors knew that GSA’s only alternative to a sole-source enhancement contract was to develop a new system. 
Also, GSA officials said the plan to consolidate help desk services into one single contractor is an effective way to control cost and service levels. With SAM, the system design contractor (IBM) will be responsible for developing the system architecture, defining technical requirements, specifying data migration procedures for each of the legacy systems, and operating and maintaining SAM. Once IBM has specified the technical requirements and data migration procedures for the legacy systems, a second contractor will be responsible for writing the software code that will make up SAM. GCE will write the code for the first phase and GSA will competitively award software contracts for the subsequent phases. IBM will then test and validate the software, implement the system migration, and begin operating and maintaining the new SAM system. A third contractor (Qwest) will provide hosting services, which involves providing a secure facility to physically house SAM and power and Internet connectivity. GSA will provide the hardware (hard drives, servers, and other equipment) and software (operating system, databases, and other software licenses) for the hosting facility. Finally, a fourth contractor (HP) will be responsible for providing a consolidated help desk to support SAM users. GSA initially planned to migrate IAE systems to SAM in four phases based on groups of legacy IAE systems (see table 4). GSA and ACE officials viewed a phased approach as having less risk than replacing all the legacy systems at the same time. The timing of each phase was generally established to coincide with expiring legacy system contracts. As each phase is completed, the capabilities of the systems in that phase will be added to SAM and the legacy systems will be shut down. As discussed above, each development phase requires contributions from the four SAM contractors, with GSA managing the various contracts. GSA officials anticipate that additional systems may be added to SAM in the future. 
For example, the contract with IBM includes an option to migrate the Past Performance Information Retrieval System (PPIRS), which provides access to past performance information on contractors, to SAM. In addition, FedReg has been merged with the Central Contractor Registration (CCR) and will be included in phase 1. GSA and its contractors have made progress in developing SAM and phase 1 is scheduled to be completed in May 2012. GSA also has worked with contractors to establish a hosting center for the system and a help desk to support users. However, since 2009, the costs of developing SAM have grown significantly. The higher development costs were primarily due to the failure to adequately execute the SAM hosting strategy as initially planned. To a lesser extent, external factors, including recent statutory requirements and policy changes, have contributed to higher operational costs as well by increasing the demand for help desk services. While IAE costs were increasing, the program also experienced a significant funding shortage in fiscal years 2011 and 2012. In response to rising costs and resource constraints, GSA officials have delayed SAM’s development schedule and taken other actions to reduce or defer other costs. GSA has made progress in consolidating the IAE systems. Specifically, phase 1 of SAM is nearly complete and GSA has created a consolidated hosting environment and established a single help desk called the Federal Service Desk (FSD). GSA and its contractor, IBM, have completed the overarching design of SAM as well as the phase 1 technical requirements. GSA officials also report that the agency has purchased the hosting hardware and software needed for phase 1 and IBM is preparing to make the hosting facility operational in time to launch phase 1. Phase 1 is scheduled to go live in May 2012 and will replace three IAE systems—CCR, ORCA, and EPLS. 
Officials report that the phase 1 software developer is currently working closely with IBM to coordinate the testing and validation of the phase 1 software. IBM has begun developing the phase 2 requirements and GSA is in the process of competing the phase 2 software development contract. Phase 3 efforts have not yet begun. GSA officials told us FSD currently provides help desk services for most of the IAE data systems. Help desk responsibility for three systems remains with their legacy vendors. GSA officials expect help desk services for these remaining systems to transition to FSD as they become part of SAM. Costs of the various SAM components have increased significantly over the past 3 years. GSA did not develop a formal cost baseline when SAM development was started, so we compared the initial contract value for each of the SAM components to the current contract estimates. GSA currently estimates that the various SAM-related contracts will cost $181.1 million. This represents an increase of $85 million, nearly 90 percent, over the initial contract award amounts, which totaled about $96 million (see fig. 1). Most of the cost growth, about $65 million, is due to higher than expected hosting costs. Hosting consists of a secure facility with Internet connectivity; the hardware on which the system will be installed; the operating system and other software necessary to operate the code that will make up SAM; and the operation and maintenance of the hosting environment. GSA estimated in 2008 that hosting costs for the IAE systems were $2.8 million and that annual costs would be much less than that after moving to a single hosting environment. However, we estimate that SAM hosting costs will average $8 million to $9 million per year. The higher costs are largely due to GSA omitting key components from its contracting strategy for acquiring hosting services. GSA's initial strategy was to contract with a single company for all of these hosting services. 
However, shortly after beginning SAM development, IAE awarded a contract (to Qwest) for a more limited set of hosting services that only included the hosting facility and Internet connectivity. It is not clear why GSA did not include the hosting hardware, software, and operation and maintenance services that were needed. Program officials told us that at the time they believed that the multiagency telecommunications contract used to obtain hosting services from Qwest did not offer the comprehensive services that were needed. Officials also said they thought IBM was responsible for providing these items, but later realized that was not the case. GSA decided to purchase the hosting hardware and software itself under existing GSA schedule contracts at an estimated cost of $29 million. After negotiations with IBM, GSA modified IBM's contract in June 2011, adding $36 million to the $74 million contract price to have IBM install and operate and maintain the hosting hardware and software in Qwest's facility. It took GSA more than a year to finalize its current hosting approach and program officials said they have purchased hosting hardware and software through 13 different contracts instead of just 1 contract as intended under their original hosting strategy. The help desk function, FSD, also experienced cost growth over its first years of operation as the expected cost has nearly doubled to $33 million. Most of this growth appears to have resulted from factors outside of GSA's control. GSA officials told us the FSD contract price is driven by the amount of support activity provided under the contract. The higher costs reflect a greater than expected demand associated primarily with one data system, CCR. This system serves as a registry for any organization that wants to do business with the federal government. Several events occurred that substantially increased the number of CCR help desk calls. 
Both the Federal Funding Accountability and Transparency Act of 2006 (Transparency Act) and the American Recovery and Reinvestment Act of 2009 (Recovery Act) included provisions that increased or had the effect of increasing the number of CCR registrants. Specifically, the Transparency Act contained requirements for a single searchable website with data on federal loan, grant, and contract recipients, which prompted the government to require grant recipients to register in CCR. The Recovery Act temporarily increased the number of loan and grant recipients, which also led to greater numbers of CCR registrants. Also, in late 2008, the CCR login process changed in response to actions taken by DOD to improve password security measures. As a result of these changes, there was a drastic rise in help desk activity from CCR customers. While SAM costs were beginning to increase, the program also did not receive funding increases it requested. Up through fiscal year 2010, IAE was primarily funded through agency contributions. When the ACE approved SAM development in 2008, program officials believed that agency contributions would be sufficient to cover the development costs. However, GSA had underestimated its funding needs and soon after the start of SAM development, GSA officials recognized that the amount of agency contributions was insufficient to pay to operate the existing IAE systems and develop SAM over the next several years. GSA officials told us they consulted with OMB and considered various funding options to pay for the development of SAM, including increased agency contributions, a separate appropriation request, or user fees. Ultimately, with OMB’s support, GSA decided to seek additional funding through an appropriation and requested $15 million for fiscal year 2011. 
The program received $7 million of the requested amount. GSA also requested a $38 million appropriation in fiscal year 2012 for SAM, but did not receive any appropriations from Congress for the year. GSA has made a $21 million appropriation request for fiscal year 2013. GSA officials responded to rising costs and limited resources by modifying and delaying the SAM schedule, and deferring payments or reducing contract requirements where possible. One of the most significant changes was GSA's decision to not transition FPDS-NG to the SAM contract as an interim step prior to FPDS-NG being fully integrated into SAM. Under the SAM design contract, FPDS-NG was scheduled to be transitioned in June 2010 from the FPDS-NG legacy contractor to IBM. IBM then would have been responsible for operating and maintaining FPDS-NG "as-is" under the SAM contract and GSA could have ended the FPDS-NG legacy contract. GSA officials said that the transition did not occur because they had neglected to account for the hardware and software required to host FPDS-NG in the new SAM hosting facility and could not afford to buy these components. Instead, GSA awarded a follow-on contract to the FPDS-NG legacy contractor that cost an unanticipated $5.4 million in fiscal year 2011 and is expected to cost another $3.8 million in fiscal year 2012 and a similar amount annually through 2015. GSA also delayed the schedule for moving the other IAE systems to SAM. In 2010, GSA expected to complete all of the development phases of SAM in early 2014, but under the current schedule the final phase will be completed in 2015, 20 months later than planned (see fig. 2). There was a 5-month delay in implementing phase 1. Delays for phases 2 and 3 are much longer, and GSA officials cited the higher costs for hosting services as the main reason for delaying the phases. 
Phase 2 has also recently been split into subphases. While the systems included in phase 2a have been delayed for several months, GSA officials said they can complete this phase with available resources because the systems in phase 2a will not require a significant investment in hosting hardware and software. Phase 2b will not be completed until mid-2014, approximately 2 years later than originally planned, in part because GSA cannot afford the estimated $21 million necessary to complete the phase. Furthermore, the migration of FPDS-NG is not scheduled to be completed until 20 months later than planned, in 2015. In addition to delaying SAM's development schedule, GSA officials said they have taken other steps to defer or reduce costs. In 2011, GSA modified the payment schedule for the Dun & Bradstreet DUNS contract to delay payments from fiscal year 2011 to fiscal year 2012. Under the original contract, GSA was scheduled to make an $18 million payment in August 2011. To free up funds in fiscal year 2011, GSA negotiated a modification with Dun & Bradstreet that allowed GSA to pay only $3.8 million in 2011 and deferred the remaining payments to later years. In addition, GSA cut FSD costs by reducing the required level of services stated in the contract. For instance, GSA capped the number of calls the contractor needs to respond to every month, which program officials said has reduced costs and made it easier to estimate future costs. However, the cap on calls may reduce the responsiveness of the help desk to users. GSA officials said they also stopped making investments in the legacy systems and stopped making all but the most minor of changes to the systems. For example, GSA officials said they would fix hyperlinks on the system websites and make other small corrections, but would avoid making larger changes to the legacy systems unless absolutely necessary. 
Schedule delays and other GSA actions taken in response to cost growth and funding shortages are likely to lead to further cost increases that pose a risk to IAE. Delaying the SAM schedule will require GSA to continue operating the legacy IAE systems, in some cases for years longer than originally expected. At the same time, GSA must contend with higher hosting and help desk costs that will extend over several more years. While GSA has taken some steps to reduce these costs, it has not reevaluated whether its current acquisition strategy, including its approach to acquire hosting services, is still the most cost-effective approach to implement SAM. In addition, although the SAM development phases have been pushed out several years, GSA has not modified its development contract with IBM to reflect these changes. The program continues to pay the same fixed-price amount to the contractor for system development activities as well as operation and maintenance of SAM, even though there was little to operate and maintain for the first 2 years of development. GSA delayed the SAM development schedule in response to cost growth and reduced funding, but those delays will result in additional cost increases as GSA has to extend the life of the legacy systems. For example, the decision to not transition FPDS-NG to the IBM contract as originally planned could increase GSA’s costs by approximately $16 million as GSA will have to continue operating FPDS-NG for 5 years longer than expected. Similarly, the 2-year delay in migrating FedBizOpps into SAM means that GSA will have to spend $2.8 million on FedBizOpps in fiscal year 2012. Assuming costs remain the same, continuing to operate FedBizOpps for 2 additional years will increase costs by approximately $5.6 million. Schedule delays may also increase FSD costs. GSA officials said that the majority of the help desk calls are associated with CCR whose migration has been delayed 5 months. 
In addition to paying the CCR legacy vendor to continue operating CCR for 5 additional months, GSA will also have to pay the FSD vendor to continue supporting CCR. GSA is also grappling with higher SAM development costs, but has not assessed whether its current acquisition approach is still cost-effective. For example, GSA abandoned its initial hosting strategy without evaluating the cost or schedule implications of doing so. The initial strategy to use a single contractor to provide consolidated hosting services was intended to achieve cost savings, but the revised approach, which relies on multiple contractors, has proven to be much more costly than expected and led to schedule delays. Hosting costs are now a primary impediment to moving forward because GSA cannot afford to purchase the hardware and software necessary to complete phases 2 and 3. In addition, according to program officials, GSA efforts to procure hosting hardware and software have resulted in 13 different contracts, the management of which has required additional program support resources. GSA also continues to pay the SAM development contractor, IBM, essentially the same amount called for in the original contract even though schedule delays have pushed work out into the future. IBM’s contract includes responsibility for designing as well as operating and maintaining SAM. According to the fixed-price contract, SAM will be developed in phases, yet the payment schedule specified that IBM was to be paid a set amount each month (approximately 3 percent of the total contract price) for all activities under the 36-month base contract. This payment schedule may have been appropriate under GSA’s initial plan, but the development schedule changed shortly after the contract was awarded and much of the work to migrate systems into SAM will occur much later than planned. 
While the SAM transition and migration schedules have changed considerably, GSA has not adjusted IBM's payment schedule to reflect the current development schedule. For example, IBM was not responsible for operating any IAE systems until the Excluded Parties List System (EPLS) was transitioned to the SAM contract in July 2011, 17 months into the contract. By that time, GSA had already paid IBM $6.3 million of the $20.3 million contract price for SAM operation and maintenance. GSA and IBM officials noted that payments to date have been for planning and preparing to migrate the legacy systems to SAM. However, under the original schedule, IBM would have performed these services as well as operated FPDS-NG for the same cost. Similarly, GSA has paid more than half of the contract price for phase 2 migration activities even though phase 2 is not scheduled to be completed until May 2014. We raised issues about SAM's increasing costs, the viability of the hosting approach, and the development contract structure with GSA officials, who recently told us that GSA has initiated an internal review, called a TechStat, of IAE. A TechStat is intended to be an evidence-based review of an underperforming information technology investment during which agency leadership reviews a program, examines performance, and develops corrective actions as necessary. Program officials said their current focus is on completing phase 1 of SAM, but they may revisit their hosting strategy once the phase is completed. GSA officials also told us that they will begin negotiating with IBM to change the contract to reflect current schedule changes and available funding. GSA's effort to consolidate the IAE legacy systems into SAM has the potential to reduce agency costs, eliminate redundancy, and streamline government acquisition processes. Two years into development, however, SAM is in trouble due to higher costs that planned funding levels do not cover. 
Most of the cost growth seen to date is the result of mistakes the program made. Rather than using a consolidated hosting strategy as initially proposed, the program adopted a piecemeal approach involving multiple sources that will cost about $65 million more than expected. The need for additional resources to cover the increase in hosting costs, however, coincided with significant funding shortfalls in the past 2 years and now the program cannot afford to develop SAM as planned. Despite dramatically different circumstances marked by higher costs and constrained resources, GSA has not reassessed its business case for SAM. Specifically, GSA has not assessed whether developing SAM is still a better option than maintaining the status quo or whether the current development strategy, involving multiple vendors, is more cost-effective than using a single vendor. Ensuring there is a sound business case for moving forward will be critical before establishing an acquisition strategy to address the program's problems. Also, while GSA has taken steps to reduce costs, by delaying development and deferring some costs to the future, there may be more that can be done to stretch available resources. For example, in light of higher than expected hosting costs, GSA has not reevaluated whether its hosting strategy is the most cost-effective approach. In addition, GSA has not modified the primary SAM development contract to align payments with program schedule delays. Although GSA officials recently indicated they will begin negotiating changes to the development contract, GSA continues to pay the contractor for operation and maintenance activities even though many of the IAE systems will not be migrated into SAM for several years. Tying contract payments to the migration of the data systems and schedule milestones would ensure that the government is not paying for work that has not yet been accomplished. 
To ensure that GSA has a sound approach for providing IAE services in the future, we recommend that the Administrator of GSA take the following two actions: (1) Reassess the SAM business case to compare the costs and benefits of various alternatives, such as terminating SAM development and continuing to operate the legacy systems; maintaining the current acquisition approach to developing SAM; or pursuing a different acquisition strategy for SAM, such as using a single contractor to develop and operate the system. (2) If the results of this assessment support continuing the current acquisition approach, then reevaluate the hosting strategy to ensure that it is the most cost-effective approach that can be supported with available resources, and take steps to ensure that the SAM development contract payments are more closely aligned with the program schedule and delivery of capabilities. We provided a draft of this report to GSA and OMB. In its written comments, GSA concurred with our recommendations and indicated that it will take appropriate action. GSA added that it has established an integrated project team that will reassess and develop a broad plan covering both SAM and the IAE program as a whole. GSA's written comments appear in appendix III. GSA also provided technical comments that we incorporated, as appropriate. OMB informed us that it did not have comments on the draft. We are sending copies of this report to interested congressional committees, the Administrator of General Services, and the Director of the Office of Management and Budget. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please call me at (202) 512-4841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
To determine how the General Services Administration (GSA) developed the Integrated Acquisition Environment (IAE) initiative, we interviewed IAE officials and analyzed relevant documents. Specifically, we interviewed former and current IAE officials and the two Acquisition Committee for E-Gov (ACE) co-chairs from the Departments of Defense and Interior. These individuals described the acquisition strategy and governance structure that IAE developed in its early years. We verified these accounts with historical documents, such as internal newsletters and minutes from the ACE meetings that documented IAE's development. To learn about the acquisition strategy IAE used to develop the System for Award Management (SAM), we interviewed IAE officials, reviewed IAE presentations, and analyzed SAM contract documents. We interviewed officials from IAE and the Office of Management and Budget (OMB) to learn about the program's funding arrangement, obtained historical funding documents, and reviewed four of the interagency memorandums of understanding (MOU) used to fund IAE. To determine the progress IAE has made in implementing SAM, we interviewed IAE officials and two of the contractors that are implementing SAM—IBM and GCE. We also reviewed IAE presentations, agency memorandums and communications, and analyzed SAM-related contracts. Due to the lack of a formal cost baseline when SAM development started, we focused on the growth of the individual SAM contracts, such as the IBM contract and the help desk contract. To determine SAM's schedule growth, we used the original schedule created by IBM shortly after the contract was awarded and compared that to the latest schedule IAE officials provided us. To understand and analyze the challenges IAE is facing in its consolidation, we interviewed officials from GSA, IAE, and OMB and analyzed SAM-related contracts. We also discussed IAE's acquisition strategy with information technology contractors such as IBM and GCE. 
In order to understand IAE’s budget issues, we analyzed budget documents identifying projected funding and expenditures. We also analyzed the structure of IBM’s contract and verified our findings with IAE officials. We conducted this performance audit from September 2011 to March 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Central Contractor Registration (CCR) CCR originally was a Department of Defense (DOD) data system that was brought into the Integrated Acquisition Environment (IAE) portfolio in 2003 and adapted for use across the federal government. CCR is the primary registrant database for the U.S. government. The government uses CCR to collect, validate, store, and disseminate data in support of agency acquisition and award missions. According to the Federal Acquisition Regulation, prospective contractors must register in CCR prior to the award of a contract. Also, to register in CCR, a firm must have a Dun & Bradstreet Data Universal Number System (DUNS) number. The General Services Administration (GSA) has a contract with Northrop Grumman Information Technology to operate and maintain CCR. This contract ends September 2012. Electronic Subcontracting Reporting System (eSRS) The Electronic Subcontract Reporting System (eSRS) was created in 2005 and intended to streamline the small business subcontracting program reporting process and provide the data to agencies in a manner that will enable them to more effectively manage the program. The Small Business Administration partnered with the IAE and other agency partners to develop the eSRS system. 
The eSRS is an Internet-based reporting tool that eliminates the need for contractors to submit and process Individual Subcontracting Reports (SF 294) and Summary Subcontracting Reports (SF 295) in hard copy. In 2007, the eSRS implemented an interface with FPDS-NG, which permits contractors to enter a contract number into eSRS and have the contract data retrieved from FPDS-NG for use in the subcontracting reports. IAE has a contract with Symplicity, the original developer of eSRS, to provide operation and maintenance of eSRS. This contract will expire in September 2012, and IAE has plans to enter an interim contract with the same vendor until the system is migrated to SAM. Excluded Parties List System (EPLS) The purpose of EPLS is to provide a single comprehensive list of individuals and firms excluded from receiving federal contracts or federally approved subcontracts and from certain types of federal financial and nonfinancial assistance and benefits. Contracting officers use EPLS to determine whether to enter into a transaction with a specific contractor. EPLS is also available to the general public. In 2011, IBM assumed responsibility to maintain and operate EPLS under the System for Award Management (SAM) contract. Federal Business Opportunities (FedBizOpps) FedBizOpps is the single point of entry for federal buyers to publish and for vendors to find federal business opportunities over $25,000 across departments and agencies. Vendors can conduct ad hoc searches or set up automatic queries to notify them when opportunities meeting their criteria are posted. IAE has a contract with Symplicity to operate and maintain FedBizOpps. IAE plans to exercise the two option years on the current contract, signed in 2011, and to extend it again until FedBizOpps is migrated to SAM. 
Federal Agency Registration (FedReg) In response to GAO’s classification of intragovernmental transactions as a governmentwide material weakness, OMB and the IAE collaborated with DOD to create FedReg in 2003. FedReg collects standard data on federal agency buyers and sellers who perform intragovernmental transactions. FedReg sends data on buyers and sellers to the Intragovernmental Transaction Exchange and Intragovernmental Transaction System to assist in tracking all intragovernmental transactions. FedReg also serves as a sort of government “Yellow Pages,” providing information on federal sellers of goods and services. All federal entities engaged in intragovernmental buying or selling must be registered. FedReg is now embedded within CCR. GSA has a contract with Northrop Grumman Information Technology to operate and maintain FedReg (and CCR). This contract ends September 2012. Federal Procurement Data System – Next Generation (FPDS-NG) The Federal Procurement Data System-Next Generation is a database that provides information on government contracting actions over $3,000, procurement trends, and achievement of socioeconomic goals, such as small business participation. In fiscal year 2011, there were nearly 17,000,000 transactions recorded in FPDS-NG. FPDS-NG has been the primary governmentwide contracting database since 1978, and it serves as the backbone for other government contracting data systems. Since 1982, GSA has administered the database on behalf of the Office of Federal Procurement Policy. GSA awarded the FPDS-NG contract to Global Computer Enterprises, Inc., in 2011, and can exercise option years through 2015. Wage Determinations OnLine.Gov (WDOL) WDOL provides a single location for federal contracting officers to obtain Service Contract Act and Davis-Bacon Act wage determinations. These acts require contractors and subcontractors to pay no less than the locally prevailing wages for services contracts and public works projects. 
In addition to wage determinations, the site also provides information on labor standards, federal and agency acquisition regulations, agency contracting processes, and other related information. WDOL is physically maintained by the National Technical Information Service, an agency of the Department of Commerce. Online Representations and Certifications Application (ORCA) This application enables prospective government contractors to electronically submit required certifications and representations for responses to government solicitations for all federal contracts, instead of using hard copies for individual awards. The representations and certifications can be considered current for up to one year. These representations and certifications include certifications of socioeconomic status, affirmative action compliance, and compliance with veterans' employment reporting requirements. IBM has been the vendor for ORCA since its inception in 2004. In 2011, IBM assumed responsibility to maintain and operate ORCA under the SAM contract. In addition to the contact named above, John Oppenheim (Assistant Director); Marie Ahearn; E. Brandon Booth; Jillian Fasching; Madhav Panwar; Jeffrey Sanders; Benjamin Shattuck; Roxanna Sun; Robert Swierczek; and Rebecca Wilson made key contributions to this report.
The U.S. Government spends more than $500 billion each year on contracts. To ensure contracts are managed effectively, the government has established policies and procedures for advertising, awarding, administering, and reporting on them. Historically, data systems used to implement these steps have been fragmented and duplicative, with multiple systems across different agencies providing similar services. The Integrated Acquisition Environment (IAE) was initiated in 2001 to bring together different data systems into a unified system. It is intended to reduce duplication and information technology costs, and create a more streamlined and integrated federal acquisition process. GAO was asked to assess (1) the acquisition strategy being used to develop IAE; (2) progress that has been made in consolidating IAE systems; and (3) any challenges that may affect the completion of IAE. GAO analyzed program costs, schedules, contracts, acquisition documents, and briefings, and interviewed IAE program officials and contractors. The development of IAE has occurred in two stages using different acquisition strategies. In 2001, GSA began establishing a portfolio of standardized government-wide data systems through an acquisition strategy known as “adopt, adapt, acquire.” GSA adopted or adapted existing agency-specific systems for government-wide use, or if no viable system met an identified need, GSA acquired a new system. These efforts resulted in a portfolio of nine data systems. In 2008, GSA began consolidating its portfolio of systems into one integrated system called the System for Award Management (SAM). In developing the system, GSA hoped to eliminate redundancy, reduce costs, and improve efficiency. Unlike the existing systems that were each designed, developed, and operated by a single contractor, IAE relies on multiple vendors to perform these same tasks for SAM. 
The intent of this approach is to enhance competition and innovation and for the government to own the software associated with the system. SAM will be developed in phases. In each phase, capabilities from selected IAE systems will be added to SAM and those legacy systems will be shut down. GSA has made progress in developing SAM, and phase 1, consisting of three systems, is scheduled to be completed in May 2012. GSA also has established a computing center to host SAM and a help desk to support users. Since 2009, however, IAE costs have increased by $85 million, from about $96 million to $181 million. Most of the cost growth is due to GSA omitting hardware and other key components in acquiring a hosting infrastructure for SAM. External factors, including recent statutory requirements and policy changes, also have contributed to higher costs by increasing the use of the IAE systems beyond what was anticipated. Higher costs led to the need to supplement existing funding, but the program did not receive all of the additional funding it requested. In response to rising costs and limited funding, GSA officials have delayed SAM's development schedule by almost 2 years and taken other actions to reduce or defer costs where possible. Higher costs and constrained resources pose a risk to IAE going forward. GSA will need to continue operating the legacy IAE systems and contend with higher SAM development costs for several more years. While GSA has taken some steps to reduce costs, it has not reevaluated the business case for SAM or determined whether it is the most cost-effective alternative. Such a reevaluation is particularly important in light of the increased infrastructure costs, which are now a major impediment to completing SAM. In addition, although the SAM development phases have been pushed out several years, GSA has not modified its primary development contract to align the payment schedule with the delays. 
The program has continued to pay the same fixed price amount to the contractor for SAM development, operation, and maintenance even though there was little to operate and maintain for nearly 2 years. Aligning contract payments with schedule milestones will ensure that the government is not paying for work that has not yet been accomplished. GAO recommends that GSA reassess the IAE business case to determine whether the current acquisition strategy is the most cost effective alternative and if so, reevaluate the current hosting strategy and align contract payments with the program schedule. GSA agreed with GAO’s recommendations and indicated that it will take appropriate action.
The number of tax-related identity theft incidents (primarily refund or employment fraud attempts) identified by IRS has grown: 51,702 incidents in 2008, 169,087 incidents in 2009, and 248,357 incidents in 2010. Refund fraud can stem from identity theft when an identity thief uses a legitimate taxpayer’s name and Social Security Number (SSN) to file a fraudulent tax return seeking a refund. In these cases, the identity thief typically files a return claiming a refund early in the filing season, before the legitimate taxpayer files. IRS will likely issue the refund to the identity thief after determining the name and SSN on the tax return appear valid (IRS checks all returns to see if filers’ names and SSNs match before issuing refunds). IRS often first becomes aware of a problem after the legitimate taxpayer files a return. At that time, IRS discovers that two returns have been filed using the same name and SSN, as shown in figure 1. The legitimate taxpayer’s refund is delayed while IRS spends time determining who is legitimate. Employment fraud occurs when an identity thief uses a taxpayer’s name and SSN to obtain a job. IRS subsequently receives income information from the identity thief’s employer. After the victim files his or her tax return, IRS matches income reported by the victim’s employer and the thief’s employer to the tax return filed by the legitimate taxpayer, as shown in figure 2. IRS then notifies the taxpayer of unreported income because it appears the taxpayer earned more income than was reported on the tax return. Employment fraud causes tax administration problems because IRS has to sort out what income was earned by the legitimate taxpayer and what was earned by the identity thief. The name and SSN information used by identity thieves to commit refund or employment fraud are typically stolen from sources beyond the control of IRS. 
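The duplicate-filing condition described above, in which IRS discovers that two returns have been filed using the same name and SSN, can be sketched in a few lines. This is a minimal illustration; the field names and data layout are assumptions for the example, not IRS data formats:

```python
from collections import defaultdict

def find_duplicate_filings(returns):
    """Group filed returns by SSN and flag any SSN that appears on more
    than one return, the condition that first alerts IRS to possible
    refund fraud after the legitimate taxpayer files. Each return is a
    dict with illustrative 'ssn', 'name', and 'filed' fields."""
    by_ssn = defaultdict(list)
    for ret in returns:
        by_ssn[ret["ssn"]].append(ret)
    return {ssn: group for ssn, group in by_ssn.items() if len(group) > 1}

filings = [
    {"ssn": "123-45-6789", "name": "Identity Thief", "filed": "2011-01-20"},
    {"ssn": "123-45-6789", "name": "Legitimate Taxpayer", "filed": "2011-04-10"},
    {"ssn": "987-65-4321", "name": "Another Taxpayer", "filed": "2011-03-15"},
]
duplicates = find_duplicate_filings(filings)
# Only the SSN used on two returns is flagged; the earlier filing is
# typically the identity thief's, since thieves file early in the season.
```

As the report notes, this check can only fire after the second return arrives, which is why the legitimate taxpayer's refund is delayed while IRS sorts out who is who.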
IRS officials told us they are unaware of any incidents where information was stolen from IRS and used to commit employment or refund fraud. However, there are risks at IRS. In a recent audit, we found that although IRS has made progress in correcting previously reported information security weaknesses, it did not consistently implement controls intended to prevent, limit, and detect unauthorized access to its systems and information, including sensitive taxpayer information. In 2009, we also reported that third-party software used to prepare and file returns may pose risks to the security and privacy of taxpayer information. IRS agreed with our recommendations to address these and other issues. We recently followed up with IRS on this issue and learned that IRS has begun monitoring adherence to security and privacy standards in the tax software industry. In 2004, IRS developed a strategy to address the problem of identity theft–related tax administration issues. According to IRS, the strategy has evolved and continues to serve as the foundation for all of IRS’s efforts to provide services to victims of identity theft and to reduce the effects of identity theft on tax administration. Indicators—account flags that are visible to all IRS personnel with account access—are a key tool IRS uses to resolve and detect identity theft. IRS uses different indicators depending on the circumstances in which IRS receives indication of an identity theft–related problem. Once IRS substantiates any taxpayer-reported information, either through IRS processes or the taxpayer providing documentation of the identity theft, IRS will place the appropriate indicator on the taxpayer’s account and will notify the taxpayer. IRS will remove an indicator after 3 consecutive years if there are no incidents on the account or will remove an indicator sooner if the taxpayer requests it. The three elements of IRS’s strategy are resolution, detection, and prevention. Resolution. 
Identity theft indicators speed resolution by making a taxpayer’s identity theft problems visible to all IRS personnel with account access. Taxpayers benefit because they do not have to repeatedly explain their identity theft issues or prove their identity to multiple IRS units. Indicators also alert IRS personnel that a future account problem may be related to identity theft and help speed up the resolution of any such problems. Since our 2009 report, IRS developed a new, temporary indicator to alert all IRS units that an identity theft incident has been reported but not yet resolved. IRS officials told us that they identified a need for the new indicator based on their ongoing evaluation of their identity theft initiatives. The temporary indicator’s purpose is to expedite problem resolution and avoid taxpayers having to explain their identity theft issues to multiple IRS units. As discussed in our 2009 report, taxpayers with known or suspected identity theft issues can receive assistance by contacting the Identity Protection Specialized Unit. The unit operates a toll-free number taxpayers can call to receive assistance in resolving identity theft issues. Detection. IRS also uses its identity theft indicators to screen tax returns filed in the names of known refund and employment fraud victims. During the 2009, 2010, and 2011 filing seasons, IRS screened returns filed in the names of taxpayers with identity theft indicators on their accounts. There are approximately 378,000 such taxpayers. In this screening, IRS looks for characteristics indicating that the return was filed by an identity thief instead of the legitimate taxpayer, such as large changes in income or a change of address. If a return fails the screening, it is subject to additional IRS manual review, including contacting employers to verify that the income reported on the tax return was legitimate. In addition to U.S. 
taxpayers with indicators on their accounts, IRS officials also told us that they screened returns filed in the name of a large number—about 350,000—of Puerto Rican citizens who have had their U.S. SSNs compromised in a major identity theft scheme. As of May 12, 2011, 216,000 returns filed in 2011 failed the screens and were assigned for manual processing. Of these, IRS has completed processing 195,815 and found that 145,537 (74.3 percent) were fraudulent. In January 2011, IRS launched a pilot program for tax year 2010 returns (due by April 15, 2011) using a new indicator to “lock” SSNs of deceased taxpayers. If a locked SSN is included on a tax return, the new indicator will prompt IRS to automatically reject the return. PIPDS officials told us they intend to expand the pilot to include more SSNs of deceased taxpayers after analyzing the results of the initial pilot. A program IRS uses to identify various forms of refund fraud—including refund fraud resulting from identity theft—is the Questionable Refund Program. IRS established this program to screen tax returns to identify fraudulent returns, stop the payment of fraudulently claimed refunds, and, in some cases, refer fraudulent refund schemes to IRS’s Criminal Investigation offices. Prevention. As described in our 2009 report, IRS has an office dedicated to finding and stopping online tax fraud schemes. IRS also provides taxpayers with targeted information to increase their awareness of identity theft, tips and suggestions for safeguarding taxpayers’ personal information, and information to help them better understand tax administration issues related to identity theft. Appendix I summarizes information IRS and FTC provide to taxpayers to protect themselves against identity theft. Since our 2009 report, IRS began a pilot program providing some identity theft victims with a 6-digit Identity Protection Personal Identification Number (PIN) to place on their tax return. 
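The screening described above can be illustrated with a simple rule-based sketch. The 50 percent income-change threshold and the record fields are assumptions for illustration, not actual IRS screening rules; the closing arithmetic simply reproduces the 74.3 percent figure from the 2011 results:

```python
def fails_screening(current, prior):
    """Flag a return filed under a known victim's SSN for manual review
    when it shows characteristics the report associates with identity
    theft: a large change in income or a change of address. Threshold
    and fields are illustrative assumptions, not IRS rules."""
    income_change = abs(current["income"] - prior["income"]) / max(prior["income"], 1)
    return income_change > 0.50 or current["address"] != prior["address"]

prior_year = {"income": 40000, "address": "12 Oak St"}
suspect = {"income": 95000, "address": "99 Elm Ave"}  # big income jump, new address
flagged = fails_screening(suspect, prior_year)  # sent for manual review

# Of the 195,815 flagged returns IRS finished processing in 2011,
# 145,537 proved fraudulent:
fraud_rate = round(100 * 145537 / 195815, 1)  # 74.3 percent
```

Returns that fail such screens go to manual review, including contacting employers, which is why tightening the rules (discussed later in this statement) trades better detection for more false positives and delayed refunds.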
IRS officials told us they created the PIN based on their ongoing evaluation of their identity theft initiatives. When screening future years’ returns for possible identity theft, IRS will exclude returns with a PIN, which will help avoid the possibility of a “false positive” and a delayed tax refund. IRS sent letters containing an identity theft PIN to 56,000 taxpayers in the 2011 filing season. IRS will provide taxpayers a new PIN each year for a period of 3 years following an identity theft. IRS’s initiatives to address identity theft are limited in part because tax returns and other information submitted to, and in some cases generated by, IRS are confidential and protected from disclosure, except as specifically authorized by statute. As discussed in more detail in our 2009 report, IRS can disclose identity theft–related events that occur on a taxpayer’s account to the taxpayer, such as the fact that an unauthorized return was filed using the taxpayer’s information or that the taxpayer’s SSN was used on another return. However, IRS cannot disclose to the taxpayer any other information pertaining to employment or refund fraud, such as the perpetrator’s identity or any information about the perpetrator’s employer. Additionally, IRS has limited authorities to share identity theft information with other federal agencies. When performing a criminal investigation, IRS can make only investigative disclosures, that is, the sharing of specific, limited information necessary for receiving information from other federal agencies that might support or further IRS’s investigation. Disclosure of taxpayer information to state and local law enforcement agencies is even more limited. Because of the timing of tax return filing, IRS is often unable to detect suspicious cases until well after the fraud occurred. Validating the identity theft and substantiating the victim’s identity takes further time. 
For example, IRS may not be able to detect employment fraud until after the following year’s tax filing deadline of April 15 when it matches income reported by employers against taxpayers’ filed returns. It is only after IRS notifies a taxpayer of unreported income that IRS may learn from the taxpayer that the income was not the taxpayer’s and that someone else must have been using his or her identity. By the time both the victim and IRS determine that an identity theft incident occurred, well over a year may have passed since the employment fraud. IRS officials told us that IRS pursues criminal investigations of suspected identity thieves in only a small number of cases. IRS’s Criminal Investigations (CI) Division’s investigative priorities include tax crimes, such as underreporting income from legal sources; illegal source financial crimes; narcotics-related financial crimes; and counterterrorism financing. In fiscal year 2010, CI initiated 4,706 investigations of all types, a number far smaller than the total number of identity theft–related refund and employment fraud cases identified in that year. Also, the decision to prosecute identity thieves does not rest with IRS. CI conducts investigations and refers cases to the Department of Justice (DOJ), which is responsible for prosecuting cases in the federal courts. IRS officials said that the small number of tax-related identity theft cases that they investigate recognizes that DOJ has to conclude that the case is of sufficient severity that it should be pursued in the federal courts before it will be prosecuted. According to data from CI included in our prior report, the median amount of suspected identity theft–related refunds identified in the 2009 filing season was around $3,400. CI has investigated tax-related identity theft cases that DOJ has successfully prosecuted. 
In our prior report we cited the example of a former Girl Scout troop leader serving 10 years in federal prison for stealing the SSNs of girls in her troop and then claiming more than $87,000 in fraudulent tax refunds. Options exist, now and in the future, to improve detection of identity theft–related tax fraud, but they come with trade-offs. Known identity theft victims. IRS could screen returns filed in the names of known identity theft victims more tightly than is currently done. More restrictive screening may detect more cases of refund fraud before IRS issues refunds. However, more restrictive screening will likely increase the number of legitimate returns that fail the screenings (false positives). Since returns that fail screening require a manual review, this change could harm innocent taxpayers by causing delays in their refunds. Using more restrictive rules would also place additional burden on employers because IRS contacts employers listed on all returns that fail screening. All taxpayers. Beyond screening returns with known tax-related identity theft issues, screening all tax returns for possible refund fraud would pose similar trade-offs, but on a grander scale. For example, as noted above, one way to check for identity theft is to look for significant differences between current year and prior year tax returns, but this could be confounded by a large number of false positives. IRS officials told us that in 2009 there were 10 million address changes, 46 million changes in employer, and millions of deaths and births. Checking all returns that reflect these changes for possible refund fraud could overwhelm IRS’s capacity to issue refunds to legitimate taxpayers in a timely manner. Looking Forward. IRS’s identity protection strategy and the creation of PIPDS were part of an effort to more efficiently identify refund and employment fraud as well as to assist innocent taxpayers. 
Since adopting the recommendation in our 2009 report to use performance measures to assess effectiveness, IRS has followed through, using its improved performance information to identify additional steps it could take. These include the new indicators for taxpayer accounts, improved routing of suspect returns, and identity protection PINs. However, none of these steps will completely eliminate refund or employment fraud. By continuing to monitor the effectiveness of its identity theft initiatives, IRS may find additional steps to reduce the problems faced by both taxpayers and IRS. Looking further forward, other long-term initiatives underway at IRS have at least some potential to help combat identity theft–related fraud. In April 2011, the Commissioner of Internal Revenue gave a speech about a long-term vision to increase up-front compliance activities during returns processing. One example is to match information returns with tax returns before refunds are issued. Before this could happen, IRS would have to make significant changes. Third-party information returns would have to be filed with IRS earlier in the filing season. IRS would also have to improve its automated processing systems; IRS’s current Customer Account Data Engine (CADE 2) effort is one key step. While these efforts are part of a broad compliance improvement vision, they could also detect some identity theft–related fraud. If, for example, IRS could match employer information to tax returns before refunds are issued, identity thieves could not use phony W-2s to claim fraudulent refunds. Chairman Nelson, Ranking Member Crapo, and Members of the Subcommittee, this completes my prepared statement. I would be happy to respond to any questions you may have at this time. For further information on this testimony, please contact James R. White at (202) 512-9110 or [email protected]. 
In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. In addition to the individual named above, David Lewis, Assistant Director; Shannon Finnegan, analyst-in-charge; Michele Fejfar; Donna Miller; Erika Navarro; Melanie Papasian; and Sabrina Streagle made key contributions to this report. Both the Internal Revenue Service (IRS) and the Federal Trade Commission (FTC) provide helpful information to taxpayers to deter, detect, and defend against identity theft. IRS provides taxpayers with targeted information to increase their awareness of identity theft, tips and suggestions for safeguarding taxpayers’ personal information, and information to help them better understand tax administration issues related to identity theft. For example, IRS has published on its website the list in table 1 below. The FTC operates a call center for identity theft victims where counselors tell consumers how to protect themselves from identity theft and what to do if their identity has been stolen (1-877-IDTHEFT; TDD: 1-866-653-4261; or www.ftc.gov/idtheft). The FTC also produces publications on identity theft, including Take Charge: Fighting Back Against Identity Theft. This brochure provides identity theft victims information on 1. immediate steps they can take, such as placing fraud alerts on their credit reports; closing accounts; filing a police report; and filing a complaint with the FTC; 2. their legal rights; 3. how to handle specific problems they may encounter when clearing their name, including disputing fraudulent charges on their credit card accounts; and 4. minimizing recurrences of identity theft. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Identity theft is a serious and growing problem in the United States. Taxpayers are harmed when identity thieves file fraudulent tax documents using stolen names and Social Security numbers. In 2010 alone, the Internal Revenue Service (IRS) identified over 245,000 identity theft incidents that affected the tax system. The hundreds of thousands of taxpayers with tax problems caused by identity theft represent a small percentage of the expected 140 million individual returns filed, but for those affected, the problems can be quite serious. GAO was asked to describe, among other things, (1) when IRS detects identity theft–based refund and employment fraud, (2) the steps IRS has taken to resolve, detect, and prevent innocent taxpayers' identity theft–related problems, and (3) constraints that hinder IRS's ability to address these issues. GAO's testimony is based on its previous work on identity theft. GAO updated its analysis by examining data on identity theft cases and interviewing IRS officials. GAO makes no new recommendations but reports on IRS's efforts to address GAO's earlier recommendation that IRS develop performance measures and collect data suitable for assessing the effectiveness of its identity theft initiatives. IRS agreed with and implemented GAO's earlier recommendation. Identity theft harms innocent taxpayers through employment and refund fraud. In refund fraud, an identity thief uses a taxpayer's name and Social Security Number (SSN) to file for a tax refund, which IRS discovers after the legitimate taxpayer files. In employment fraud, an identity thief uses a taxpayer's name and SSN to obtain a job. When the thief's employer reports income to IRS, the taxpayer appears to have unreported income on his or her return, leading to enforcement action. IRS has taken multiple steps to resolve, detect, and prevent employment and refund fraud: Resolve: IRS marks taxpayer accounts to alert its personnel of a taxpayer's identity theft. 
The purpose is to expedite resolution of existing problems and alert personnel to potential future account problems. Detect: IRS screens tax returns filed in the names of known refund and employment fraud victims. Prevent: IRS provides taxpayers with information to increase their awareness of identity theft, including tips for safeguarding personal information. IRS has also started providing identity theft victims with a personal identification number to help identify legitimate returns. IRS's ability to address identity theft issues is constrained by (1) privacy laws that limit IRS's ability to share identity theft information with other agencies; (2) the timing of fraud detection, as more than a year may have passed since the original fraud occurred; (3) the resources necessary to pursue the large volume of potential criminal refund and employment fraud cases; and (4) the burden that stricter screening would likely cause taxpayers and employers since more legitimate returns would fail such screening.
Farming has always been an inherently risky enterprise because farmers operate at the mercy of nature and frequently are subjected to weather-related perils such as droughts, floods, hurricanes, and other natural disasters. Since the 1930s, many farmers have been able to transfer part of the risk of loss in production to the federal government through subsidized crop insurance. Major legislation enacted in 1980 and 1994 restructured the crop insurance program. The 1980 legislation enlisted, for the first time, private insurance companies to sell, service, and share the risk of federal insurance policies. Subsequently, in 1994, the Federal Crop Insurance Reform and Department of Agriculture Reorganization Act revised the program to offer farmers two primary levels of insurance coverage, catastrophic and buyup. Catastrophic insurance is designed to provide farmers with protection against extreme crop losses for a small processing fee. Buyup insurance provides protection against more typical and smaller crop losses in exchange for a producer-paid premium. The government subsidizes the total premium for catastrophic insurance and a portion of the premium for buyup insurance. Farmers who purchase buyup crop insurance must choose both the coverage level (the proportion of the crop to be insured) and the unit price (such as the price per bushel) at which any loss is calculated. With respect to the level of production, farmers can choose to insure as much as 75 percent of normal production or as little as 50 percent of normal production at different price levels. With respect to the unit price, farmers choose whether to value their insured production at USDA’s full estimated market price or at a percentage of the full price. In recent years, USDA has introduced a new risk management tool called revenue insurance. Unlike traditional crop insurance, which insures against losses in the level of crop production, revenue insurance plans insure against losses in revenue. 
The plans protect the farmer from the effects of declines in crop prices or declines in crop yields, or both. Like traditional buyup insurance, the government subsidizes a portion of the premiums. One of the plans, called Crop Revenue Coverage, is available in many states for major crops. Two other plans, called Income Protection and Revenue Assurance, are available to farmers in only limited areas. USDA reimburses the insurance companies for the administrative expenses associated with selling and servicing crop insurance policies, including the expenses associated with adjusting claims. Between 1995 and 1998, USDA paid participating insurance companies about $1.7 billion in administrative expense reimbursements. In addition to receiving an administrative expense reimbursement, the insurance companies share underwriting risk with USDA and can earn or lose money according to the claims they must pay farmers for crop losses. Companies earn underwriting profits when the premiums exceed the crop loss claims paid for those policies on which the companies retain risk. They incur underwriting losses when the claims paid for crop losses exceed the premiums paid for the policies that the companies retained. Between 1995 and 1998, USDA paid participating insurance companies about $1.1 billion in underwriting profits. Critical to the success of achieving an actuarially sound crop insurance program is aligning premium rates with the risk each farmer represents. The riskiness of growing a particular crop varies from location to location, from farm to farm, and from farmer to farmer. If the rates are too high for the risk represented, farmers are less likely to purchase insurance, lowering the revenue from premiums and the usefulness of the program to farmers. Conversely, if the rates are too low, farmers are more likely to purchase crop insurance, but because the rates are too low, the revenue from premiums will be insufficient to cover the claims. 
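The underwriting arrangement described above reduces to a simple calculation on the book of business a company retains. A minimal sketch; the dollar amounts are illustrative, not figures from the report:

```python
def underwriting_result(retained_premiums, claims_paid):
    """Underwriting profit (positive) or loss (negative) on the policies
    an insurance company retains: premium income minus the crop-loss
    claims paid on those policies."""
    return retained_premiums - claims_paid

# A company retaining $10 million in premiums against $8.5 million in
# claims earns a $1.5 million underwriting profit; with the figures
# reversed, it incurs an equal underwriting loss.
profit = underwriting_result(10_000_000, 8_500_000)
loss = underwriting_result(8_500_000, 10_000_000)
```

This is why, as the next paragraphs explain, aligning premium rates with risk matters: systematically underpriced premiums push this calculation negative across the program.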
Therefore, USDA sets different premium rates for the various coverage and production levels, which vary by crop, location, farm, and farmer. Consequently, hundreds of thousands of premium rates are in effect. To set premium rates, USDA calculates a basic rate for each crop in each county for the farmers who buy insurance at the 65-percent coverage level and whose normal production level is about equal to the average production in the county. From this basic rate, USDA makes adjustments to establish rates for other coverage levels and for those farmers whose production levels are higher or lower than the county’s average. In 1995, we reported that for the six crops we reviewed—barley, corn, cotton, grain sorghum, soybeans, and wheat—basic premium rates overall were 89 percent adequate, on average, to meet the Congress’s legislative requirement of actuarial soundness. However, we found that while overall premiums were approaching actuarial soundness, USDA’s rates for some crops and locations and for some coverage and production levels were too low. For the 183 state crop programs we examined, 54 had basic premium rates that were adequate to achieve actuarial soundness. These 54 programs were generally those that had the greatest volume of insurance. For the remaining 129 programs, 40 had premium rates that were near the target level. However, the other 89 programs, representing about 24 percent of the crop insurance premiums for the six crops in 1994, had basic rates that were less than 80 percent adequate for actuarial soundness. We reported that premium rates that were too low generally occurred when the historical databases used for establishing rates added or deleted years of severe losses, thus affecting USDA’s estimate of expected crop losses. USDA did not increase the rates where necessary. 
For example, for one of the crops we reviewed, USDA did not increase the rates as much as it could have when (1) severe losses from 1993 were added to the database for establishing the 1995 rates and (2) a year from the 1970s when losses were lower was deleted from the database. According to USDA, it had not sufficiently raised rates out of concern that higher rates would discourage farmers from buying crop insurance. Furthermore, when we examined the rates at various levels of coverage and production, we found that the rates were (1) too high for coverage at the 75-percent level and (2) too low for farmers with above-average crop yields. As a result, the rates for both coverage and production levels were not always aligned with risk. This occurred because USDA did not periodically review and update the calculations it used to adjust rates above and below the basic rate. To set premium rates for the 75-percent coverage level, USDA applies pre-established mathematical factors to the basic rate. However, these factors have not resulted in rates that are aligned with risk. For crops insured at the 75-percent coverage level, USDA set premium rates ranging from 19 to 27 percent more than required. As a result, the 1994 income from premiums was about $30 million more than required for this coverage. Although grain sorghum had the greatest percentage of rates in excess of those required, corn had the greatest amount of additional premium income because its program is much larger. USDA also adjusts the basic rates for a farmer’s individual crop yields. USDA’s basic rate applies to the farmer whose average yield is about equal to the average for all producers in the county. However, many farmers’ average yield is above or below the county’s average, and USDA’s research shows that the higher a farmer’s yield, the lower the chance of a loss. Therefore, USDA establishes rates for different yield levels using a mathematical model. 
The rates per $100 of insurance coverage decrease as a farmer’s average yield increases; however, the mathematical model overstated the rate decrease. According to our analysis, the rates at higher average crop yields were too low for the six crops reviewed. We reported that for these above-average yields, USDA’s rates in 1995 should have been from 13 to 33 percent higher than they were. Subsequent to our 1995 report, USDA took action to increase premium rates an average of 6 percent and developed a plan to periodically evaluate the mathematical factors used to set rates. These actions have contributed to the federal crop insurance program’s achieving a loss ratio well below the target in recent years, thereby improving the program’s financial soundness. However, although overall premium rates appear adequate, rates for crops in some states remain too low. For example, since 1996, the loss ratio has averaged 1.36 for cotton in Texas and 1.45 for peanuts in Alabama, well above the target loss ratio. Thus, premium rates for these farmers may be too low. Consequently, USDA needs to continue to monitor and adjust premium rates to ensure they are appropriately aligned with risk. 
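The loss ratio used above is claims paid divided by premium income, so a ratio above 1.0 signals that rates are too low for the risk insured. A minimal sketch, reproducing the cited cotton and peanut ratios from illustrative dollar amounts:

```python
def loss_ratio(claims_paid, premium_income):
    """Loss ratio = claims paid / premium income. A ratio above 1.0
    means more was paid out in claims than was collected in premiums,
    i.e., premium rates were too low for the risk insured."""
    return claims_paid / premium_income

# A 1.36 ratio (Texas cotton since 1996) means $1.36 in claims was paid
# for every $1.00 of premium collected; 1.45 (Alabama peanuts) means $1.45.
cotton_ratio = loss_ratio(136, 100)
peanut_ratio = loss_ratio(145, 100)
```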
The expenses that could not be reasonably associated with the sale and service of federal crop insurance included the following: payments of $12 million to compensate executives of an acquired company to refrain from joining or starting competing companies, fees of about $11 million paid to other insurance companies to protect against underwriting losses, bonuses of about $11 million tied to company profitability, management fees of about $1 million assessed by parent companies with no identifiable benefit to subsidiary crop insurance companies, and lobbying expenditures of about $400,000. In addition, we found a number of expenses reported by the companies that, while in categories associated with the sale and service of crop insurance, seemed to be excessive under a taxpayer-supported program. These expenses included agents’ commissions of about $6 million, paid by one company, that exceeded the industry standard. Thus, we reported that opportunities existed for the government to reduce its reimbursement rate for administrative expenses while still adequately reimbursing companies for the reasonable expenses of selling and servicing crop insurance policies. Subsequent to our report, the Agricultural Research, Extension, and Education Reform Act of 1998 revised reimbursement rates downward to 24.5 percent of premiums for traditional buyup insurance. However, as changes are made to the crop insurance program that increase participation and sales volume, further downward adjustments to the reimbursement rate may be warranted. We also reported that although the current arrangement for reimbursing companies for their administrative expenses has certain advantages, including ease of administration, expense reimbursements based on a percentage of premiums do not necessarily reflect the amount of work involved to sell and service crop insurance policies and may create incentives to focus sales to larger farmers. 
Alternative reimbursement arrangements, such as (1) capping the reimbursement per policy and (2) paying a flat dollar amount per policy plus a reduced fixed percentage of premiums, offer the potential to have reimbursements more reasonably reflect expenses and encourage more service to smaller farmers than does the current arrangement. While these alternative reimbursement methods may result in lower cost reimbursements to insurance companies, they may increase USDA’s own administrative expenses for reporting and compliance. In 1995, we found that companies generally preferred USDA’s current reimbursement method because of its administrative simplicity. In 1997, we also reported that the government’s costs to deliver catastrophic insurance policies in 1995 were higher through private companies than through the local offices of USDA’s Farm Service Agency. The basic cost to the government for selling and servicing catastrophic crop insurance was comparable for both delivery systems. However, when private companies delivered the insurance, they received an estimated $45 million underwriting gain, which did not apply to USDA’s delivery. Underwriting gains are not guaranteed and vary annually, depending on crop losses. Our report did not conclude or recommend that the insurance industry’s role in catastrophic insurance delivery should be reduced. However, we recommended that USDA more closely monitor the level of underwriting gain paid to the participating insurance companies. For 1996, 1997, and 1998, underwriting gains for catastrophic coverage totaled $58 million, $87 million, and $105 million, respectively. Beginning with crops harvested in 1997, the Federal Agriculture Improvement and Reform Act of 1996 required that USDA phase out its delivery of catastrophic crop insurance in areas that have sufficient private company providers. 
In May 1997, the Secretary of Agriculture authorized the movement of all catastrophic insurance policies away from USDA to commercial delivery. In 1998, we reported shortcomings in the way premium rates are established for each of the three revenue insurance plans we reviewed. Appropriate methods for setting rates for these plans are critical to ensuring the financial soundness of the crop insurance program over time. We reported that the Crop Revenue Coverage plan did not base its rate structure on the interrelationship between crop prices and farm-level yields—an essential component of actuarially sound rate setting. For example, a decline in yields is often accompanied by an increase in prices, which mitigates the impact of the decline in yields on a farmer’s revenue. Because this plan did not recognize this interrelationship, the premium adjustments may not be sufficient over the long term to cover claims payments and may not be appropriate to the risk each farmer presents. We were not able to determine whether premium rates for this plan were too high or too low. In contrast, the rate-setting approaches for the Revenue Assurance and Income Protection plans were based on a likely statistical distribution of revenues that reflects the interrelationship between crop prices and yields. However, the two plans had several shortcomings that were not as serious as the problem we identified for Crop Revenue Coverage. For example, in constructing its revenue distribution, we found that the Revenue Assurance plan used only 10 years of yield data (1985-94), which was not a sufficient historical record to capture the fluctuations in yield over time. Furthermore, 3 of these 10 years had abnormal yields: 1988 and 1993 had abnormally low yields, and 1994 had abnormally high yields. Additionally, Income Protection based its estimate of future price increases or decreases on the way that prices moved in the past. 
This approach could be a problem because price movements in the past occurred in the context of past government programs, such as commodity income-support payments, which were eliminated by the 1996 farm bill. In the absence of the above government programs, the price movements may have been considerably more pronounced. While favorable weather and stable crop prices generated very favorable claims experience over the first 2 years that the plans were available to farmers, these shortcomings raise questions about whether the rates established for each plan will be actuarially sound and fair—that is, appropriate to the risk each farmer presents over the long term. Furthermore, while the plans were initially approved only on a limited basis, USDA authorized the substantial expansion of Crop Revenue Coverage before the initial results of claims experience were available. In doing so, USDA was acting within its authority to approve privately developed crop insurance plans in response to strong demand from farmers. USDA’s Office of General Counsel advised against the expansion, noting that an expansion without any data to determine whether the plans or rates are sound might expose the government to excessive risk. While Crop Revenue Coverage was expanded rapidly, Revenue Assurance and Income Protection essentially remain pilot plans with no nationwide availability. As a result of the shortcomings with the revenue insurance plans’ rating methods and to ensure premiums were appropriate to the risk each farmer presents, we recommended that the Secretary of Agriculture direct the Administrator of the Risk Management Agency to address the shortcomings in the methods used to set premiums. Specifically, with respect to all three plans, we recommended that the Secretary direct the Risk Management Agency to reevaluate the methods and data used to set premium rates to ensure that each plan is based on the most actuarially sound foundation. 
With respect to Crop Revenue Coverage, which does not incorporate the interrelationship between crop prices and farm-level yields, we recommended that the Risk Management Agency direct the plan’s developer to base premium rates on a revenue distribution or another appropriate statistical technique that recognizes this interrelationship. While USDA subsequently took action to improve the actuarial soundness of the Revenue Assurance plan, it has not, to date, acted on our recommendations regarding the other two plans. As the Congress considers proposals to reform the federal crop insurance program and improve the safety net for farmers, the issues and some of the recommendations in our reports remain important to the success of the program. Specifically, premiums in all areas of the country should be set at levels that are actuarially sound and represent the risk each farmer brings to the program. Furthermore, continued oversight of the reasonableness of the program’s administrative reimbursement rate is necessary. Increased program participation and sales volume that could result from crop insurance reform may lead to lower delivery costs, warranting a downward adjustment in the rate. In addition, USDA needs to closely monitor the catastrophic insurance program to ensure that over time the underwriting gain earned by insurance companies is not excessive. Finally, before revenue insurance plans are expanded to cover new crops, USDA needs to ensure that the plans are based on an actuarially sound foundation. 
Pursuant to a congressional request, GAO discussed the Department of Agriculture's (USDA) federal crop insurance program, focusing on whether USDA: (1) has set adequate insurance rates to achieve the legislative requirement of actuarial soundness; (2) appropriately reimburses participating crop insurance companies for their administrative costs; (3) can deliver catastrophic crop insurance at less cost to the government than private insurance companies; and (4) has established methodologies in the revenue insurance plans that set sound premium rates. GAO noted that: (1) GAO has reported that several aspects of USDA's crop insurance program are of concern and need attention; (2) in 1995, GAO reported that premiums charged farmers for crop insurance were not adequate to achieve the actuarial soundness mandated by Congress; (3) GAO's review showed that the basic premium rates for the six crops reviewed were approaching actuarial soundness in 1995, but USDA's rates for some crops and locations and for some coverage and production levels were well below the legislative requirement; (4) about 24 percent of the crop insurance premiums for the six crops GAO reviewed had basic rates that were less than 80-percent adequate for actuarial soundness; (5) USDA subsequently took actions to improve the program's actuarial soundness, but some rates remain too low; (6) the government's administrative expense reimbursement to insurance companies--31 percent of premiums--were greater than the companies' reported expenses to sell and service federal crop insurance; (7) GAO stated that some of these reported expenses did not appear to be reasonably associated with the sale and service of federal crop insurance; (8) the Agricultural Research, Extension, and Education Reform Act of 1998 subsequently revised reimbursement rates downward to 24.5 percent of premiums for most crop insurance; (9) increased program participation and sales volume that could result from crop insurance reform may 
lead to lower delivery costs, warranting a downward adjustment in the rate; (10) GAO reported that the government's costs to deliver catastrophic insurance in 1995 were higher through private companies than through USDA; (11) although the basic costs associated with selling and servicing catastrophic crop insurance through USDA and private companies were comparable, delivery through USDA avoids paying an underwriting gain to companies in years when there is a low incidence of catastrophic loss claims; (12) GAO reported its doubts about whether new USDA-supported revenue insurance plans will be actuarially sound over the long term and appropriate to the risk each farmer presents to the program; and (13) with respect to the most popular plan, Crop Revenue Coverage, GAO recommended that USDA's Risk Management Agency require the plan's developer to base premium rates on a revenue distribution or other appropriate statistical technique that recognizes the interrelationship between farm-level yields and expected crop prices.
A copyright is an intellectual property interest in an original work of authorship fixed in any tangible medium of expression, including books, movies, photographs, and music, from which the work can be perceived, reproduced, or otherwise communicated either directly or with the aid of a machine or device. The Copyright and Patent Clause of the U.S. Constitution authorizes Congress to “promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.” In the music industry, copyrights confer on their owners certain exclusive rights, such as the right to authorize or control the reproduction, distribution, and public performance of a piece of music. The reproduction and distribution of recorded music includes the sale of copies in a variety of formats, such as compact discs (CD), vinyl records, and digital downloads. The public performance of music may include broadcast transmissions, such as on AM or FM radio, or digital transmissions, such as satellite radio. Copyright law applies to recorded music in two ways: the musical work and the sound recording of that work. The musical work refers to the notes and lyrics of a song, and the copyright holder is often the publisher, songwriter, or composer. The performance of the lyrics and melody in a fixed recording, such as the recording on a CD or vinyl record, is protected as the sound recording. Record companies are often the owners of the copyright to the sound recording. Typically, separate individuals or entities hold the copyrights for the musical work and sound recording of a piece of music, although one individual or entity can hold both copyrights. For example, the song, “I Will Always Love You,” was part of the soundtrack for the movie, The Bodyguard, in 1992. The copyright holder of the musical work is the songwriter, Dolly Parton, who owns both the words and music. 
However, the copyright holder of the sound recording, as performed by Whitney Houston, is the record company, Sony Music, to whom the soundtrack is registered. Copyright holders may use a license to grant third parties legal permission to use musical works and sound recordings. A license provides legal permission for the use of copyrighted material by a group or an individual other than the copyright holder. Permission for the use of the material typically requires the payment of a royalty and compliance with other conditions of the license. As shown in table 2, third parties, such as AM and FM broadcast radio, satellite radio, and Internet radio, must obtain a license for the public performance of a copyrighted musical work. However, under current law, copyright protection does not apply and, therefore, a license is not required to play sound recordings over broadcast radio. Royalties for the public performance of musical works and sound recordings are collected and distributed by performing rights organizations (PRO) and SoundExchange, respectively. PROs such as the American Society of Composers, Authors, and Publishers (ASCAP), Broadcast Music, Inc. (BMI), and SESAC negotiate licenses and distribute royalties for the public performance of musical works. These PROs represent songwriters, publishers, and other copyright holders of musical works. SoundExchange, which was originally established by the Recording Industry Association of America (RIAA), is now an independent nonprofit organization that negotiates and administers licenses and royalties for the public performance of the sound recording for digital transmissions, such as satellite radio. SoundExchange represents record companies, featured musicians and performers, and other copyright holders of sound recordings. Various individuals and groups from the recording industry are involved with the creation of music and receive revenues from royalties and sales. 
The featured musicians and performers are the bands and artists whose work is heard on broadcast radio and whose sound recordings are available for purchase. Session or background musicians and performers are the individuals who primarily work in recording studios and perform the music heard on a recording or provide background vocals to a recording. In addition, songwriters, composers, and publishers are involved with writing the words and melody of a song. These individuals and groups share in the revenues generated through royalties paid by broadcast radio and digital music services, and from record sales. Figure 1 shows how recording industry revenues are distributed among the various entities involved in the creation of a recording. According to RIAA, since the late 1990s, the recording industry has experienced declining album sales. As shown in figure 2, revenue from the sale of physical albums, such as CDs and cassettes, has declined by approximately 60 percent from 1999 to 2008. Several factors related to the development of digital technology have contributed to this decline. First, consumers increasingly purchase singles instead of albums. The sale of digitally downloaded music, which represented approximately 30 percent of sales in 2008, has partially offset the decline in physical sales; however, the revenue generated from digital sales has not fully offset the revenue lost due to the decline in physical album sales because most digital downloads are single songs, which often sell for 99 cents, and not albums, which often sell for $10 or more. Second, stakeholders with whom we spoke said that illegal downloading, and the ability to acquire music on-demand without paying for a copy to be retained, has led to a culture where younger listeners may expect to obtain music at no or minimal cost. 
Third, technologies, such as the Internet, enable listeners to hear music on-demand without buying it; this technology has shifted listeners’ behavior to music “access” and away from the purchasing behavior that historically supported the recording industry. According to the Copyright Office, these factors appear to represent permanent changes, and not temporary changes caused by current economic conditions. As of November 2009, the broadcast radio industry in the United States consists of 14,441 licensed broadcast radio stations in operation. Of all licensed stations in operation, nearly 70 percent of stations have music formats, and almost 20 percent have nonmusic formats such as news, talk, or sports; 77 percent of stations are commercial and 23 percent are noncommercial (see table 3). Since 2006, the broadcast radio industry has experienced declining advertising revenue. As shown in figure 3, from 2003 through 2009, radio industry annual revenues have declined 24 percent from their peak of $18.1 billion. For commercial broadcast radio stations, advertising represents the primary source of revenue, and stakeholders indicated two factors that have contributed to the decline in the radio industry’s advertising revenue: the current decline in the economy and the fragmentation of consumers across a greater number of media platforms, such as the Internet and mobile devices. The broadcast radio industry benefits from its relationship with the recording industry by using sound recordings to attract listeners which, in turn, generates advertising revenue for commercial radio stations. Advertising is the primary source of revenue for commercial radio stations, and the average annual revenues of music stations are $225,000 higher than the average annual revenues of nonmusic stations. The recording industry may benefit by receiving broadcast radio airplay, which can promote music sales. 
Industry stakeholders believe that radio airplay can promote sales, and past and current business practices support this conclusion. However, we found the relationship between radio airplay and sales to be unclear. Broadcast radio stations use content to attract listeners and generate revenue from advertisers that seek to reach listeners. As mentioned earlier, advertising is the primary source of revenue for commercial broadcast radio stations, and sound recordings are a form of content that can attract listeners. Radio stations use content to attract as many listeners as possible and an audience whose demographics will appeal to advertisers, as this will help stations maximize revenues. The rates that a station obtains for advertising time depend on the station’s ability to attract listeners in the advertiser’s target demographic segment, the length of the advertisement spot, and the size of the market, with stations in larger markets typically receiving higher rates than those in smaller markets. For example, a station that attracts a large market share of adult female listeners will be more desirable to advertisers selling a product targeted to adult females. Broadcast radio stations generate more revenue from music than other types of content, notably in markets with a large audience. At an aggregate level, we found that approximately 70 percent of commercial radio stations broadcast music, itself an indication of the popularity of music radio, and that these stations generated approximately 80 percent of all commercial broadcast radio revenues. Thus, at an aggregate level, radio stations that broadcast music generate more revenues than stations using other forms of content. We also estimated revenues at the station level. Controlling for factors that influence a station’s revenues, such as strength of the station’s signal, we found that, on average, stations with a music format generated approximately $225,000 more in annual revenues than nonmusic stations. 
However, this difference can vary based on the size of the population that the station serves. As shown in table 4, a music station with a coverage population of approximately 313,000 or more individuals (representing the top quartile of stations based on coverage population), will generate, on average, approximately $826,000 more in annual revenues than a nonmusic station, while a music radio station with a coverage population of approximately 26,000 individuals or less (representing the smallest quartile of stations based on coverage population), will earn on average approximately $206,000 more in annual revenues than a nonmusic station. Broadcast radio industry stakeholders acknowledged that they benefit from using music as content, but said that they already provide remuneration by purchasing musical work licenses. As previously indicated, music has two types of copyright protections, the musical work and the sound recording. Broadcast radio stations purchase a license for the use of the musical work, which allows radio stations to legally broadcast music. The cost for individual radio stations to purchase a musical work license varies, but we estimate the industry pays approximately 3 percent of its annual revenues to purchase musical work licenses. Broadcast radio stations also benefit from and provide compensation for nonmusic content, such as syndicated programming. The mechanism that broadcast radio stations use to provide compensation for nonmusic content differs from that of music content. Broadcast radio industry stakeholders with whom we spoke said that the cost of syndicated programs, such as those hosted by Rush Limbaugh and Alan Colmes, are typically negotiated with each station by the programmer. The negotiated price depends on the station’s audience size, among other factors. According to one broadcast industry stakeholder, radio stations with smaller audiences generally pay lower licensing fees. 
Industry stakeholders also told us that in addition to the licensing fee, some syndicated programs require stations to provide advertising time during the program with the programmer receiving revenues from the advertising. Because these contracts are private and stations do not report revenues for specific programs, we are unable to determine the relative costs and benefits stations derive from syndicated programs. Stakeholders from both the recording and broadcast radio industries agree that broadcast radio airplay can promote music sales, and past and current industry practices support this conclusion. A 2010 Arbitron study, as well as stakeholders from both the recording and radio industries, indicates that broadcast radio is the most common means by which listeners discover new sound recordings. Broadcast radio stations facilitate this discovery process by announcing artists’ new albums before or after broadcasting sound recordings. Also, repeated airplay increases exposure and raises awareness of sound recordings. Stakeholders told us that as listeners’ awareness increases, record companies and musicians benefit from corresponding increases in album sales. Furthermore, record companies’ past and current business practices imply that the recording industry benefits from broadcast radio airplay. The historical record of illegal payola activity shows that the recording industry has been willing to compensate the broadcast radio industry for airplay. In addition, record companies employ staff dedicated to the promotion of music to radio stations. To assess the relationship between broadcast radio airplay and music sales, we conducted several empirical analyses, and found the relationship to be unclear. Airplay and sales of digital singles. We found no consistent pattern between the cumulative broadcast radio airplay and the cumulative number of digital single sales. 
We tracked the spins and sales of 12 songs selected based on age and genre, among other factors, in the 10 largest DMAs for the first quarter of 2010 (see table 5). The songs consisted of sound recordings by different artists, across different genres, and of different ages. We compared each song’s spin count against the digital sales of the single. Although the current songs in our sample consistently received more airplay than catalog (i.e., older) songs of the same genre, we found that the digital single sales per spin vary widely. For example, a recently released Latin song was played on broadcast radio over 4,600 times but sold less than 1 digital single per spin. In contrast, an R&B/Hip Hop song released more than 9 years ago received fewer than 1,100 spins but sold almost 13 digital singles per spin. Airplay and initial album release. We found the relationship between national sales of a newly released album and national airplay of all songs on the album to be unclear. We examined a sample of six albums released between February 1 and February 14, 2010 (for a full description of all albums sampled, see appendix III). We found that album sales peaked shortly after the album’s release then decreased, irrespective of artist. For example, as shown in figure 4 below, Sade’s “Soldier of Love” album sold more than nine times as many copies in the week it was released as were sold 1 month later. The relationship between (1) the broadcast radio airplay preceding and immediately following the album release and (2) these album sales is unclear. While the sound recordings from each album received airplay prior to the albums’ releases, we are unable to quantify how much, if any, of the initial spike in album sales was attributable to broadcast radio airplay. Further, in the weeks following the release of the album, national radio airplay varied widely and did not follow the same pattern as national album sales. 
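The sales-per-spin comparison above is simple division of cumulative digital single sales by cumulative spins. The counts below are rounded illustrations consistent with the two examples cited, not the exact figures from table 5.

```python
def sales_per_spin(digital_sales, spins):
    """Cumulative digital single sales divided by cumulative radio spins."""
    return digital_sales / spins

# Illustrative counts (assumed, rounded): a current Latin song with more than
# 4,600 spins but under 1 sale per spin, versus an older R&B/Hip Hop song
# with fewer than 1,100 spins but almost 13 sales per spin.
latin = sales_per_spin(4_000, 4_600)
rnb = sales_per_spin(14_000, 1_100)
print(f"Latin: {latin:.2f} sales/spin; R&B/Hip Hop: {rnb:.1f} sales/spin")
```

The wide spread in this ratio across songs of similar airplay is what makes the airplay-sales relationship unclear.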
In the example above, the broadcast radio airplay of Sade’s album remained relatively constant preceding and immediately following the release of the album although the album sales did not follow the same pattern. Another album, H.I.M’s “Screamworks,” had sales decrease 72 percent the week after sales peaked, while airplay in the weeks following fluctuated and even increased. Changes in airplay and sales. We found the relationship between changes in national airplay and changes in national album sales to be unclear. We gathered airplay and sales data on the top songs receiving airplay from five categories of music—Current Album, Current Country, R&B, Latin, and New Artists. Using these data, we first examined the correlation between album sales and airplay. We found the sales of albums to be slightly correlated with past airplay only for country albums; however, these correlations do not imply that airplay contributed to album sales. Second, we conducted an econometric analysis where we regressed the percentage change in weekly sales on the percentage change in the present and prior week’s airplay, the percentage change in the prior week’s sales, the total airplay received by an album since its release, and the total physical and digital sales since its release. (See appendix IV for full information on the econometric analysis.) We performed this analysis using data from an 8-week period from February to April 2010. We found that the percentage change in weekly airplay during the present and prior week generally did not have an impact on the percentage change in weekly sales. In particular, the estimates of the effect of the percentage change in the prior week’s airplay on the percentage change in sales were mixed (some positive and some negative) and not statistically significant, and the estimates of the effect of the percentage change in the present week’s airplay were positive but not statistically significant. 
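The first step of this kind of analysis, computing week-over-week percentage changes and correlating them, can be sketched as follows. The weekly airplay and sales series here are invented placeholders, not the data GAO analyzed, and the full multivariate regression described in appendix IV is not reproduced.

```python
import statistics

def pct_change(series):
    """Week-over-week percentage change of a weekly series."""
    return [(b - a) / a * 100 for a, b in zip(series, series[1:])]

def pearson_r(xs, ys):
    """Sample Pearson correlation between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented weekly spins and album sales over an 8-week window (placeholders).
airplay = [900, 950, 1_100, 1_050, 1_000, 980, 1_020, 990]
sales = [5_000, 4_200, 4_600, 4_100, 3_900, 4_000, 3_700, 3_800]
r = pearson_r(pct_change(airplay), pct_change(sales))
print(f"correlation of weekly % changes: {r:.2f}")
```

A correlation near zero on series like these, as with the mixed and insignificant regression coefficients reported above, is consistent with airplay changes having no detectable effect on sales changes.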
We also examined whether cumulative airplay since the album’s release had any effect on sales and found it did not generally have a significant effect. Other outlets. Musicians and performers whose music is featured on television or other outlets may have increased sales as a result of that promotion. For example, the week that The Who performed during the 2010 Super Bowl halftime show, digital single sales of four featured songs increased between 223 percent and 329 percent; digital single sales increased for all four songs the week following the Super Bowl as well. As shown in figure 5 below, digital single sales of “Baba O’Riley” increased from fewer than 5,000 sales in the week before the Super Bowl to nearly 25,000 in the week following the event. Broadcast radio airplay for the four songs only increased 4.5 percent during the week of the performance and decreased during the week when sales peaked. In addition to television, according to one stakeholder, dance club DJs are also important for promoting music. A Grammy winning hip-hop performer stated that for his most recent music, club DJs promoted his sales more than broadcast radio. While industry stakeholders and practices indicate that the recording industry receives some promotional benefit from broadcast radio airplay, we are unable to quantify this benefit, in part because of the complex and changing nature of the relationship between the recording and broadcast radio industries. Broadcast radio remains the most common place to discover new music. However, this reliance is decreasing and younger audiences now rely primarily on the Internet to learn about new music. Thus, the Internet and other platforms, such as television, are contributing to the promotion of sound recordings. 
However, due to the complexities of the industries, it is not clear to what degree, if any, these other promotional outlets impact sales in conjunction with one another, in conjunction with broadcast radio airplay, or independently. Furthermore, the recording industry faces changes that make piracy much easier and more frequent, which stakeholders indicate contributes to decreasing sales. According to the Copyright Office, piracy reduces revenues that may have been generated by the promotional benefit of broadcast radio or one of the other platforms. The proposed act would result in both financial costs, in the form of royalty payments for the use of sound recordings, and administrative costs, in the form of potential reporting requirements. Although the total cost to the broadcast radio industry is unknown, if the 25 percent of radio stations with revenues at or above $1.25 million pay a royalty equal to 2.35 percent of their annual revenue, their payments would account for more than 90 percent of all royalty payments. According to broadcast industry stakeholders, these financial and administrative costs may lead some stations to make adjustments, such as discontinuing operations, reducing staff, or changing to nonmusic formats. Because of a lack of data, the impact of the proposed act on minority, female, and religious stations and the ability of various outlets (such as broadcast radio, satellite radio, and webcasters) to pay royalties is unclear. Under the proposed act, the statutory royalty paid by broadcast radio stations would vary according to the station’s gross annual revenues and status as commercial or noncommercial. As previously mentioned, as of November 2009, there were 14,441 licensed broadcast radio stations in operation, of which 10,076 are commercial and noncommercial radio stations that would pay a royalty under the proposed act because they have some music content (see table 6); the remaining 4,365 stations would not pay a royalty. 
The total royalties paid by the broadcast radio industry would vary, but radio stations with revenues greater than $1.25 million would pay the majority of the total royalty if the rate is set as a percentage of annual revenues. Royalty rates for commercial stations with revenues of $1.25 million or more would be negotiated or set by the copyright royalty judges after the enactment of the proposed act; therefore, we are unable to determine this rate. In previous decisions, the copyright royalty judges based the royalty for satellite and cable radio on annual revenues because no method exists to determine the size of the listening audience at any point in time; the same problem exists with broadcast radio. Therefore, if stations with revenues of $1.25 million or more pay a royalty rate based on a percentage of their annual revenue, each percentage point increase in the rate would cost the industry an additional $101 million in total royalties annually. We also calculated the potential annual payments using various rates considered in a previous Copyright Royalty Judges decision—2.35, 7.25, and 13 percent (see table 7). Total annual costs to the industry could range from $258 million to $1.3 billion based on these rates. Flat fee payments by commercial stations with annual revenue less than $1.25 million would generate approximately $19 million. Payments by noncommercial stations could range from $950,000 to $1.9 million, but due to the lack of data on the revenue of noncommercial stations, we could not determine the number of stations paying each noncommercial statutory license royalty and the overall royalty payments. If the rate is structured as a percentage of annual revenues, broadcast radio stations with annual revenues of $1.25 million or more would pay the majority of royalties, but payments for these radio stations would vary widely. 
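The rate arithmetic above can be expressed as a short illustrative calculation. In this sketch, the $10.1 billion aggregate annual revenue for stations with revenues of $1.25 million or more is inferred from the statement that each percentage point of rate adds about $101 million; the $19 million flat-fee total is the report's figure for smaller commercial stations, and noncommercial payments (at most $1.9 million) are omitted.

```python
# Hedged sketch of the royalty arithmetic described above. The $10.1B
# aggregate revenue of stations with revenues of $1.25 million or more
# is inferred from "each percentage point ... an additional $101 million";
# the $19M flat-fee total is the report's figure for smaller commercial
# stations. Noncommercial payments are omitted from this sketch.
LARGE_STATION_REVENUE = 101_000_000 * 100   # inferred aggregate, $10.1B
FLAT_FEE_TOTAL = 19_000_000                 # commercial stations < $1.25M

def industry_royalty(rate_pct):
    """Total industry payment at a given percentage-of-revenue rate."""
    return LARGE_STATION_REVENUE * rate_pct / 100 + FLAT_FEE_TOTAL

for rate in (2.35, 7.25, 13.0):
    print(f"{rate:5.2f}% -> ${industry_royalty(rate) / 1e6:,.0f} million")
```

At the 2.35 percent rate, the large-station component alone comes to about $237 million, consistent with the figure cited in the report for those stations.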
For example, if these stations pay a rate equal to 2.35 percent of their annual revenue, their payments would account for more than 90 percent of all royalty payments and total over $237 million. However, as previously mentioned, these radio stations only represent 25 percent of all stations paying a royalty. Within this group of stations, the payments would vary significantly; some of these stations would pay less than $30,000 while other stations would pay over $1.5 million. In addition to making royalty payments, the proposed act would result in additional costs for broadcast radio stations in the form of reporting requirements. Radio stations that broadcast music would have to track and report each sound recording. While some radio stations have automated systems for this, representatives of commercial and noncommercial stations said that others cannot afford this technology or the additional staff to track and report sound recordings. Due to the burdens associated with the royalty and reporting requirements, stakeholders from the broadcast industry identified the following potential effects: Discontinued operation. Some stakeholders reported that broadcast radio station operators currently struggling to earn a profit may go out of business entirely. Experts with whom we spoke agreed that some marginal stations—those radio stations already facing financial difficulties—would likely discontinue operations. Although radio station licensees encountering financial difficulties can sell their stations, according to FCC, this may not be a feasible alternative for many. Due to the financial state of the broadcast industry, the values and sale prices of radio stations have declined, as has the availability of financing for the purchase of stations, making the option to sell less attractive to licensees. 
Alternatively, if a station returns its license to the commission, FCC officials said the process of selecting a licensee may be lengthy, possibly resulting in a temporary loss of service to the community. However, FCC officials also told us that the commission continues to receive a high volume of applications for licenses. Staff reductions. Broadcast radio stations might reduce staff, which represents the largest cost for many radio stations. While some radio stations have already reduced staff as a result of the declines in revenues, stakeholders indicated that other stations may be forced to lay off additional staff. Changing to nonmusic formats. According to broadcast radio stakeholders, broadcast radio stations might switch from a music format to a nonmusic format, such as talk or news, to avoid the additional costs of a royalty. However, the feasibility of switching from a music format to a nonmusic format would also be determined by market factors. For example, if there are many talk radio stations in a market, a station may not switch to talk radio because the market cannot support another station of that format. While switching to nonmusic formats may occur, among stations retaining a music format, a royalty should not cause stations to change the genre or variety of music they play because stations already make these decisions based on ratings data and market research. Furthermore, the proposed royalty does not vary based on the genre of music played by a radio station. Minority, Female, and Religious Stations. Because of a lack of comprehensive data and several weaknesses that limit the usefulness of the data on the ownership of broadcast radio stations, we cannot determine the impact of the proposed act on minority, female, and religious broadcast radio station owners.
FCC collects ownership information from radio station licensees; however, it lacks comprehensive data on the ethnicity, gender, and race of all radio station owners and it does not collect information necessary to identify religious owners. We previously reported on the weaknesses in the usefulness of FCC’s Form 323, which is the commission’s mechanism for collecting information on gender, race, and ethnicity of broadcasters. FCC has updated its Form 323 based on our recommendation, and intends to require all broadcast radio station owners to complete the revised form by July 2010. While we lack comprehensive data on the ethnicity, gender, and race of all radio station owners, we examined, on a limited basis, the impact that minority ownership and minority-targeted programming has on radio station revenues. We conducted a regression analysis of radio station revenues that controlled for stations’ membership in the National Association of Black Owned Broadcasters (NABOB). In particular, we regressed radio stations’ revenues on variables thought to influence revenues, including membership in NABOB. We found that NABOB-member stations’ revenues were no different than the revenues of all other stations. Thus, for this select group of stations, minority ownership does not appear to affect the stations’ revenues. We also conducted a regression analysis of radio station revenues that controlled for radio stations that target minority audiences. Again, we regressed radio stations’ revenues on variables thought to influence revenues, including formats that target minority audiences. We found that some radio stations with formats that target minority audiences—stations with ethnic and Spanish formats—have lower revenues compared with other stations. 
However, other stations that target minority audiences—stations with gospel formats—do not have revenues that differ significantly from other stations, and stations with urban formats have higher revenues compared to other stations. These results illustrate that in some instances, radio stations targeting minority audiences may have lower revenues than other stations but this is not consistent across all these types of stations. Ability of Various Outlets to Pay a Royalty. We are also unable to compare the ability of broadcast, satellite, and webcast radio stations to pay a royalty because of limited data. To assess the ability of these outlets to pay a royalty, we would need revenue and cost data for these outlets, which are generally unavailable. The broadcast radio, satellite radio, and webcast industries generally have different sources of revenue and cost structures, which affect their ability to pay a royalty. For example, satellite radio derives its revenue through consumer subscriptions and some advertising, but must invest in satellite technology to provide service to its customers. Webcasters, on the other hand, derive revenue from both advertising and subscriptions and pay for bandwidth to distribute streaming content. As previously mentioned, commercial broadcast radio stations rely primarily on advertising for revenue, and broadcast radio stations’ costs include building or renting a tower for broadcasting. Other costs are similar across platforms, including personnel, facilities, and licensing for musical works. However, as previously mentioned, webcasters and satellite radio have the additional cost of the license for the sound recording, which the Copyright Royalty Judges established during rate-setting proceedings. The proposed act would result in additional revenue for the recording industry. However, we estimated that most featured performers and musicians would receive less than $100 per year from airplay in the top 10 markets.
This new revenue could come from two sources: royalties paid by broadcast radio in the United States and royalties paid by broadcast radio in foreign countries. U.S. royalties. Several factors will influence the amount of royalty payments a copyright holder, musician, or performer receives. First, the royalty payment will depend on the individual’s or organization’s role in the creation of the sound recording. As mentioned previously, 50 percent of the revenue will be paid to the copyright holder, typically the record company; 45 percent will be paid to the featured musicians and performers; and the remaining 5 percent will be shared by the background musicians and performers. Second, the royalty payment will depend on the total amount of royalties paid by the broadcast radio industry. As we mentioned earlier, for stations with revenue of $1.25 million or more, the royalty rate will be determined through negotiation or by the copyright royalty judges; therefore, total royalties paid by the broadcast radio industry are unknown at this time. Finally, the royalty payment will depend on the amount of airplay a sound recording receives. A sound recording that matches a genre with many broadcast radio stations, such as adult contemporary, may receive more airplay and, therefore, more royalties, compared to a sound recording that matches a genre with only a few radio stations, such as jazz. While these factors would affect the royalty earned by those in the record industry, the race or gender of the musician or performer would not be a factor affecting any earnings. We conducted an analysis to estimate the total annual royalties each sound recording would earn and determined that most sound recordings would earn less than $100 from airplay in the top 10 markets.
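The statutory split described above can be illustrated with a minimal sketch; the $10,000 royalty pool and the four background musicians here are hypothetical.

```python
# Minimal sketch of the statutory split described above: 50% to the
# copyright holder, 45% to featured performers, and 5% shared equally
# by background musicians and performers. The pool size and background
# headcount are hypothetical.
def split_royalty(pool, n_background):
    holder = 0.50 * pool
    featured = 0.45 * pool
    background_each = 0.05 * pool / n_background
    return holder, featured, background_each

holder, featured, background_each = split_royalty(10_000, n_background=4)
```

For a $10,000 pool, the copyright holder's share is $5,000, the featured performers' share is $4,500, and each of four background musicians receives $125.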
To estimate these annual royalties, we used actual spins received during the first quarter of 2010 on 199 commercial broadcast radio stations in the top 10 DMAs; these commercial radio stations generate approximately 21 percent of the revenues for commercial radio stations with a music format nationwide. We then identified which of these radio stations would pay a flat fee and which would pay an undetermined rate. For those paying an undetermined rate, we calculated a royalty at 2.35 percent of the station’s annual revenues. As figure 6 shows, we found that 79 percent of sound recordings would receive a royalty of less than $1,000 annually. While approximately 21 percent of sound recordings would earn over $1,000, the sound recording with the most spins, “Bad Romance,” by Lady Gaga, would earn over $446,000. Using the data on royalties per sound recording, we also determined the total royalties featured musicians or performers could earn based on estimated airplay in 2010 in the top 10 DMAs. Many musicians and performers are the featured musicians for multiple sound recordings and, as table 8 shows, when combining their share of royalties for each of these sound recordings, we found that 56 percent would receive a royalty of less than $100 annually. Further, less than 6 percent of performers would receive $10,000 or more annually in royalties for all sound recordings. The musician with the most royalties, Lady Gaga, generated almost $300,000 in annual royalties for 13 sound recordings that received over 46,000 total spins. While copyright holders are often record companies, we were unable to determine the aggregate share of royalties for each copyright holder as we could not group sound recordings with their copyright holder. We did determine that the four major record companies are affiliated with most sound recordings receiving royalties, but we were unable to determine if they hold the copyright for these sound recordings.
We were also unable to identify background musicians and performers on these sound recordings to estimate their share of the royalty revenue. International royalties. Another possibility, if the proposed act were to pass, is that the recording industry may begin to receive royalties from broadcast radio in foreign countries. Currently, musicians and performers from foreign countries may receive a performance royalty when their music is broadcast over radio in other countries. Musicians and performers from the United States whose music is broadcast on foreign radio outlets typically do not receive these performance royalties because the United States does not have a reciprocal performance royalty. If passed, the proposed act could signal a change in U.S. policy, allowing U.S. musicians and performers to begin receiving royalties from foreign countries. However, existing trade agreements and foreign laws would influence these international royalties, and it is unclear when U.S. musicians and performers would begin receiving these royalties. While it is also unclear how much musicians and performers would receive from international royalties, in 2007, the U.S. Copyright Office testified that the recording industry estimated a loss of about $70 million, and two stakeholders with whom we spoke indicated that the loss could exceed $100 million. Stakeholders and experts have differing views on whether the total revenue from U.S. and international royalties would affect the creation of music. For a $9 billion industry, the royalty payments to the recording industry previously estimated—$258 million to $1.3 billion—would represent a significant inflow of revenues. Stakeholders and the U.S. Copyright Office both indicated that this revenue could contribute to additional investments in music and help keep record companies operating.
While some experts and stakeholders indicated the proposed act would primarily benefit established musicians and performers and would not affect new musicians, others indicated that it may be harder for new musicians to receive radio airplay. Still others indicated the act would lead to record companies working harder to promote their musicians to broadcast radio stations, leading to more royalties for musicians signed to a record company. While views on the proposed act and its effects diverged, most stakeholders in the industry agreed that older artists who no longer benefit from performing live concerts would greatly benefit from any royalty. Further, stakeholders and background musicians and performers with whom we spoke also noted the importance of the royalties for them. We provided a draft of this report to FCC and the U.S. Copyright Office of the Library of Congress. FCC and the Copyright Office provided technical comments that we incorporated as appropriate. FCC’s and the Copyright Office’s written comments appear in appendices V and VI, respectively. In its letter, FCC noted that it has a substantial interest in any proposed legislation that might have an adverse impact on radio stations. FCC also suggested that we more clearly explain the nature and scope of the commission’s collection of ownership information from broadcast licensees, stating that it collects information on ethnicity, gender, and race. However, we found that FCC does not collect comprehensive information on the ethnicity, gender, and race of all radio station owners sufficient for our analysis. Therefore, we did not revise the report based on this suggestion. In its letter, the Copyright Office addressed certain methodological approaches and findings in our draft report. First, the Copyright Office suggested changes and additions to our analysis of digital singles sales and radio station revenues.
In particular, the Copyright Office suggested discounting digital single sales attributable to music services other than radio, analyzing sales by age groups, and removing radio stations’ revenues attributable to certain nonmusic programming and services. Because we do not have transaction-level data necessary to identify how a digital single was purchased, who made the purchase, or why he or she purchased the digital single, we could not perform such analyses, but believe this would not have a material effect on our findings. Regarding radio station revenues, our work did not substantiate that removing radio stations’ revenues not associated with music programming would significantly affect our results because advertising associated with a station’s programming generates most of its revenue. Second, the Copyright Office also noted that tracking and reporting of sound recordings may not be a significant burden for radio stations because many stations might be exempt from this requirement and many other radio stations already track and report sound recordings. We assumed that most stations would have to track and report each sound recording played because other platforms that currently pay a royalty for the use of sound recordings track and report this information. Further, we do not believe that this assumption significantly affects our findings because most of the costs arising from the proposed act will be associated with the royalty payment and not the tracking and reporting of sound recordings. In addition, the Copyright Office noted that several analysts have reported that the broadcast radio industry’s revenues are increasing and that the royalty we estimated only represents a small fraction of the industry’s total revenues. We chose to include reported revenues, rather than rely on analysts’ forecasts, to ensure the reliability of our information. 
Finally, the Copyright Office noted that our finding that some performers would receive significantly higher royalties than other performers was not a surprise and reflects that some performers are played on broadcast radio more than others and should, therefore, receive more royalties. The Copyright Office also noted that the small amount of royalty that many performers would receive should not discount the importance of the additional income for those performers and the recognition of the property right in the sound recording. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairman, FCC; Register of Copyrights, Library of Congress; and interested congressional committees. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VII. Our objectives were to address the following questions: (1) What are the benefits the broadcast radio and recording industries receive from their current relationship with each other? (2) What are the potential effects of the proposed Performance Rights Act on the broadcast radio industry? (3) What are the potential effects of the proposed Performance Rights Act on the recording industry? To assess the benefits the broadcast radio industry receives from its current relationship with the recording industry, we analyzed data from 2008 on broadcast radio revenues. Using the BIA Media Access Pro database, we determined the annual revenues of all commercial broadcast radio stations.
Before conducting our analysis, we addressed certain features and limitations of the data to enhance the precision of our results. We identified commercial and noncommercial stations, the primary and secondary formats for each station, as well as “dark” stations not currently broadcasting. We classified commercial broadcast radio stations as either music or nonmusic based on the station’s format category, except for stations with religion or Spanish as their format categories. For stations with these format categories, we looked at the primary, secondary, and tertiary formats, a more granular level of analysis. If any of these three formats included music content, we considered the station a music station; otherwise, we identified the radio station as a nonmusic station. We did this in order to compare revenue for music versus nonmusic stations and to eventually determine the royalty rate each station would pay. Next, we imputed station revenue for sister stations that did not report revenue information. We accomplished this by identifying the sister stations that reported revenue and allocating the total reported revenue between that station and its nonreporting sister station. We also imputed the total revenues for nonreporting stations that were not sister stations, which accounted for approximately 40 percent of the stations. In order to do this, we ran a regression using the primary license coverage population, format category, license class, and whether the station was in an Arbitron market as the explanatory variables. Based on this regression, we developed predicted revenues for the nonreporting stations and scaled them to $4 billion, the unaccounted-for total revenue of the broadcast radio industry. Using the revenue data, we estimated the marginal effect of a station being a music or nonmusic station.
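The rescaling step in the imputation described above can be sketched as follows; the predicted revenues are hypothetical stand-ins for the regression's fitted values, and the $4 billion target is the report's unaccounted-for industry total.

```python
# Sketch of the scaling step described above: regression-predicted
# revenues for nonreporting stations are rescaled so they sum to the
# unaccounted-for industry total ($4 billion in the report). The four
# predicted values below are hypothetical stand-ins for fitted values.
predicted = [1.2e6, 0.8e6, 2.5e6, 0.5e6]   # hypothetical fitted revenues
TARGET_TOTAL = 4_000_000_000.0             # unaccounted-for industry total

factor = TARGET_TOTAL / sum(predicted)
scaled = [p * factor for p in predicted]   # rescaled imputed revenues
```

Scaling by a single factor preserves the relative revenues implied by the regression while forcing the imputed values to account for the known industry total.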
To assess the benefits the recording industry receives from its current relationship with the broadcast radio industry, we conducted three analyses using information obtained from AC Nielsen’s SoundScan, Broadcast Data Systems (BDS), and Insight databases. First, using the SoundScan and BDS databases, we identified the quantity of digital singles sold for 12 sound recordings during the first quarter of 2010 and reported the total sales per spin. Before conducting our analysis, we addressed certain limitations of the data. We identified genres of music based on Nielsen’s “Core Genre” definitions. We identified the age of the music based on the date the sound recording was added to the SoundScan database. We compared the digital single sales to how often the sound recordings were played on broadcast radio and identified the sales per spin. To calculate digital single sales, we combined the sales of the three best-selling versions of each song. We did this because some songs have multiple versions. We limited this analysis to data in the top 10 designated market areas (DMAs). For our second analysis, we randomly selected six albums released between February 1 and February 14, 2010, and compared the national broadcast radio airplay received by the album to the national sales of those albums during a 15-week period. For our final analysis, we developed correlations and a regression model to analyze the relationship between weekly airplay and sales of sound recordings. We looked at the top songs receiving airplay in five categories of music, “Current Album,” “Current Country,” “Latin Overall,” “R&B Current-Overall,” and “New Artists.” We also looked at the sales of the albums associated with the top songs in these categories. We conducted a correlation analysis of the album sales and airplay to identify any relationship between airplay and sales.
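The sales-per-spin measure described above can be sketched as a short calculation; the version sales and spin count here are hypothetical.

```python
# Sketch of the sales-per-spin measure described above. Sales of the
# three best-selling versions of a song are combined, then divided by
# the song's total spins on the sampled stations. All numbers below
# are hypothetical.
def sales_per_spin(version_sales, spins):
    top_three_total = sum(sorted(version_sales, reverse=True)[:3])
    return top_three_total / spins

# Four hypothetical versions of one song; the lowest-selling is dropped.
ratio = sales_per_spin([12_000, 3_000, 500, 120], spins=4_000)
```

Combining only the three best-selling versions mirrors the report's handling of songs that exist in multiple versions.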
To further analyze any relationship between changes in airplay and sales, we developed a regression model. We regressed weekly change in sales on present and past weekly changes in airplay, on past weekly changes in sales, on total airplay received by an album since its release, and on its total physical and digital sales since its release. We performed this analysis for each of the five categories of albums during an 8-week period to determine any impact that changes in airplay during the initial weeks had on changes in sales during the final week. We also tested to see if cumulative airplay since the album’s release had any effect on sales for any of the 5 weeks. See appendix IV for additional information on these analyses. To assess potential effects of the proposed act on the broadcast radio industry, we used the revenue analysis described above and the previous analysis that classified broadcast radio stations as either music or nonmusic to calculate estimated costs for both commercial and noncommercial radio stations. Using these data, we calculated the number of commercial stations that would be required to pay each of the royalty levels. To illustrate potential royalty payments for commercial stations with annual revenues of $1.25 million or more, we calculated potential royalty payments using rates of 2.35, 7.25, and 13 percent of annual revenues, which are rates previously considered by copyright royalty judges in statutory rate setting proceedings for satellite digital audio radio services (SDARS). To determine the potential royalty payments for stations with revenues below $1.25 million that would be required to pay an annual flat royalty, we multiplied the number of stations in each rate category by the respective rate and summed these figures to arrive at a partial estimation of the cost to these broadcast radio stations.
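The distributed-lag specification described above can be sketched with ordinary least squares on synthetic data. All series below are randomly generated stand-ins, not the report's data, and the one-lag design is a simplification of the full model.

```python
import numpy as np

# Illustrative OLS sketch of the model described above: the weekly
# change in sales is regressed on the current and lagged weekly changes
# in airplay, the lagged weekly change in sales, and cumulative airplay
# and sales through the prior week. Synthetic data only.
rng = np.random.default_rng(42)
weeks = 60
d_airplay = rng.normal(size=weeks)                       # weekly change in airplay
d_sales = 2.0 * d_airplay + rng.normal(scale=0.1, size=weeks)
cum_airplay = np.cumsum(np.abs(d_airplay))               # airplay since release
cum_sales = np.cumsum(np.abs(d_sales))                   # sales since release

# Design matrix for weeks t = 1..weeks-1 (one lag).
X = np.column_stack([
    np.ones(weeks - 1),    # intercept
    d_airplay[1:],         # current change in airplay
    d_airplay[:-1],        # lagged change in airplay
    d_sales[:-1],          # lagged change in sales
    cum_airplay[:-1],      # cumulative airplay through the prior week
    cum_sales[:-1],        # cumulative sales through the prior week
])
y = d_sales[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # beta[1] should recover ~2.0
```

Because the synthetic sales series is built with a true airplay coefficient of 2.0, the estimate on the current airplay change should land near that value.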
We calculated potential royalty payments for noncommercial stations by multiplying equal numbers of noncommercial stations by each of the respective rates for noncommercial stations described in H.R. 848; however, a lack of data on noncommercial stations’ revenues prevents us from determining the exact number of noncommercial stations paying each rate. To determine if revenue generated by minority-owned stations and stations that serve minority audiences differ from other broadcast radio stations’ revenue, we first identified stations in each of these categories. We identified black-owned stations by their owners’ membership in the National Association of Black Owned Broadcasters (NABOB). We classified the Ethnic, Spanish, Urban, and Gospel formats as targeting minority audiences based on data reported by Arbitron and other sources’ reporting on audience demographics. We then compared revenue for these music stations to revenue for nonmusic stations. To assess the potential effects of the proposed act on the recording industry, we conducted two analyses based on airplay during the first quarter of 2010 on 199 broadcast radio stations in the top 10 DMAs. We used the BDS database to identify all sound recordings that were played on these stations in the first quarter of 2010 and the total number of spins each sound recording received across all these sample stations. We then identified the number of spins on each broadcast radio station and the radio station’s 2008 revenues we had previously estimated. Based on the broadcast radio station’s 2008 revenues, we identified whether the radio station would pay a flat fee or had revenues above $1.25 million. If the station had revenues above $1.25 million, we estimated a royalty of 2.35 percent of total revenues. Based on each station’s estimated royalties, we divided the royalty among all sound recordings receiving airplay during 2010 based on the number of spins a sound recording received.
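The per-station distribution described above can be sketched as follows; the station revenue, flat fee, song titles, and spin counts are hypothetical.

```python
# Sketch of the per-station distribution described above: estimate the
# station's royalty (2.35% of revenue at or above the $1.25M threshold,
# otherwise a flat fee), then divide it among sound recordings in
# proportion to spins. Revenue, fee, titles, and spins are hypothetical.
def station_royalty(revenue, flat_fee=5_000):
    return revenue * 0.0235 if revenue >= 1_250_000 else flat_fee

def distribute_by_spins(revenue, spins):
    pool = station_royalty(revenue)
    total_spins = sum(spins.values())
    return {title: pool * n / total_spins for title, n in spins.items()}

# A hypothetical $2M-revenue station playing two songs.
shares = distribute_by_spins(2_000_000, {"Song A": 300, "Song B": 100})
```

A recording with three-quarters of the station's spins receives three-quarters of the station's royalty pool, mirroring the spin-proportional approach described in the text.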
This methodology mimics how SoundExchange, the entity responsible for distributing digital performance royalties, distributes performance royalties for airplay over satellite radio. For our second analysis, we estimated the total royalty a featured musician or performer would receive from all sound recordings for which that individual or band is the featured musician or performer. As in the previous analysis, we used airplay on all broadcast radio stations in the top 10 DMAs from the first quarter of 2010. We totaled all estimated royalties from the previous analysis by featured musician or performer. To address all objectives, we spoke with relevant stakeholders from both the broadcast radio and recording industry, as well as government agencies. To identify relevant stakeholders from the recording industry, we constructed a judgmental sample that consisted of the four largest U.S. record companies, as well as independent record companies that varied with respect to the number of artists signed to each company, the genres of music produced, and the geographic location of each company. We also interviewed trade associations that represent the industry, such as the Recording Industry Association of America. We also interviewed performing rights organizations that distribute royalties for the musical work licensees and the digital performance of sound recording licensees. We interviewed industry experts and individuals that work in the industry, such as managers, accountants, lawyers, and union groups who represent musicians and performers, as well as musicians and performers. We also constructed a judgmental sample of stakeholders from the broadcast radio industry, including station owners and operators that varied with respect to station revenue, market size, geographic location, and genre. We interviewed broadcast industry experts and trade associations that represent the industry, such as the National Association of Broadcasters.
Furthermore, we interviewed officials from the Federal Communications Commission’s (FCC) Media Bureau to understand FCC’s involvement in broadcast radio, including licensing, regulation, and oversight; to gain information about available data on broadcast station ownership; and to identify broadcast industry and other stakeholders to execute the engagement. We obtained relevant legislation and federal regulations that established FCC’s rules for broadcast radio and obtained FCC reports on broadcast license requirements and ownership. We also interviewed officials from the Library of Congress’ Copyright Office to understand its role in copyright matters, to gather information on laws relevant to the proposed act, to discuss Congress’ previous legislative activities involving music and copyrights, to review relevant copyright history, to identify stakeholders to execute the engagement, and to understand how the proposed act could affect the Library of Congress. We also spoke with a copyright royalty judge to understand the rate-making process. We gathered information on other industries that pay performance rights for the use of sound recordings, including digital and satellite radio and television, as well as information on how royalties are assessed and distributed in these industries. We reviewed independent and industry analyses of the value of sound recordings to radio and the value radio provides to sound recordings. We also reviewed previous congressional considerations of a performance royalty for broadcast radio in the United States and gathered information about the existence of performance royalties in countries outside the United States. We assessed the reliability of both the Nielsen and BIA data by (1) performing electronic testing of required data elements; (2) reviewing existing information about the data and the system that produced them; and (3) interviewing officials from both companies about measures taken to ensure the reliability of information. 
On the basis of our review, we determined that the data were sufficiently reliable for the purposes of our report. We conducted this performance audit from June 2009 through August 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Senate version of the proposed Performance Rights Act would expand the public performance right of sound recordings for copyright holders in a manner similar to the House version; however, some differences exist between the two versions. While each version has similar thresholds and royalty levels for radio stations with annual revenues under $1.25 million, the Senate version has one additional threshold. In particular, the Senate version proposes a $100 annual flat rate, or flat fee, for commercial and noncommercial broadcast radio stations with revenues less than $50,000 (see table 9), while the House version does not include this threshold and royalty. The two versions also include other differing provisions, but those differences do not affect the royalty payments. The total royalties paid by the broadcast radio industry under S. 379 are unknown at this time. In table 10, we report the number of radio stations that would pay the different levels of royalties under the Senate version. Seventy-five percent of stations that would pay a royalty would pay an annual flat fee, ranging from $100 per year to $5,000 per year under the Senate version. Twenty-five percent of stations, those with revenue of $1.25 million or more, would pay a royalty based on a negotiated rate or a rate set by the copyright royalty judges.
Because these royalties will be negotiated or determined subsequent to passage of the proposed act, we cannot determine the total cost to the radio industry at this time. In addition, due to the lack of data on the revenues of noncommercial stations, we cannot determine the number of stations paying each noncommercial statutory license royalty. To provide estimates of the total costs to the broadcast radio industry under S. 379, we assumed that stations with revenues of $1.25 million or more would pay a royalty structured as a percentage of a station’s annual revenue. If stations with annual revenues of $1.25 million or more pay a royalty rate based on a percentage of their annual revenue, each percentage point increase in the rate would result in an additional $101 million in total royalty payments. We also calculated the potential annual payments using various rates considered in previous Copyright Royalty Board decisions—2.35, 7.25, and 13 percent (see table 11). Total annual costs for the industry could range from $257 million to $1.3 billion based on these rates. Annual flat fee payments by commercial stations with annual revenue less than $1.25 million would generate approximately $19 million, and payments by noncommercial stations could range from $190,000 to $1.9 million. In our sample of six randomly selected albums released between February 1 and February 14, 2010, sales spiked immediately upon each album’s release and then decreased following the initial week of sales. For example, as shown in figure 8, Lil’ Wayne’s “Rebirth” album sold more than five times as many copies in the week it was released as were sold 1 month later. We found that album sales decreased substantially after their peak, irrespective of how many times the album’s songs were played on broadcast radio (i.e., how many “spins” all songs from the album received).
For example, sales of Sade’s “Soldier of Love” album decreased by 62 percent during its second week of sales; however, broadcast radio airplay actually increased by 2 percent the same week. In the weeks following release, radio airplay varied widely from album to album, but did not follow the same trends as album sales, as shown in figures 7-12. This appendix describes the model we developed to analyze the relationship between airplay and sales of individual albums. Specifically, we discuss (1) the background and past economic literature, (2) our analytical framework, (3) the data we used in the analysis, (4) the estimation methodology and results, and (5) alternative regression specifications. The generally accepted hypothesis in the music industry is that radio airplay promotes music sales. Stakeholders from both the recording and broadcast radio industries agree that broadcast radio airplay can promote music sales. In fact, broadcast radio can be an important means by which many listeners discover new sound recordings; a 2010 study conducted by Arbitron found that 39 percent of survey respondents aged 12 years and older reported that they turned to radio first to learn about new music. Repeated airplay and the announcement of artists’ new albums before or after broadcasting sound recordings has been argued to increase album sales for the musicians. Further, the historical record of illegal payola activity shows that the recording industry has been willing to compensate the broadcast radio industry for airplay. In addition, record companies employ staff dedicated to the promotion of music to radio stations. The relationship between aggregate airplay and aggregate sales has been empirically analyzed in the past, and one author found that radio airplay substitutes for sales and, therefore, has a negative impact on sales while a second author found a positive relationship between airplay and sales. 
Liebowitz empirically investigated the impact of radio airplay on sales of sound recordings for a sample of American cities between 1998 and 2003. He acknowledges that radio airplay has the potential to promote sales in that songs receiving high airplay and new songs that listeners get an opportunity to experience can increase demand. However, he also argues that the time spent listening to radio becomes a substitute for time spent listening to albums. He estimated a regression model with record sales per capita as the dependent variable. He regressed this variable on the average time spent listening to music radio and other demographic variables, such as income, Internet usage, age, and education, which can influence record sales. He estimated his model using the first-differences approach to control for underlying differences in populations and cities that are time invariant. He finds that radio airplay has a negative impact on sales of compact discs. Since the time spent listening to radio could represent time taken away from other activities, he also tests the impact of time spent listening to talk radio versus time spent listening to music radio on sales to see whether radio airplay actually substitutes for sales rather than just time spent listening. His results confirm his hypothesis that music radio is a direct substitute for sound recordings. Dertouzos, in a study sponsored by the National Association of Broadcasters, conducted an empirical study to quantify the relationship between radio airplay and the sale of albums and digital tracks from 2004 to 2006 in the 99 largest designated market areas in the United States. In his model, he expressed logarithms of total sales as a function of music exposure, measured by the number of listeners multiplied by the number of “spins,” or plays, of a sound recording, and various other local market factors and demographic and economic characteristics.
He found the estimated impact of radio exposure to be positive and significant for all functional specifications that he used, implying that airplay leads to higher sales of albums. Our analytical framework differs from the previous research in that we tested to see if there is any relationship between sales and airplay for individual albums. As discussed above, the previous research attempted to measure the positive promotional effect or negative substitution effect of radio airplay on record sales and relied on aggregate airplay and sales data. In our analysis, we relied on the airplay and sales of individual albums of different music genres at the top of the charts. The lack of evidence of any relationship between airplay and sales in our analysis would not imply that a positive or a negative impact does not exist for any sound recording, but rather that it does not universally exist for each and every sound recording. For example, one may expect radio’s promotional effect to be much less for a song released 2 or 3 years ago or for some very popular current artists. In our analysis, it may be the case that for the particular albums we analyzed, which are already at the top of the charts and, therefore, enjoy a certain level of popularity, additional airplay does not affect their sales. To conduct our analysis, we acquired data from The Nielsen Company. In particular, we used airplay and sales data on the top songs receiving airplay for five categories of music—Current Album, Current Country, R&B, Latin, and New Artists. These categories are based on chart criteria in Nielsen’s SoundScan database, which tracks album sales, and are organized around album genres. We used data from six weekly reports from March 7, 2010, to April 11, 2010. Each report covered 3 weeks and contained information on the following elements: Physical and digital sales for the albums listed.
Airplay data for the albums, where airplay for each song on an album is counted and the airplay for all the songs is aggregated to determine the total airplay for the album. The cumulative sales and airplay since the albums’ release dates. To examine the relationship between airplay and sales, we first conducted a correlation analysis. We simply looked at the degree of correlation between past, as well as present, values of airplay and sales across different categories of albums. A simple lack of correlation between airplay and sales would imply that the variables are not related to each other and, therefore, one variable does not affect another. However, high correlation between two variables and even between a variable and the lagged value of the variable expected to affect it, does not always imply a causal effect. For example, airplay and sales may be correlated simply because a popular song receives both high airplay as well as sales and one series may lag another without any apparent reason. Therefore, we next analyzed the degree of correlation between weekly changes in sales with both present and past weekly changes in airplay. Using our correlation analyses, we found the following: Sales and airplay are not correlated for any of the categories except Current Country. The degree of correlation between sales and airplay, among both present and lagged values of the variables, is about 60 percent for Current Country albums and less than 30 percent for all other categories of albums. The percentage change in sales and airplay are not correlated for any category except Latin. For albums in the Latin category, percentage changes in airplay in the past week are correlated with current percentage change in sales at around 60 percent. We also examined the relationship between airplay and sales using a regression model. 
We estimated a model in first differences in which we regressed the change in sales from week 2 to week 3 on a contemporaneous change in airplay (that is, from week 2 to week 3), on lagged changes in both sales and airplay (that is, from week 1 to week 2), total airplay received by an album since its release, and total physical and digital sales since release. We included the total airplay variable to see the effect of cumulative airplay on sales and the total physical and digital sales variables to proxy for the quality of a particular album. Our regression equation is specified below: change-in-sales(t) = β0 + β1*change-in-spins(t) + β2*change-in-spins(t-1) + β3*change-in-sales(t-1) + β4*to-date-spins + β5*to-date-sales + β6*to-date-digital-sales + ε, where t is the current week and t-1 is the prior week. We found that the change in airplay in the current and prior week did not have any effect on the change in sales in the current week, except in the case of Latin albums, where the relationship is positive and significant (see table 12). We tested several other specifications of the model and our results did not change. We ran a set of regressions with all categories of albums stacked together and another that included dummy variables for the different categories of albums and their interaction with other variables. We then performed regressions with the percentage of change in sales from week 4 to week 5 on the percentage of change in airplay from week 4 to week 5 as well as lagged weekly changes in both sales and airplay in the preceding month. We did this for two different models: separately for each category of album and a combined dataset with album-category-specific fixed effects and with dummy variables for formats and their interaction with other variables as additional regressors. Neither of these resulted in any notable findings different from the ones above. Lastly, we regressed sales in each of the 5 weeks on cumulative airplay and on digital and physical sales.
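As an illustration only, the correlation step and the first-differences regression described above can be sketched in Python. All of the data here are synthetic stand-ins that we generate ourselves (the report used Nielsen weekly reports, which we do not reproduce), and the variable names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical number of albums; an assumption, not the report's sample size

# Synthetic stand-ins for the Nielsen variables.
d_spins = rng.normal(0, 100, n)        # change in spins, week 2 -> week 3
d_spins_l1 = rng.normal(0, 100, n)     # lagged change in spins, week 1 -> week 2
d_sales_l1 = rng.normal(0, 500, n)     # lagged change in sales, week 1 -> week 2
cum_spins = rng.normal(5000, 800, n)   # total airplay since release
cum_sales = rng.normal(90000, 9000, n)     # total physical sales since release
cum_digital = rng.normal(40000, 5000, n)   # total digital sales since release

# Dependent variable generated with no airplay effect, mirroring the
# report's finding for most album categories.
d_sales = 0.3 * d_sales_l1 + rng.normal(0, 500, n)

# Correlation step: contemporaneous and lagged correlations.
corr_same = np.corrcoef(d_sales, d_spins)[0, 1]
corr_lag = np.corrcoef(d_sales, d_spins_l1)[0, 1]

# Regression step: ordinary least squares on the specification above
# (intercept plus the six regressors).
X = np.column_stack([np.ones(n), d_spins, d_spins_l1, d_sales_l1,
                     cum_spins, cum_sales, cum_digital])
beta, *_ = np.linalg.lstsq(X, d_sales, rcond=None)
```

With data generated this way, the estimated coefficients on the airplay terms would hover near zero, which is the pattern of results the report describes for most categories.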
We did not find cumulative airplay to have a significant and positive effect on sales. The following are GAO’s comments on the Federal Communications Commission letter dated July 21, 2010. GAO, Media Ownership: Economic Factors Influence the Number of Media Outlets in Local Markets, While Ownership by Minorities and Women Appears Limited and Is Difficult to Assess, GAO-08-383 (Washington, D.C.: Mar. 12, 2008). The following are GAO’s comments on the U.S. Copyright Office of the Library of Congress letter dated July 21, 2010. 1. We agree that some sales of digital singles may arise because consumers hear a single on a digital music service or a platform other than broadcast radio. However, to discount digital single sales that are specifically and directly attributable to other music services as the Copyright Office suggests would require transaction-level data that would identify whether the consumer reached an online retailer via a link from a digital music service or other platform. We do not have these data. Further, even if the consumer reached the online retailer via a link from a digital music service or other platform, the consumer might have originally heard the single on broadcast radio and, therefore, removal in this instance would be inappropriate. As we note in the report, it is not clear to what degree, if any, the various promotional outlets impact sales individually or in conjunction with one another. 2. To analyze sales of sound recordings by age groups would require transaction-level data that would identify the age of the consumer. We do not have these data. 3. We agree that the Copyright Royalty Judges will set the reporting requirements. However, we assumed that most stations will have to track and report each sound recording played because other platforms that currently pay a royalty for the use of sound recordings track and report this information. 4. 
Our data source included total gross revenues, including perhaps some revenues attributable to nonmusic programming and service, for radio stations and we, therefore, performed our analysis using this measure. We do not believe that removing radio stations’ revenues not associated with music programming would significantly affect our results because advertising associated with a station’s programming generates most of its revenue. In addition to the individual named above, Mike Clements, Assistant Director; Amy Abramowitz; Namita Bhatia-Sabharwal; Christine Hanson; Alison Hoenk; Eric Hudson; Bert Japikse; Susan Offutt; Jonathon Oldmixon; and Andrew Stavisky made key contributions to the report.
The recording and broadcast radio industries touch the lives of most Americans through the development and distribution of music. Congress is considering legislation, the proposed Performance Rights Act (H.R. 848), that would expand copyright protection for the public performance of sound recordings. The proposed act would require AM/FM radio stations that broadcast music to pay a royalty, and this royalty would be distributed to the copyright holder, performers, and musicians. This report addresses (1) the benefits received by the recording and broadcast radio industries from their current relationship, (2) the possible effects of the proposed act on the broadcast radio industry, and (3) the possible effects of the proposed act on the recording industry. To address these objectives, GAO analyzed data on music sales, broadcast radio airplay, and broadcast radio stations' revenues; calculated potential royalty payments; and interviewed stakeholders from both industries as well as experts and government officials. The Federal Communications Commission (FCC) and the U.S. Copyright Office of the Library of Congress reviewed a draft of this report. FCC noted that it has an interest in legislation that might have an adverse impact on radio stations. The Copyright Office addressed certain methodological approaches and findings in our draft report. Broadcast radio benefits from the use of sound recordings to generate advertising revenue and the recording industry may benefit from radio airplay that can promote sales. Radio stations use sound recordings to attract listeners and generate revenue from advertisers. GAO found that, on average, radio stations with a music format generate $225,000 more in annual revenues than nonmusic stations, such as talk or sports stations. Stations serving large populations receive more revenue from music content compared to stations serving a small population. 
Most industry stakeholders believe that radio airplay promotes sales for the recording industry, and past and current business practices support this conclusion. However, GAO found the relationship between airplay and music sales to be unclear. The presence of other promotional outlets, such as the Internet and special events, and growth of music piracy create a more nuanced environment wherein the relationship between airplay and music sales is less clear than in the past. The proposed act would result in additional costs for the broadcast radio industry. Under the proposed act, the royalty paid by a radio station would vary according to the station's gross annual revenues and status as commercial or noncommercial. Because the royalty paid by some radio stations would be negotiated or determined subsequent to passage of the proposed act, the total cost to the broadcast radio industry, including the costs to minority and female radio station owners, cannot be determined at this time. If broadcast radio stations with revenues of $1.25 million or more pay a royalty based on a percentage of station revenues, every 1 percentage point would cost the broadcast radio industry $101 million per year. For example, a 2.35 percent rate paid by these stations would entail total annual costs to the radio industry of over $258 million. GAO also estimated that with a 2.35 percent rate, the 25 percent of stations with revenues of $1.25 million or more would pay over 90 percent of the total royalties. According to broadcast industry stakeholders, these costs could lead some stations to reduce staff, switch to a nonmusic format, or discontinue operations. The proposed act would result in additional revenue for recording industry stakeholders. Several factors would influence the revenues a stakeholder receives, including the total royalty payments, the stakeholder's role (copyright holder, performer, or musician), and the amount of airplay the stakeholder's music receives. 
Since the total royalty payments cannot be determined at this time, the additional revenue for recording industry stakeholders is also unknown. However, assuming a 2.35 percent royalty rate, GAO estimated that 56 percent of performers would receive $100 or less per year, and fewer than 6 percent of performers would receive $10,000 or more per year in royalties from airplay in the top 10 markets; music radio stations in these markets generate about 21 percent of industry revenues. Some experts and the Copyright Office believe that the additional revenue would promote investment in music and greater employment, although this opinion is not universally held.
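To make the royalty arithmetic above concrete, the following sketch reproduces the report's industry-wide estimates. The $101 million per percentage point and the approximately $19 million in annual flat fees are figures taken from the report; the helper function and its name are our own:

```python
# Industry-wide annual royalty estimate under the proposed act, using
# the report's figures: each percentage point of a revenue-based rate
# adds about $101 million, plus roughly $19 million in annual flat fees
# from commercial stations with revenue under $1.25 million.
PER_POINT = 101_000_000
FLAT_FEES = 19_000_000

def total_annual_royalties(rate_percent):
    """Approximate total annual royalties at a given revenue-based rate."""
    return rate_percent * PER_POINT + FLAT_FEES

# The three rates considered in previous Copyright Royalty Board decisions:
for rate in (2.35, 7.25, 13.0):
    total = total_annual_royalties(rate)
    print(f"{rate:5.2f}% -> ${total / 1e9:.2f} billion")
```

At 2.35 percent this yields roughly $257 million and at 13 percent roughly $1.3 billion, matching the range the report derives from table 11.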
Federal crop insurance protects participating farmers against crop losses caused by perils such as droughts, floods, hurricanes, and other natural disasters. The federal program—which began on an experimental basis in 1938 after private insurance companies were unable to establish a financially viable crop insurance business—was restructured and greatly expanded by key legislation in 1980 and 1994. A major component of the 1980 legislation was the enlistment, for the first time, of private insurance companies to sell, service, and share the risk on federal crop insurance policies. In 1994, the Congress further broadened the program by offering farmers catastrophic risk insurance. This coverage, established at a minimum level, incorporated elements of the former crop disaster assistance program into crop insurance provided jointly by the U.S. Department of Agriculture (USDA) and private insurance companies. USDA’s Risk Management Agency administers the federal crop insurance program through the Federal Crop Insurance Corporation (FCIC). FCIC pays the participating companies a fee, called an administrative expense reimbursement, that is intended to reimburse the companies for the reasonable expenses associated with selling and servicing crop insurance to farmers. The reimbursement is calculated as a percentage of the premiums paid, regardless of the expenses incurred by the companies. In addition to this reimbursement, participating insurance companies share with FCIC any gains or losses—known as underwriting gains and underwriting losses—that result from the insurance policies they sell. In 1994, 22 participating insurance companies received $395 million from the program—about $292 million in administrative expense reimbursements plus about $103 million in underwriting gains. In 1995, 19 participating insurance companies received $506 million from the program—about $373 million in administrative expense reimbursements plus about $133 million in underwriting gains. 
Expense reimbursements and underwriting gains varied by company according to the amount of premiums written, the amount of risk retained, and the management of the risk retained. Federal crop insurance offers farmers two primary types of insurance coverage—catastrophic and buyup. Both types of coverage are available for most major crops under the changes made by the Congress in the Federal Crop Insurance Reform and Department of Agriculture Reorganization Act of 1994 (P.L. 103-354, Oct. 13, 1994, title I). This act created catastrophic risk insurance as a replacement for expensive crop disaster assistance. Catastrophic insurance provides farmers with protection against extreme crop losses for a small processing fee. Buyup insurance provides protection against more typical and smaller crop losses in exchange for a farmer-paid premium. Participating insurance companies offer both types of insurance, whereas USDA’s Farm Service Agency (FSA), through its local offices, offers only catastrophic insurance. Under the terms of a negotiated agreement, participating insurance companies sell crop insurance and process any claims in exchange for an administrative expense reimbursement and for the opportunity to share in the potential for underwriting gains. The government pays the total premium for catastrophic insurance and a portion of the premium for buyup insurance. FCIC establishes the premiums, terms, and conditions for both types of insurance. Under the 1994 reform act, farmers who had not previously purchased crop insurance were required to purchase at least catastrophic insurance coverage if they signed up for USDA’s annual commodity programs; obtained USDA’s farm ownership, operating, or emergency loans; or contracted to place land in the Conservation Reserve Program. Subsequently, the Federal Agriculture Improvement and Reform Act of 1996 (P.L. 104-127, Apr.
4, 1996) eliminated this mandatory linkage by permitting farmers, effective for crops harvested in 1996, to forgo crop insurance for any given crop without losing eligibility for other programs, provided they waive all rights to any possible crop disaster assistance in connection with the particular crop. Catastrophic insurance, which protects farmers against extreme losses, is often referred to as minimum coverage because it provides protection at the lowest production and price levels offered. Catastrophic insurance pays farmers only when they experience production losses greater than 50 percent of their normal crop. A normal crop is determined on the basis of a farmer’s production history as reported to USDA’s local office or to the insurance agent. For production losses greater than 50 percent, farmers are paid 60 percent of FCIC’s projected market price for the crop. Farmers desiring protection above the minimum price or production levels provided by catastrophic insurance can purchase buyup insurance. Unlike farmers who purchase catastrophic insurance, farmers purchasing buyup insurance must choose both the coverage level (the proportion of the crop to be insured) and the unit price (e.g., per bushel) at which any loss is calculated. With respect to the coverage level, farmers can choose to insure as much as 75 percent of normal production (25-percent deductible) or as little as 50 percent of normal production (50-percent deductible) at different price levels. With respect to unit price, farmers choose whether to value their insured production at FCIC’s full projected market price or at a percentage of the full price. FCIC adjusts farmers’ premiums according to the production and price levels selected. The following example illustrates how a claim payment is determined under catastrophic insurance, which insures 50 percent of production and 60 percent of the price. 
A farmer whose normal crop production averages 100 bushels of corn per acre and who chooses catastrophic insurance will be guaranteed 50 percent of 100 bushels, or 50 bushels per acre. Assuming that FCIC had estimated the market price for corn at $3 per bushel, the farmer will be guaranteed a price of 60 percent of $3, or $1.80 per bushel. The farmer’s total coverage per acre will be $90 (50 bushels x $1.80 per bushel). This total amount will be paid in the event of a complete crop failure. Should an event like drought cut the farmer’s actual harvest from 100 to 60 bushels, the farmer will not receive a payment because, in this example, catastrophic insurance only pays if the yield drops below 50 bushels per acre. If a more severe problem caused the yield to fall to 25 bushels per acre, the farmer will be paid for the loss of 25 bushels per acre—the difference between the insured production level of 50 bushels and the actual production of 25 bushels. In this case, catastrophic insurance will pay the farmer’s claim at $1.80 x 25 bushels, or $45 per acre. If this same farmer chooses buyup insurance at the 75-percent coverage level, the farmer will be guaranteed 75 percent of 100 bushels, or 75 bushels per acre. Assuming that the farmer had chosen the maximum price coverage of 100 percent, and that FCIC had estimated the market price for corn at $3 per bushel, the farmer’s price coverage will be $3 per bushel. Accordingly, the farmer will have coverage in the event of a total crop loss of $225 per acre (75 bushels x $3 per bushel). Should drought or other perils cut the farmer’s actual harvest to 60 bushels, the farmer will be paid for the loss of 15 bushels per acre—the difference between the insured production level of 75 bushels and the actual production of 60 bushels. In this case, buyup insurance will pay the farmer’s claim at $3 x 15 bushels, or $45 per acre. 
In the event of a more severe loss that reduced production to a level of 25 bushels per acre, the farmer will be paid for the loss of 50 bushels per acre—the difference between the insured production level of 75 bushels and the actual production of 25 bushels. In this case, buyup insurance will pay the farmer’s claim at $3 x 50 bushels, or $150 per acre. According to a written agreement between FCIC and participating insurance companies—called the standard reinsurance agreement—FCIC pays the participating companies a uniform reimbursement for administrative expenses at a preestablished percentage of total premiums to deliver—sell and service—catastrophic and buyup insurance. This base rate can be, and has been, supplemented to provide additional funding in years when administrative costs were high because of excess losses or when other factors require the companies to conduct additional work. Beginning in 1994, as part of the agreement, FCIC required each participating company to report its delivery expenses to FCIC for the prior year to help determine the long-term adequacy of the reimbursement rate. In addition to providing an administrative expense reimbursement, this agreement governs the participating companies’ share of any underwriting gains or losses resulting from the policies they sell. FCIC does not directly reimburse the participating companies for their actual costs of administering the program. Instead, FCIC pays all participating companies a uniform administrative expense reimbursement at a preestablished percentage of total premiums (including the farmer-paid premium, government premium subsidy for buyup insurance, and the imputed premium for catastrophic insurance). FCIC pays participating companies an administrative expense reimbursement that is intended to reimburse them for the expenses that can be reasonably associated with the sale and service of federal crop insurance, including the expenses associated with adjusting claims. 
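The worked examples above follow a single per-acre indemnity formula, which can be sketched as follows. The function name and parameter names are our own; the coverage and price-election percentages are those used in the report's examples:

```python
def claim_per_acre(normal_yield, actual_yield, market_price,
                   coverage=0.50, price_election=0.60):
    """Per-acre indemnity: pays for bushels lost below the insured yield,
    valued at the elected share of FCIC's projected market price.
    Defaults correspond to catastrophic coverage (50% yield, 60% price)."""
    insured_yield = coverage * normal_yield
    insured_price = price_election * market_price
    lost_bushels = max(0.0, insured_yield - actual_yield)
    return lost_bushels * insured_price

# Catastrophic coverage, 100-bushel normal yield, $3 projected price:
claim_per_acre(100, 60, 3.00)   # no payment: 60 bushels exceeds the insured 50
claim_per_acre(100, 25, 3.00)   # 25 lost bushels x $1.80 = $45 per acre

# Buyup at the 75-percent coverage level and 100-percent price election:
claim_per_acre(100, 60, 3.00, coverage=0.75, price_election=1.00)  # 15 x $3 = $45
claim_per_acre(100, 25, 3.00, coverage=0.75, price_election=1.00)  # 50 x $3 = $150
```

The four calls reproduce the report's examples: the same 25-bushel harvest that pays $45 per acre under catastrophic coverage pays $150 per acre under 75-percent buyup coverage.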
Because the reimbursement is not tied to specific expenses, the companies are not obligated to spend the payment they receive on selling or servicing crop insurance policies; the payments can be used in any way the companies choose. Since 1980, in fact, the reimbursement rate has evolved as a result of negotiations between FCIC and the participating companies and budget concerns and has not been based on a systematic evaluation of companies’ expenses. For buyup insurance, the administrative expense reimbursement base rate under the standard reinsurance agreement has declined from a high of 34 percent of total premiums between 1988 and 1991 to 31 percent between 1994 and 1996. In 1995, the administrative expense reimbursement for buyup insurance totaled 32.6 percent of buyup premiums. This reimbursement rate included a base administrative expense reimbursement of 31.0 percent of premiums and a supplemental reimbursement of 1.6 percent of premiums associated with extra adjustments for crop losses in 1995. The 1994 reform act requires FCIC to limit the reimbursement rate for selling and servicing buyup insurance to no more than 29 percent of total premiums in 1997, no more than 28 percent in 1998, and no more than 27.5 percent by 1999. While this reduction in reimbursement rate was mandated by the act, the established rates were not based on a systematic evaluation of the costs associated with selling and servicing crop insurance. For catastrophic insurance, companies were paid a lower base reimbursement rate—13.8 percent of the imputed premiums—for delivering catastrophic insurance and were allowed to keep most of the $50 processing fee paid by farmers. 
In 1995, compensation for catastrophic insurance totaled 24.0 percent of catastrophic premiums, including (1) a base administrative expense reimbursement of 13.8 percent of premiums; (2) a retained farmer-paid processing fee of $50, equating to 9.3 percent of premiums; and (3) a supplemental reimbursement of 0.9 percent of premiums associated with extra adjustments for crop losses in 1995. In 1994, FCIC began to require companies to submit a detailed expense report in a consistent format following standard industry guidelines for the prior calendar year—1993. However, not all companies complied with the new requirement until 1995, when they reported 1994 expense data. This expense reporting has to comply with a number of guidelines, such as those that the National Association of Insurance Commissioners issues on allocating expenses among lines of business. These expense reports do not directly affect the amount paid to the companies but rather provide support and serve as an indicator for establishing future reimbursement rates for administrative expenses. Included in the expenses reported are loss adjustment costs, sales commissions paid to local insurance agents, and the general administrative expenses associated with operating the insurance companies, such as payroll, equipment, travel, training, and rent. Currently, FCIC is developing a new standard reinsurance agreement, including new expense reimbursement rates, that will be completed with the participating companies in June 1997. In addition to receiving an administrative expense reimbursement, the participating companies share with FCIC any underwriting gains or losses that result from the policies the companies sell. Underwriting gains occur if the premiums exceed the claims paid on the policies. In the same manner, underwriting losses occur when the claims paid exceed the premiums. The participating companies are able to vary the extent to which they share in the risk.
In general, the companies choose to retain more of the risk on the historically lower-loss producers and share more of the risk with FCIC for those producers who have a history of more frequent or more severe loss experience. In addition, to protect participating companies against high underwriting losses in years with extreme crop losses, FCIC limits the total loss that participating companies must share. The number of companies selling and servicing crop insurance for FCIC has decreased from 27 in 1990 to 16 in 1996 because of business acquisitions and changing business relations. Insurance premiums written by participating companies during this same period increased from $747 million in 1990 to $1.6 billion in 1996. As shown in table 1.1, FCIC paid participating companies significantly larger administrative expense reimbursements than the companies earned in underwriting gains between 1990 and 1996. This reflects the fact that the reimbursement is a fixed fee based on premiums written, whereas the underwriting gain varies depending on crop loss experiences. Between 1994 and 1995, federal crop insurance sales increased from $918 million to over $1.5 billion. In 1995, catastrophic insurance accounted for $456 million in premiums, and buyup insurance accounted for an additional $1.1 billion in premiums. Before catastrophic insurance was available, the program had been generating average premiums of about $700 million a year. As shown in table 1.2, participating companies sold a larger portion of federal crop insurance than USDA. In 1996, federal catastrophic crop insurance sales decreased slightly to $424 million, while federal buyup insurance increased to almost $1.4 billion. Under the expanded federal crop insurance program created by the 1994 reform act, program costs increased from over $700 million in the early 1990s to about $1.6 billion in 1996. 
As shown in table 1.3, federal crop insurance costs paid by the government totaled about $7.2 billion from 1990 through 1996 and were made up of claims paid in excess of premiums ($1.6 billion), premium subsidy ($2.8 billion), administrative expense reimbursements ($2.2 billion), and other administrative costs ($611 million). Concerned about the cost-effective delivery of federal crop insurance and recognizing the important role the private insurance industry plays in delivering federal crop insurance, the Congress, in section 118 of the Federal Crop Insurance Reform and Department of Agriculture Reorganization Act of 1994, directed GAO and FCIC to jointly evaluate the financial arrangements between FCIC and participating insurance companies for delivering the crop insurance program to qualified producers and to address several specific issues. Separately, USDA’s Risk Management Agency will report on the adequacy of return on capital to insurance companies and alternative reinsurance arrangements between the government and the companies. Our review focused on the following two issues: (1) the adequacy and reasonableness of the current administrative reimbursement rate for expenses of participating companies and (2) the cost to the government of private-sector delivery compared with USDA delivery of catastrophic insurance. As required by the act, we also reviewed and reported on (1) the advantages and disadvantages of alternatives to the current arrangement for reimbursing administrative expenses, and (2) FCIC’s actions to simplify procedural and administrative requirements. The results of our work for these two topics are reported in chapter 4 and appendix I, respectively. To assess the adequacy of the current reimbursement rate for administrative expenses, we compared participating companies’ reported expenses for selling and servicing buyup insurance with the reimbursements they received from FCIC for 1994 and 1995.
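The $7.2 billion total above is the sum of the four cost components listed from table 1.3; a minimal sketch, using the report's figures in billions of dollars, recovers it:

```python
# Federal crop insurance costs paid by the government, 1990-96
# ($ billions, components as reported from table 1.3)
claims_in_excess_of_premiums = 1.6
premium_subsidy = 2.8
administrative_expense_reimbursements = 2.2
other_administrative_costs = 0.611

total_cost = (claims_in_excess_of_premiums + premium_subsidy
              + administrative_expense_reimbursements + other_administrative_costs)
# total_cost is about 7.2, i.e., roughly $7.2 billion
```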
Not all participating companies reported these expenses to FCIC in a consistent format until 1994; furthermore, 1996 expenses for selling and servicing crop insurance were not complete at the time of our review. We assessed expense data for crop insurance at nine participating companies that represented about 80 and 85 percent of the crop insurance premiums in 1994 and 1995, respectively. To gain an understanding of crop insurance expenditures, we interviewed representatives from participating companies and obtained an explanation of all reported expenses. In addition, to evaluate the reasonableness of reported expenses, we used as guidance FCIC’s listing of allowable expenses, the National Association of Insurance Commissioners’ guidelines, generally accepted accounting principles, federal acquisition regulations, and the Internal Revenue Code. Within the framework of these standards and guidelines, we made judgments about what we considered to be reasonably associated expenses for selling, processing, and adjusting crop insurance policies for the federal government and discussed these judgments with the FCIC officials responsible for administering the program. Generally, we considered as reasonable those expenses associated with (1) interacting with farmers, (2) reviewing insured property, (3) processing policy and claims paperwork, and (4) related overhead and indirect costs, including the training and travel of staff. As part of our review, we examined participating companies’ complete lists of reported expenses. For a judgmental sample of these reported expenses, we traced the expenses to source documents. Our results reflect only the findings at the companies we reviewed and do not necessarily reflect the conditions for other companies selling federal crop insurance. We did not specifically validate companies’ accounting systems, but we did review each company’s audited financial statements to assure ourselves that the financial data provided were reasonable.
Appendix II provides a list of the participating companies we visited. To examine the cost differences to the government between USDA and private-sector delivery of catastrophic insurance, we analyzed the government’s costs to use participating companies in comparison with the costs of using USDA. To perform our analysis, we obtained 1995 data on the costs to deliver catastrophic insurance through USDA, including costs for USDA’s headquarters in Washington, D.C.; its main field offices in Kansas City, Missouri; and its state, regional service, district, and local offices. We reduced the costs for USDA’s delivery system by the amount of processing fees the Department collected from farmers for catastrophic insurance. We made the reduction because USDA uses these fees to reduce other government expenditures. To identify the government’s costs to use participating companies to deliver catastrophic insurance, we obtained data from FCIC on administrative expense reimbursements as well as underwriting gains paid to companies that participated in the catastrophic insurance program in 1995. To identify alternative methods for expense reimbursements, we interviewed officials of selected participating companies, trade associations, and USDA. We then narrowed the suggested methods down to four distinct alternatives and analyzed them against the 1995 crop insurance experience, where reasonable, to measure their impact as if they had been in place for that year. We also determined qualitative factors associated with each of the methods through discussions with industry and FCIC officials. To determine the status of procedural and administrative simplification, we reviewed FCIC’s summary of completed and in-progress simplification and paperwork reduction actions, and we reviewed potential simplification actions proposed by FCIC and by representatives of the crop insurance industry.
We discussed the potential cost and benefit of these proposed actions with crop insurance company and FCIC officials. The information we developed is presented in appendix I. We conducted our review from March 1996 through March 1997 in accordance with generally accepted government auditing standards. Although we did not independently assess the accuracy and reliability of USDA’s computerized databases, we used the same files USDA uses to manage the crop insurance program and its local county offices. In December 1996, we provided USDA officials and representatives of National Crop Insurance Services, Inc., the American Association of Crop Insurers, the Crop Insurance Research Bureau, Inc., and several individual companies with a detailed briefing on the results of our review. In March 1997, we provided a copy of our draft report to USDA and to the crop insurance industry organizations for their review and comment. The Department’s and industry’s comments are addressed at the end of each chapter. In addition, the industry’s written comments are reproduced in appendixes VIII and IX. USDA’s Risk Management Agency found no fault with our methodology. However, the industry associations that received copies of our draft report stated that our review did not fully respond to the Congress’ mandate in the 1994 reform act because we focused on delivery costs and did not address other requirements of the act. We focused on delivery costs because, in researching the legislative history of this mandate, we found that in the context of funding this program in a deficit reduction environment, the paramount congressional interest was in controlling the costs of reimbursing crop insurers. Furthermore, we confirmed our interpretation of the mandate in a commitment letter sent to the Chairmen and Ranking Minority Members of the Senate Committee on Agriculture, Nutrition, and Forestry and the House Committee on Agriculture. 
This letter set forth our approach for meeting this mandate, including our scope and methodology. Consequently, we focused on costs incurred by insurers that are reimbursed by the government in order to provide the information most useful to congressional decisionmakers. Therefore, we believe that the report fulfills the Congress’ mandate. In 1994 and 1995, FCIC’s reimbursement payments to the nine participating companies in our review were higher than the expenses that can be reasonably associated with the sale and service of federal crop insurance. For the 2-year period, the companies we reviewed reported $542.3 million in expenses, compared with a reimbursement of $580.2 million—a difference of about $38 million. In addition, our review of the companies’ reported expenses showed that about $43 million did not appear to be reasonably associated with the sale and service of federal crop insurance to farmers and thus should not be considered in determining future administrative reimbursement rates. These expenses included payments to compensate company executives for refraining from joining or starting competing companies, fees paid to other insurance companies to protect against underwriting loss, bonuses tied to company profitability, management fees paid to parent companies with no identifiable benefit to subsidiary crop insurance companies, and lobbying expenses. We further identified a number of expenses reported by the companies that, while in categories that can be reasonably associated with the sale and service of crop insurance, seemed to be excessive for a taxpayer-supported program. These expenses included above-average commissions paid to agents by one large company, corporate aircraft and excessive automobile charges, country club memberships, and various entertainment activities for agents and employees, such as stadium sky box rentals at professional sporting events and company-sponsored fishing trips.
Although nothing in the current agreement between FCIC and the insurance companies precludes the companies from spending on these items, we believe that these types of expenses suggest that opportunities exist for the government to reduce its future reimbursement rate. Furthermore, a variety of emerging factors, including higher crop prices and higher premium rates in 1996 and 1997 as well as program simplification, have increased companies’ revenues or may decrease companies’ expenses. For 1994 and 1995, companies collectively reported expenses that were less than the administrative expense reimbursement they received from FCIC. For 1994, the reimbursement was about equal to the expenses reported, and for 1995, reported expenses were about $38 million less than the reimbursement. After examining the companies’ expense reports, however, we determined that a number of the reported expenses did not appear to be reasonably associated with the sale and service of crop insurance to farmers and thus should not be considered in determining an appropriate future reimbursement rate for administrative expenses. After adjusting the expense reports to delete these items, we found that the expenses reasonably associated with crop insurance delivery were about $43 million less than those reported. In total for 1994 and 1995, the nine companies we reviewed reported expenses for buyup and catastrophic crop insurance sales and service that were somewhat less than the administrative expense reimbursement FCIC paid them. FCIC administrative expense reimbursements paid to participating companies in 1994 and 1995 were 31 and 31.4 percent of total premiums, respectively. This represented $236.5 million in 1994 and $343.6 million in 1995. For these same years, the companies reported expenses of 31 percent, or $236.8 million, and 27.9 percent, or $305.5 million, respectively. Collectively, reported expenses were $38 million less than the reimbursements the companies received.
As shown in figure 2.1, the largest component of the expenses reported by the companies was sales commissions paid to local insurance agents. The average commission reported for 1995 was less than in 1994—14.5 percent of total premiums compared with 17.2 percent of total premiums in 1994. The 1995 average commission was lower because in that year companies combined catastrophic expenses, which have lower agent commissions, with buyup expenses. With respect to loss-adjusting expenses, although insurance claims were higher in 1995 than in 1994, the companies’ reports showed that average loss-adjusting expenses as a percentage of premiums actually dropped slightly in 1995. Our review of the nine companies’ reported expenses showed that about $43 million did not appear to be reasonably associated with the sale and service of federal crop insurance to farmers and thus should not be considered in determining an appropriate future reimbursement rate for administrative expenses. Expenses reported by the companies that did not appear to contribute to the sale and service of crop insurance were expenses related to acquiring competitors’ businesses, protecting companies from underwriting losses, sharing profits through bonuses or management fees, lobbying, and reporting errors and omissions. Each of these types of expenses is discussed below. Among the reported costs that did not appear to be reasonably associated with the sale and service of crop insurance to farmers were costs the companies incurred when they acquired competitors’ business. These costs potentially aided the companies in vying for market share and meant that one larger company, rather than several smaller companies, was delivering crop insurance to farmers. However, this consolidation was not required for the sale and service of crop insurance to farmers, provided no net value to the crop insurance program, and according to FCIC, was not an expense that FCIC expected its reimbursement to cover.
We identified costs in this general category totaling $12 million—$8.3 million in 1994 and $3.7 million in 1995. For example, one company took over the business of a competing company under a lease arrangement. The lease payment totaled $3 million in both 1994 and 1995. About $400,000 of this payment could be attributed to actual physical assets the company was leasing, and we recognized this amount as a reasonable expense. However, the remaining $2.6 million—which the company was paying each year for access to the former competitor’s policyholder base—provided no benefit to the farmer and added no net value to the program. Likewise, we saw no apparent benefit to the crop insurance program from the $1.5 million the company paid executives of the acquired company over the 2-year period as compensation for not competing in the industry. In a related instance, the company reported a $3.9 million expense to write down the value of an acquired company because of liabilities identified after acquiring that company’s business. These liabilities arose from crop insurance claims in dispute, crop insurance claims paid in error, premium adjustments, legal actions, and bad debts relating to the acquired company’s operations in prior years. This expense reflected a cost that the company incurred to increase its market share and provided no net benefit to the program. Although FCIC did not explicitly refer to this type of expense in its last standard reinsurance agreement with companies, we discussed this type of expense with FCIC. It agreed that this expense cannot be reasonably associated with the sale and service of crop insurance and thus should not be considered in determining a future reimbursement rate for administrative expenses. We also found that two companies included payments to commercial reinsurers among their reported delivery expenses for crop insurance.
These are payments the companies made to other insurance companies to expand their protection against potential underwriting losses. This commercial reinsurance allows companies to expand the amount of insurance they are permitted to sell under insurance regulations while limiting their underwriting losses. The cost of reinsurance relates to companies’ decisions to manage underwriting risks rather than to the sale and service of crop insurance to farmers. Although FCIC did not explicitly refer to this type of expense in its last standard reinsurance agreement with companies, we discussed this type of expense with FCIC. It agreed that this expense should be paid from company underwriting results and thus should not be considered in determining a future reimbursement rate for administrative expenses. For the two companies that reported reinsurance costs as an administrative expense, these expenses totaled $10.7 million over the 2 years—$5.4 million in 1994 and $5.3 million in 1995. Among their reported administrative expenses for crop insurance, some companies included expenses resulting from decisions to share profits with (1) company executives and employees through bonuses or (2) parent companies through management fees. We found that expenditures in this general category totaled $12.2 million—$5 million in 1994 and $7.2 million in 1995. We do not believe that bonuses associated with profit sharing are appropriate for inclusion in a long-term reimbursement rate. In contrast, we believe that bonuses given to recognize employee performance, as well as bonuses paid to agents, are reasonable expenses associated with the sale and service of crop insurance, and we included them as reasonable expenses. Profit-sharing bonuses—bonuses linked to overall company profitability for each year—were a significant component of total salary expenses at one company, equaling 49 percent of basic salaries in 1994 and 63 percent in 1995, and totaling $9 million for the 2 years. 
Total employee salaries at this company, as a percentage of premiums, were somewhat less than at other companies. However, when the profit-sharing bonuses—paid out of profits after all necessary program expenses were paid—were added to salaries, overall employee salaries at this company were 35 percent higher than the nine-company average. While profit sharing may help a company compete with other companies for employees, the profit-sharing bonuses, which in this particular case seemed excessive, do not contribute to the overall sale and service of crop insurance or serve to enhance program objectives. Additionally, we identified profit-sharing bonuses totaling $2.1 million reported as expenses at three other companies for 1994 and 1995. FCIC agrees that this type of expense goes beyond the reasonable expenses associated with the sale and service of crop insurance. Similarly, we noted that two companies reported expenditures for management fees paid to parent companies as administrative expenses for crop insurance. Company representatives provided few examples of tangible benefits received in return for their payment of the management fee. We recognized management fees as a reasonable program expense to the extent that companies could identify tangible benefits received from parent companies. Otherwise, we considered payment of management fees to be a method of sharing income with the parent company, paid in the form of a before-profit expense item rather than as a dividend. These expenses totaled $1.1 million for 1994 and 1995. Although FCIC did not explicitly refer to these types of expenses in its last standard reinsurance agreement with companies, we discussed these expenses with FCIC.
It agreed that to the extent the expenses exceed tangible benefits to the companies, they cannot be reasonably associated with the sale and service of crop insurance and thus should not be considered in determining an appropriate future reimbursement rate for administrative expenses. FCIC’s standard reinsurance agreement with the companies precludes them from reporting expenditures for lobbying as crop insurance delivery expenses. Despite this prohibition, we found in our sample of company transactions that the companies included a total of $418,400 for lobbying and related expenses in their expense reporting for 1994 and 1995. The vast majority of these expenses involved lobbying by crop insurance trade associations. Each company in our review paid membership fees to one or more crop insurance trade associations. Lobbying is one of the services provided to the companies by these associations. In accordance with the Internal Revenue Service’s rules, each industry trade association provided information to its members on the extent to which the payments to the association were used to fund lobbying activities. Nevertheless, none of the companies in our review excluded these expenses from their expense reports. We also identified a number of errors and/or omissions in the companies’ expense reporting. In 1994, the net effect of these errors and omissions was to reduce total company expenses by $8.4 million, whereas in 1995, the net effect was to increase total company expenses by $0.6 million. In our review of companies’ reported expenses, we identified various errors and/or omissions, including expenses reported in the wrong year, expenses reported twice, and expenses not reported at all. Also, we found that five companies erred in reporting a total of $1.8 million in state income taxes as an expense of selling and servicing crop insurance in 1994 and 1995.
State income taxes are the result of successful crop insurance delivery and are not an administrative expense associated with the sale and service of crop insurance to farmers, whether the taxes are based on underwriting gains or on profits made from the delivery itself. To the extent that the taxes are based on profits from the delivery, they are not associated with the sale and service of crop insurance because, according to FCIC, companies are expected to earn profits from underwriting—not from administrative reimbursements. To the extent that the taxes are based on underwriting gains, they should not be recognized as an expense of delivering crop insurance. Collectively, as shown in table 2.1, for the nine companies we reviewed, we found that the expenses reasonably associated with the sale and service of buyup and catastrophic crop insurance combined were 27.5 percent of total premiums for 1994 and 26.4 percent for 1995. These rates are considerably lower than the 31 percent and 31.4 percent of total premiums paid by FCIC to reimburse the companies for these sales in those years. In total for 1994 and 1995, FCIC reimbursements exceeded delivery expenses by $81 million. FCIC reviewed and agreed with our analysis and treatment of these expenses. Appendix III provides a complete listing of those expenses that do not appear to be reasonably associated with the sale and service of federal crop insurance and should not be considered in determining an appropriate future administrative expense reimbursement. Appendix III also includes our rationale for expense adjustments. Appendix IV shows the expenses for selling and servicing federal crop insurance as reported by the nine companies in our review and our presentation of the expenses reasonably associated with the sale and service of federal crop insurance. In addition, for 1995, appendix IV shows adjusted expenses as they relate to buyup and to catastrophic insurance. 
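The $81 million by which reimbursements exceeded delivery expenses follows from the two findings discussed in this chapter: the gap between reimbursements and reported expenses, and the reported expenses judged not reasonably associated with delivery. A rough reconstruction from the report's two-year totals, with illustrative variable names:

```python
# Two-year (1994-95) totals for the nine companies reviewed ($ millions, report figures)
reimbursed = 580.2                # administrative expense reimbursement paid by FCIC
reported = 542.3                  # expenses the companies reported
not_reasonably_associated = 43.0  # reported expenses GAO judged not reasonably
                                  # associated with selling and servicing crop insurance

reporting_gap = reimbursed - reported                     # about $38 million
adjusted_expenses = reported - not_reasonably_associated  # about $499 million
excess_reimbursement = reimbursed - adjusted_expenses     # about $81 million
```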
As shown in the appendix, for 1995, companies’ adjusted expenses related to buyup insurance were 27.1 percent of premiums and expenses related to catastrophic insurance were 22.2 percent of premiums. In comparison, in 1995, companies received an administrative expense reimbursement for buyup insurance of 32.6 percent of buyup premiums and compensation for catastrophic insurance of 24 percent of premiums. We also found a number of expenses reported by the nine companies that, while in categories associated with the sale and service of crop insurance, seemed to be excessive for a taxpayer-supported program and offer opportunities for FCIC to reduce future reimbursement rates. Collectively, controlling these expenses should reduce the average cost of selling and servicing crop insurance policies. These expenses included above-average commissions to agents on buyup policies; travel expenses, such as corporate aircraft and excessive automobile charges; and entertainment expenses, such as country club memberships and stadium sky box rentals. Each of these types of expenses is discussed below. In the crop insurance business, participating companies compete with each other for market share through the sales commissions paid to independent insurance agents. To this end, companies offer higher commissions to agents to attract them and their farmer clients from one company to another. When an agent switches from one company to another, the acquiring company increases market share, but there is no net benefit to the crop insurance program. On average, the nine companies in our review paid agents sales commissions of 16 percent of buyup premiums they sold in 1994 and 16.2 percent in 1995. However, one company paid more—about 18.1 percent of buyup premiums sold in 1994 and 17.5 percent in 1995.
When this company, which accounted for about 15 percent of all sales in these 2 years, is not included in the companies’ average, commission expenses for the other eight companies averaged 15.6 percent of buyup premiums in 1994 and 15.8 percent in 1995. This company paid its agents about $6 million more than the amount it would have paid had it used the average commission rate paid by the other eight companies. According to FCIC officials, the agency plans to further study the issue of appropriate agent commissions. Employee travel is an essential part of selling and servicing crop insurance. Although FCIC has not provided specific guidance on appropriate expenses for travel, government travel regulations provide guidance as to what types of expenses might be appropriate when conducting business on behalf of the government. In our review of company-reported expenses, we found instances at eight of the nine companies of expenses that seemed to be excessive for a taxpayer-supported program. For example, we found that one company in our sample for 1994 reported expenses of $8,391 to send six company managers (four accompanied by their spouses) to a 3-day meeting at a resort location. The billing from the resort included rooms at $323 per night, $405 in golf green fees, $139 in charges at a golf pro shop, and numerous restaurant and bar charges. Our sample for 1995 included a $31,483 billing from the same resort for lodging and other costs associated with a company “retreat” costing a total of $46,857. Furthermore, we found that in one instance, as part of paying for employees to attend industry meetings at resort locations, a company paid for golf tournament entry fees, tickets to an amusement park, spouse travel, child care, and pet care. The company reported these as delivery expenses for crop insurance. Moreover, our samples of travel expenditures revealed instances of charges that appeared to involve the purchase of items not related to business.
For example, at one company, our sample included charges to the company corporate charge card of $107 at a department store, $175 at a clothing store, $165 at a country club gift shop, $364 at a book and record shop, $41 at an airport gift shop, $209 at a resort gift shop, $208 at a hotel gift shop, and $928 from a cruise line. We found similar examples at five other companies. Some companies incurred expenses associated with maintaining their own travel fleet. For example, one company owned a corporate jet and another leased an aircraft. Both employed full-time pilots. Subsequent to the years involved in our review, both companies decided it would be more cost-effective to rely more heavily on commercial flights instead of owned or leased aircraft. The companies we reviewed varied widely with respect to furnishing automobiles—from providing only a few pool automobiles, to providing automobiles for a few officials, to providing automobiles for up to 45 percent of company employees. The types of vehicles also varied from luxury and sport utility to standard and economy. FCIC’s guidelines do not tell companies how they must spend their administrative expense reimbursement. However, in our opinion, if the current reimbursement provides companies with the opportunities to travel as described above, FCIC may be able to reduce its reimbursement rate and still reimburse companies for the reasonable expenses of selling and servicing crop insurance to farmers. Recruiting new employees and maintaining employee morale are reasonable business expenses. However, our review of company expenses showed that some companies’ entertainment expenditures appeared excessive for selling and servicing crop insurance to farmers. For example, one company spent about $44,000 in 1994 for Canadian fishing trips for a group of company employees and agents. It also spent about $18,000 to rent and furnish a sky box at a baseball stadium.
Company officials said the expenditures were necessary to attract agents to the company. These expenditures were reported as travel expenses in 1994 and as advertising expenses in 1995. Moreover, the company’s 1995 travel expenses included $22,000 for a trip to Las Vegas for several company employees and agents. Similarly, our sample of company expenditures disclosed payment for season tickets to various professional sports events at two other companies; and six companies paid for country club memberships and related charges for various company officials and reported these as expenses to sell and service crop insurance. Companies also purchased promotional items as gifts for agents and employees. For example, our 1994 sample of expenditures at one company included $17,514 paid for 1,375 boxes of chocolates and $8,242 paid to purchase 2,000 cookbooks as gifts to agents and employees. While a number of the companies believe the type of expenses described above are important to maintaining an effective sales force and supporting their companies’ mission, we believe that most of these expenses appear to be excessive for a taxpayer-supported program. These entertainment expenses may be helpful in competing for agents, but it is not clear how these types of expenses directly benefit either the farmer or the government in the delivery of crop insurance to farmers. We did not exclude the above items from our determination of necessary delivery expenses because they were in categories that appear to be associated with crop insurance delivery. But FCIC agreed that these types of expenses may be excessive for a government-sponsored program like federal crop insurance. Several emerging factors affecting the crop insurance program have increased companies’ revenues or may decrease companies’ expenses. 
These factors include the following: higher crop prices and higher premium rates in 1996 and 1997 that resulted in higher premium income; expanded use of new types of revenue guarantee coverage (such as crop revenue coverage) that, for a higher premium, protects farmers against price drops between planting and harvest; and continuing simplification of program administrative requirements, potentially resulting in reduced company expenses. Higher crop prices and higher premium rates could enable FCIC to reduce the administrative expense reimbursement by about 3 percent of buyup premiums below the adjusted expense level determined in our analysis of companies’ 1994-95 expenses without diminishing service to farmers. New types of revenue guarantee coverage as well as simplification actions could serve to increase companies’ revenues or decrease companies’ expenses even further in the future. Each of these factors is discussed below. Two factors affecting the premiums paid by farmers have improved the income potential of crop insurance companies over the levels achieved in 1994 and 1995. These two factors are the (1) FCIC-projected market price of the commodity to be insured and (2) premium rate established by FCIC. When projected market prices and premium rates increase, the premiums that farmers pay increase. When the premiums that farmers pay increase, reimbursements to companies—which are currently paid on the basis of a percentage of premiums—increase proportionately without a proportionate increase in workload for the companies. As shown in table 2.2, the projected market price FCIC used in establishing crop insurance premiums for six major crops increased 9.2 percent from 1995 to 1997, after the 1994-95 period we reviewed. Furthermore, to improve the actuarial soundness of the program, FCIC has increased the basic premium rates that are the other principal component of the crop insurance premiums. 
From 1995 to 1996, basic premium rates for buyup insurance increased 3.6 percent, on average. FCIC projects premium rates to further increase in 1997. The increase in premium rates combined with the increase in crop prices resulted in an overall increase in premiums of about 13 percent. This increase occurred after the period we studied. As a result of this increase in premiums, companies will receive a proportionate increase in their administrative expense reimbursement, about 3 percent of premiums, unless FCIC reduces the reimbursement rate. The additional 3 percent of premiums—the 13-percent increase in premiums multiplied by the 27.1 percent of premiums that we determined represents companies’ expenses reasonably associated with the sale and service of buyup crop insurance in 1995—is in effect an unanticipated bonus to the companies and does not represent additional work for them. This means that FCIC, at current crop price and premium rates, could reduce the administrative reimbursement for buyup insurance by about 3 percentage points and still reimburse companies for the reasonable expenses associated with selling and servicing crop insurance. Conversely, if premiums decline, the companies would receive a proportionate decrease in their expense reimbursement. The increase in the companies’ reimbursement resulting from the higher premiums that have occurred since 1995 will not be accompanied by a proportionate increase in the companies’ workload. Company administrative work processes remain essentially the same regardless of the premium charged. For example, the cost of data entry and transmission is a function of the number of documents and data elements processed and transmitted, not the premiums those documents represent. Similarly, the cost of loss adjustment is a function of the frequency and nature of crop loss, not the premiums charged on the damaged crops. 
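The arithmetic behind this roughly 3-percentage-point reduction can be sketched in a few lines of Python. The figures (the 27.1-percent expense share and the 13-percent premium increase) come from the discussion above; the script itself is purely illustrative.

```python
# Illustrative sketch of the reimbursement arithmetic described above.
# Companies' reasonable delivery expenses were about 27.1 percent of
# 1995 buyup premiums, and premiums rose about 13 percent by 1997
# (roughly consistent with compounding the 9.2-percent rise in crop
# prices and the 3.6-percent rise in premium rates: 1.092 * 1.036 ~ 1.13).

expense_share_1995 = 0.271  # reasonable expenses as a share of 1995 premiums
premium_growth = 0.13       # 1995 -> 1997 growth in premiums

# Delivery expenses track workload (documents processed, claims adjusted),
# not premium levels, so the dollar amount of expenses stays roughly flat.
# Expressed as a share of the new, higher premiums:
required_rate = expense_share_1995 / (1 + premium_growth)
reduction_points = (expense_share_1995 - required_rate) * 100

print(f"rate needed on post-increase premiums: {required_rate:.1%}")    # about 24.0%
print(f"possible reduction: {reduction_points:.1f} percentage points")  # about 3.1
```

Either framing (the report's 13-percent premium increase multiplied by the 27.1-percent expense share, or the division above) lands near 3 percentage points, consistent with a rate in the range of 24 percent.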
Thus, as premiums increase, the companies receive windfall increases in their income unless the reimbursement percentage is reduced. A second factor that may improve the companies’ income potential is the introduction of a more expensive form of crop insurance. In 1996, FCIC approved a privately developed revenue guarantee crop insurance policy on a pilot basis in seven states. In January 1997, FCIC’s board of directors authorized the expansion of this program to additional crops and states. The revenue guarantee policy protects producers against a decline in the value of the insured crop. The decline in value could occur because of crop loss, as with traditional crop insurance policies, or it could result from decline in commodity prices, or some combination of the two. Because of the increased risk borne by the revenue guarantee program, premiums are considerably higher than those charged for conventional crop insurance. Thus, because the companies’ reimbursement is based on a percentage of total premiums, they will receive higher reimbursements without a commensurate increase in workload. A recent FCIC proposal addresses the potentially high administrative reimbursement associated with this product by limiting the administrative reimbursement for the price-risk aspect of the program. A third emerging factor affecting the crop insurance program may aid the companies in reducing their administrative expenses. As part of implementing the 1994 crop insurance reform act, FCIC and the crop insurance industry jointly studied potential procedural changes that could result in simplifying or streamlining program delivery processes. As of January 1997, FCIC had completed action on 26 simplification projects identified by the study group and was continuing to study 11 additional potential changes. 
Simplification projects FCIC has implemented include restructuring actuarial documents, thereby reducing printed pages by providing actuarial documents electronically; simplifying processing of small claims; authorizing companies to correct obvious and incidental errors directly; integrating various options and endorsements into crop insurance policies; and implementing a single insurance policy format for most crops. Neither FCIC nor the companies could precisely quantify the amount of savings that can be expected from these changes, but they agreed that the changes were necessary and collectively may reduce costs somewhat. Industry representatives stressed that FCIC should continue to simplify the delivery procedures. FCIC officials agreed but noted that any changes must be carefully analyzed on the basis of their impact on the actuarial soundness of the crop insurance program. Appendix I provides a more detailed discussion of these changes and their potential effects. On the basis of our review of companies’ reported expenses and emerging factors in the crop insurance industry, we believe that the current expense reimbursement rate paid to participating companies exceeds the reasonable expenses associated with selling and servicing crop insurance. Our review showed that for 1994 and 1995, the actual expenses reasonably associated with the sale and service of buyup crop insurance for the nine companies in our review were about 27 percent of premiums—4 percentage points below the 31-percent base reimbursement rate paid to companies—and that FCIC could reduce rates another 3 percent of premiums because of higher crop prices and increased premiums in 1996 and 1997 that provided companies with higher reimbursements without any additional work. This would still provide participating companies with adequate reimbursement for the reasonable expenses associated with selling and servicing crop insurance.
The 1994 reform act directs FCIC to reduce the overall reimbursement for buyup insurance to no more than 27.5 percent of total premiums in 1999. However, we believe that the administrative reimbursement rate can be reduced to a lower level at the current time—in the range of 24 percent. Our analysis also showed that the compensation for catastrophic insurance exceeded the companies’ expenses that can be reasonably associated with selling and servicing catastrophic insurance, although to a lesser extent. We recommend that the Secretary of Agriculture direct the Administrator of the Risk Management Agency to determine the administrative expense reimbursement rate that reflects the appropriate and reasonable costs of selling and servicing traditional buyup insurance and include this rate in the new agreement currently being developed with the companies; determine the compensation that reflects the appropriate and reasonable costs of selling and servicing catastrophic crop insurance and include it in the new agreement currently being developed with the companies; explicitly convey to participating insurance companies the type of expenses that the administrative reimbursement is intended to cover; and monitor companies’ expenses to ensure that the established rate is reasonable for the services provided. Overall, USDA’s Risk Management Agency agreed with the information presented in the draft report and its conclusions and recommendations. In its proposed 1998 standard reinsurance agreement with the private insurance companies, FCIC has included changes to the expense reimbursement rate for delivering both buyup and catastrophic insurance. Additionally, in this proposed agreement, FCIC has clarified the types of expenses that the administrative reimbursement is intended to cover, and it plans to monitor companies’ expenses in the future as a result of our review. 
USDA’s Risk Management Agency also examined the methodology used to conduct the review and found no fault in it. In responding to our report, the industry raised questions about the methodology we used in our analysis of companies’ reasonable delivery expenses, including (1) the time period we examined; (2) the standards we used to judge allowability of expenses; and (3) the applicability of emerging factors, such as increased premiums and higher crop prices. In addition, without being specific, the industry stated that a lower reimbursement rate—in the range of 24 percent—would “destabilize” the industry. With respect to the time period examined, we selected 1994 and 1995 to provide a picture of expenses for delivering crop insurance before and after the implementation of the 1994 reform act. Furthermore, these were the first 2 years that the industry consistently provided the detailed data in a format needed to fully analyze the expenses associated with the selling and servicing of crop insurance. The industry stated that we understated administrative expenses by using 2 years in which crop losses were relatively low. We disagree. Crop losses for buyup coverage in 1995 were equal to or higher than crop loss experiences throughout the 1990s, except for 1993. Furthermore, we found that high crop losses did not significantly increase companies’ loss-adjusting expenses—the delivery cost factor most likely to be affected by high crop losses. For example, for buyup insurance, while companies paid out $1.28 in loss claims for every dollar of premium received in 1995 and $0.58 in loss claims for every dollar of premium received in 1994, their related loss adjusting expenses as a percent of premium for these 2 years were not substantially different. Therefore, although losses were higher in 1995 than in 1994, the companies’ loss adjusting expenses for processing these claims did not increase commensurately. 
In addition, loss adjusting expenses are not a significant portion of total administrative expenses (about 3.5 percent of premiums on average for the nine companies we reviewed). Furthermore, since the 1980s, the crop insurance companies have received additional reimbursements in years of high crop losses. Second, the standards we used to identify reasonable costs for delivering crop insurance were developed on the basis of a number of different widely recognized accounting, insurance, and acquisition standards. FCIC agreed that the standards used were appropriate. We recognized all expenses reasonably associated with selling and servicing crop insurance. However, we continue to believe that the government should not be expected to reimburse companies for such expenses as those related to maximizing underwriting gains, acquiring other companies’ business, payments to executives to refrain from joining or starting other companies, payments to parent companies with no measurable benefits to the program, profit-sharing bonuses, and payments to lobbyists. We believe that these expenses should not be included in determining an appropriate future reimbursement rate for administrative expenses. Third, two factors that have emerged since the 1994-95 time period that we reviewed—higher premium rates and higher crop prices in 1996 and 1997—should be considered in evaluating the appropriate, future reasonable reimbursement rate because these factors did increase companies’ revenues without increasing expenses. Furthermore, USDA projects that crop prices will generally increase through 2005. If crop prices decline, FCIC could reevaluate the reimbursement rate. Finally, we disagree that a lower reimbursement rate—in the range of 24 percent—would destabilize the industry. 
Such a rate represents the companies’ current expenses that are reasonably associated with the sale and service of crop insurance and as a result should not diminish service to the farmer nor destabilize the program. Companies will still have the opportunity to realize underwriting profits. In 1994 and 1995, for example, the companies realized underwriting profits of $103 million and $133 million, respectively. (See apps. VIII and IX for the industry’s comments and our detailed response.) In 1995, farmers without crop insurance were required to purchase catastrophic risk protection insurance to participate in federal farm programs—a requirement that was rescinded in 1996. Farmers could purchase catastrophic insurance either from USDA’s FSA local offices or from an authorized local insurance agent. In 1995, it was more costly for the government to deliver catastrophic insurance through private companies than through USDA. When basic delivery costs were offset by income from farmer-paid processing fees, the costs to the government for selling and servicing catastrophic insurance in 1995 were comparable for both USDA and private companies. However, delivery through private companies was more costly to the government because the companies retained an estimated $45 million underwriting gain. In 1995, FCIC’s long-term target for underwriting gain was 7 percent on the premiums for which the companies retained risk. However, in 1995, the underwriting gain paid by FCIC to the companies was about 37 percent. FCIC is currently studying the issue of an appropriate long-term rate of return for companies participating in the program. Legislation passed in 1996 requires USDA to move delivery of catastrophic insurance solely to private companies, where feasible. In 1995, the total cost to the government to deliver catastrophic insurance was less when provided through USDA than through private companies. 
The total cost to the government to deliver catastrophic insurance consists of three components: (1) basic sales and service delivery costs, (2) offsetting income from processing fees paid by farmers, and (3) company-earned underwriting gains. When only the first and second components were considered, the costs to the government for both delivery systems were comparable. However, the payment of an underwriting gain to companies, the third component, made the total cost of company delivery more expensive to the government. With respect to the first component—the costs of basic sales and service delivery—the cost to the government was higher when provided through USDA. The costs of basic sales and service for USDA’s delivery included expenses associated with activities such as selling and processing policies, developing computer software, training adjusters, and adjusting claims. This cost also included indirect or overhead costs such as general administration, rent, and utilities. Included in the 1995 direct and indirect costs for USDA delivery was the Department’s one-time start-up costs for establishing the USDA delivery system. Direct costs for basic delivery through USDA amounted to about $91 per crop policy, and indirect costs amounted to about $42 per crop policy, for a total basic delivery cost to the government of about $133 per crop policy. Appendix V provides more detail on the components of total government costs to deliver catastrophic insurance through USDA and insurance companies. The basic delivery cost for company delivery consists of the administrative expense reimbursement paid to companies by FCIC and the cost of administrative support provided by USDA. The administrative expense reimbursement amounted to about $73 per crop policy, and USDA’s support costs amounted to about $10 per crop policy, for a total basic delivery cost to the government for company delivery of about $83 per crop policy. 
The second component—offsetting income from farmer-paid processing fees—reduced the basic delivery cost to the government for both delivery systems, but had a much larger impact in reducing the cost to the government for the USDA delivery system. In 1995, farmers buying catastrophic insurance were required to pay a $50 processing fee for each crop they insured, up to certain limits. For USDA’s delivery, processing fees paid by farmers reduced the government’s basic delivery cost of about $133 by an average of $53 per crop policy. For company delivery, fees paid by farmers and remitted to the government reduced the government’s basic delivery cost of about $83 by $7 per crop policy. For company delivery, the effect on the cost to the government was relatively small because the 1994 reform act authorized the companies to retain the fees they collected from farmers up to certain limits. Only those fees that exceeded these limits were remitted back to the government. Combining the basic sales and service delivery costs and the offsetting income from farmer-paid processing fees, the government’s costs were comparable for both delivery systems. The third component—underwriting gains paid by FCIC only to the companies—is the element that made delivery through USDA less expensive. The insurance companies can earn underwriting gains in exchange for taking responsibility for any claims resulting from those policies for which the companies retain risk. In 1995, companies earned an underwriting gain of an estimated $45 million, or about a 37-percent return on the catastrophic premiums for which they retained risk. This underwriting gain increased the government’s delivery cost for company delivery by $127 per crop policy. Underwriting gains are, of course, not guaranteed. 
In years with a high incidence of catastrophic losses, companies could experience net underwriting losses, meaning that they would have to pay out money from their reserves in excess of the premiums paid to them by the government, potentially reducing the government’s total cost of company delivery in such years. Table 3.1 summarizes the three components of the government’s cost to deliver catastrophic insurance through USDA and companies in dollars per crop policy for 1995. The table shows that, overall, the government’s cost for delivering catastrophic insurance through USDA was about $124 less per crop policy than the delivery cost through companies in 1995. The 1995 catastrophic underwriting gain of about 37 percent was the critical component in the difference in comparative costs between USDA and company delivery. This gain was substantially higher than FCIC’s established long-term target of 7 percent for underwriting gains on the catastrophic premiums for which the companies retain risk. According to FCIC’s Senior Actuary, the large underwriting gain in 1995 may have been unusual. However, the program’s experience in 1996 suggests that the large underwriting gain in 1995 may not be that unusual; 1996 underwriting gains were even higher—about $58 million. FCIC is currently studying the issue of an appropriate long-term rate of return for companies participating in the program. Beginning with crops harvested in 1997, the Federal Agriculture Improvement and Reform Act of 1996 requires that USDA’s delivery of catastrophic insurance be transferred to private companies in areas where there are sufficient private company providers. In July 1996, the Secretary of Agriculture, after consultation with approved insurance providers, identified 14 states in which USDA would no longer deliver catastrophic policies. 
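The per-policy comparison summarized in table 3.1 can be reproduced from the rounded figures quoted above. Note that the rounded inputs yield a difference of $123 rather than the report's $124, which reflects unrounded underlying data; a minimal sketch:

```python
# 1995 cost to the government per crop policy for catastrophic insurance,
# using the rounded per-policy figures quoted in the text.

# USDA delivery: direct + indirect basic delivery cost, less fee income.
usda_basic = 91 + 42        # about $133 per crop policy
usda_fee_offset = 53        # farmer-paid processing fees kept by the government
usda_net = usda_basic - usda_fee_offset

# Company delivery: expense reimbursement + USDA support, less remitted
# fees, plus the underwriting gain paid to companies.
company_basic = 73 + 10     # about $83 per crop policy
company_fee_offset = 7      # companies kept most fees; only the excess was remitted
underwriting_gain = 127     # estimated 1995 gain per crop policy
company_net = company_basic - company_fee_offset + underwriting_gain

print(f"USDA net cost per policy:    ${usda_net}")                # $80
print(f"Company net cost per policy: ${company_net}")             # $203
print(f"Difference (company - USDA): ${company_net - usda_net}")  # $123
```

Before the underwriting gain is added, the two net costs ($80 versus $76 per crop policy) are comparable, which is the report's point: the underwriting gain, not basic delivery cost, drove the difference between the two delivery systems.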
Effective for the 1997 crop year, catastrophic policyholders in these 14 states who purchased catastrophic coverage from USDA were either to select a local private company or be assigned by USDA to a local private company. The 14 states are Arizona, Colorado, Illinois, Indiana, Iowa, Kansas, Minnesota, Montana, Nebraska, North Carolina, North Dakota, South Dakota, Washington, and Wyoming. According to the American Association of Crop Insurers, crop insurance industry executives unanimously support securing the remaining 36 states for private delivery, beginning with crops harvested in 1998. According to the Federal Agriculture Improvement and Reform Act of 1996, the Secretary of Agriculture must make the announcement for any additional states where USDA delivery is to be phased out by April 30 of the year preceding the year in which the applicable crops will be harvested. If only 1995 is considered, the delivery of catastrophic insurance through USDA is less expensive to the government than through companies because of the underwriting gains companies earned. These gains, 37 percent of catastrophic premiums on which the companies retained risk, were far higher than FCIC’s long-term target gain of 7 percent. Over time, gains and losses may offset each other, and the target gain may be realized. However, if underwriting gains do not become more commensurate with FCIC’s target gain, the potential for high government costs and high company profits will continue. FCIC is aware of this situation and is currently studying the issue of an appropriate long-term rate of return for companies participating in the program. Furthermore, this issue of potentially high costs and high profits takes on added importance because of the requirements of the Federal Agriculture Improvement and Reform Act of 1996. This act requires USDA to transfer its delivery of catastrophic insurance to private companies in areas where there are sufficient private company providers. 
We recommend that the Secretary of Agriculture direct the Administrator of the Risk Management Agency to closely monitor the experience of the catastrophic insurance program to ensure that over time the underwriting gains earned on catastrophic insurance by the companies do not routinely exceed FCIC’s long-term target. FCIC agreed with our conclusions and recommendation and has already changed the proposed 1998 standard reinsurance agreement to ensure that underwriting gains on catastrophic insurance will be more closely in line with its long-term target. The industry, however, questioned our methodology for comparing the cost to the government of the USDA and company delivery systems. Specifically, it stated that the processing fees paid by farmers and the underwriting gains paid to companies should not be considered in analyzing the costs to the government for catastrophic insurance delivery. It also suggested that restricting our analysis to 1995 provided a distorted picture of underwriting gains because it only represented 1 year’s experience. It further stated that our analysis did not take into account that, in its view, the quality of service provided to farmers by the companies was much higher than that provided by USDA. We disagree that an analysis of the comparative costs to the government of company- and USDA-delivered catastrophic insurance should exclude the processing fee and underwriting gains components. In computing the overall costs to the government, all revenue and payment components have to be considered. With respect to the industry’s concern about our period of analysis, 1995 was the only year in which a comparative assessment could be made at the time we conducted our review because it was the only year in which both USDA and the companies were delivering catastrophic insurance. Since then, however, we note that underwriting gains paid to the companies in 1996 exceeded those paid in 1995. 
This would serve to make the cost to the government for company-delivered catastrophic insurance even higher. With respect to the issue of comparative service quality, we did not make this a principal focus of our review. However, during the course of our work, we found little to suggest that the service provided by companies or USDA was less than satisfactory. The industry’s comments also indicate that it believes our conclusions might mislead public policymakers by implying that delivery of catastrophic insurance by private industry should be reduced. We do not believe that this is the case. We did not conclude or recommend that the industry should have its role in catastrophic insurance delivery reduced. We do hold the view, however, that the level of underwriting gain paid to the companies should be managed so that it more closely follows FCIC’s target. The current method for reimbursing administrative expenses for buyup insurance—whereby FCIC pays private companies a fixed percentage of premiums—has certain advantages, including ease of administration. However, expense reimbursement based on a percentage of premiums does not necessarily reflect the amount of work or cost involved to sell and service crop insurance policies. We identified four alternative reimbursement arrangements that offer the potential to reduce FCIC’s reimbursements and to more closely match reimbursements with expenses. Each has advantages and disadvantages. Industry leaders prefer FCIC’s current reimbursement method because it is relatively simple to administer and because they believe that most alternatives could reduce their reimbursements. 
Through our discussions with FCIC and crop insurance industry officials, we identified the following four alternatives to the current expense reimbursement method that offer potential cost savings to the government and may more closely match FCIC’s reimbursements with companies’ expenses: (1) place a cap on the amount reimbursed per policy; (2) reimburse companies a flat fee per policy, plus a reduced percentage of premiums; (3) reimburse companies according to a schedule of allowable expenses; and (4) reduce reimbursement rates as companies’ total premium volume increases. Currently, FCIC calculates administrative expense reimbursements by multiplying companies’ total written premiums by a set reimbursement percentage, regardless of the expenses incurred by the company to sell and service crop insurance. Table 4.1 shows the 1995 distribution of premiums and reimbursements for certain buyup policies for all participating companies. Under the current reimbursement arrangement, as policy premiums increase, the companies’ reimbursement from FCIC for administering the policies increases. However, the workload, or cost, associated with administering the policy generally does not increase proportionately. Therefore, for policies with the highest premiums, there may be a large differential between FCIC’s reimbursement and the costs incurred to administer those policies. For example, in 1995, the largest 3 percent of the policies received about one-third of the total reimbursement. In fact, the five largest policies in 1995 had reimbursements ranging from about $118,000 to $472,000. FCIC could reduce its total expense reimbursements to companies by capping, or placing a limit on, the amount it reimburses companies for the sale and service of crop insurance policies. For each crop insurance policy written, an insurance company must perform some minimum level of work, regardless of the premium. The company, usually through an agent, must obtain, record, and process certain basic policy information.
The company performs additional work that varies, generally depending on the size of the farm and value of the crops insured. A larger farm may require more time to measure and inspect the component fields and more contacts with the farmer. This alternative is designed to recognize both the fixed and variable aspects of selling and servicing crop insurance policies. For example, FCIC could reimburse companies a fixed amount (such as $100) for each policy written to pay for the fixed expense associated with each policy. In addition, FCIC could pay a percentage of premiums to compensate companies for the variable expenses associated with the size and value of a policy. Administrative expense reimbursements could be tied to the cost of performing specific services that benefit the crop insurance program. For example, most government contractors are paid on the basis of the Federal Acquisition Regulation (FAR), which establishes a schedule of allowable expenses. Using the FAR, a contractor providing goods and services to the federal government submits a bill that is audited against a schedule of allowable expenses, and subsequently, the government pays an adjusted amount to the contractor, if appropriate. Using this approach, the amount paid would include only reimbursement for allowed expenses. FCIC could limit the overall reimbursement rate and limit the reimbursement rate for specific components, such as commissions, data processing, and travel. Companies could also be required to follow federal guidelines to reimburse employees or contractors for any travel. Assuming companies can realize some economies of scale for certain cost items, FCIC could reduce the reimbursement rates for individual companies as their written premium volumes increase. For example, some expenses, such as underwriting and overhead, are largely fixed, reflecting investments in equipment and facilities, annual training, and state licenses and fees.
These types of fixed expenses decrease as a percent of total premiums written as premium volume increases. Currently, FCIC pays the same percent of written premiums to participating companies regardless of the companies’ size of operation or premium amount written. Under this alternative, FCIC would reimburse companies on a sliding scale based on premium volume. We found that all four alternatives have the potential to reduce FCIC’s reimbursement for administrative expenses. Each alternative, however, has advantages and disadvantages compared with the current reimbursement arrangement. For example, some alternatives have the advantage of possibly encouraging smaller companies to participate in the program. On the other hand, some alternatives have the potential disadvantage of increasing the administrative burden on FCIC or decreasing incentives for participating companies to deliver crop insurance. The potential advantages and disadvantages of each alternative are discussed below. Under the first alternative, a cap on the reimbursement per policy, FCIC could realize the largest amount of administrative reimbursement savings while affecting only a relatively small percentage of policies. This alternative would eliminate high reimbursement payments for large or high-premium policies. To illustrate potential cost savings under this alternative, we capped the administrative expense reimbursements on individual policies at three different levels—$1,550, $3,100, and $6,200—affecting about 9, 3, and 1 percent, respectively, of policies in 1995. Potential savings generated from this alternative would depend on the level at which the cap was established, as shown in table 4.2. As shown in the table, a $3,100 cap would have generated savings of $40.3 million while reimbursing companies at the 31-percent reimbursement level for more than 95 percent of the policies written in 1995. Only about 3 percent of all policies written would have been affected by using a $3,100 cap on reimbursements.
Decreasing the cap to $1,550 would have provided savings to the government of about $74 million while limiting reimbursements on less than 10 percent of the policies written in 1995. Although offering the potential for significant cost savings, this alternative has the disadvantage of possibly discouraging some companies from aggressively marketing larger crop insurance policies for FCIC. This alternative offers a potential for cost savings that is somewhat less than capping reimbursements at $1,550 per policy, but it may encourage companies to sell small-premium policies. To illustrate the potential for cost savings, we selected three different reimbursement combinations. As shown in table 4.3, if FCIC reimbursed companies a fixed $100 reimbursement per policy plus 17.5 percent of the premiums, the overall average reimbursement rate would be 22.8 percent. Compared with the 1995 reimbursement method, this approach would produce a savings of 8.2 percent of premiums, or about $67 million, from the 31 percent reimbursement rate. Table 4.3 also illustrates other reimbursement combinations. Because one component of the reimbursement would be a flat fee regardless of premium size, reimbursements for small, or low-premium, policies under this alternative may exceed reimbursements for these kinds of policies under the current system. This may encourage sales and service to smaller farmers, a goal advanced by some crop insurance observers. This alternative has the further advantage of more closely matching FCIC’s reimbursement to the administrative workload of the companies and their agents. Finally, unlike the previous alternative that capped reimbursements, reimbursements under this alternative would still be linked in part to premiums. Therefore, companies will continue to have an incentive to sell higher coverage. 
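The per-policy formulas behind these first two alternatives can be sketched in a few lines. The 31-percent rate, the $3,100 cap, and the $100-plus-17.5-percent combination come from the report; the sample premiums below are hypothetical, chosen only to show how the schemes diverge as policy size grows.

```python
# Sketch of the per-policy reimbursement formulas described above.
# Rates and caps are taken from the report; the sample premiums
# are hypothetical illustrations, not actual 1995 policy data.

RATE = 0.31  # 1995 administrative expense reimbursement rate


def current(premium):
    """Current arrangement: a flat percentage of the policy premium."""
    return RATE * premium


def capped(premium, cap=3_100):
    """Alternative 1: percentage of premium, capped per policy."""
    return min(RATE * premium, cap)


def flat_plus_pct(premium, flat=100, pct=0.175):
    """Alternative 2: fixed amount per policy plus a lower percentage."""
    return flat + pct * premium


for premium in (500, 5_000, 50_000):  # hypothetical policy premiums
    print(premium, current(premium), capped(premium), flat_plus_pct(premium))
```

Note that for the small hypothetical premium, the flat-fee combination pays more than the current 31-percent rate, consistent with the report's observation that this alternative may encourage sales to smaller farmers, while large policies are reimbursed well below the current rate.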
This alternative has the disadvantage of requiring FCIC to more closely monitor companies to ensure they do not generate additional policies solely to increase their revenue. This alternative would offer FCIC the opportunity to better control the expenses to be reimbursed by paying participating companies according to a schedule of allowable expenses for performing specific services, such as selling and writing a policy, processing a policy, and adjusting claims. Companies could be required to reimburse employees or contractors for any travel according to federal reimbursement guidelines for travel. Savings under this alternative would depend upon the rates agreed to by FCIC and the companies. In addition, this alternative could provide participating companies with additional protection during years with high crop losses by reimbursing them for the actual loss-adjusting expenses they incur. A major disadvantage of this alternative is that FCIC would need to increase its oversight of participating companies’ financial operations. FCIC would need to draft and approve additional regulations, audit expense vouchers against a schedule of allowable expenses, and require participating companies to follow additional regulations. This alternative offers the advantage of potential cost savings and may encourage smaller companies’ participation in the program. Some industry observers have expressed concern about the decline in the number of participating companies—from 49 in 1985 to 19 in 1995. For this reimbursement alternative, companies could be reimbursed at a higher rate for their first level of business and at a reduced rate at higher premium levels. 
To illustrate, we calculated results using declining reimbursement rates for premium levels of $20 million and below; over $20 to $50 million; over $50 to $100 million; and over $100 million. Table 4.4 shows the results of our analysis. At the indicated premium levels, in 1995, this alternative had the potential to save the government about $20.4 million in administrative expense reimbursements while having minimal or no impact on participating companies. Of the 19 participating companies, 10 wrote total premiums of $20 million or less, and therefore this alternative would have had no effect on the amount of reimbursements paid to these 10 companies. Only 3 of the 19 companies wrote premiums in excess of $100 million. Compared with the current system, this alternative would have the effect of favoring smaller companies over larger companies. To the extent that smaller or nonparticipating companies perceive that larger companies do not have a competitive advantage based on the size of operations, they may see increased opportunities to stay in or enter the industry. This outcome would be viewed as an advantage by those who want to see an increase in the number of participating firms. A disadvantage of this alternative is that it could discourage some larger companies from aggressively delivering crop insurance for FCIC. Furthermore, to the extent that selling and servicing crop insurance policies are subject to economies of scale, such economies may not be achieved if companies do not expand their operations. According to crop insurance industry officials, participating companies generally prefer the current reimbursement arrangement because they believe that most alternatives would reduce their reimbursements and increase their administrative workload. Officials at some participating companies also said that alternative arrangements would reduce their incentives to deliver federal crop insurance if their overall revenues from reimbursements were reduced. 
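The sliding-scale alternative illustrated in table 4.4 works like marginal tax brackets applied to a company's total written premium. The $20 million, $50 million, and $100 million breakpoints come from the report; the declining rates below are hypothetical, since the report presents only the resulting savings.

```python
# Sketch of the sliding-scale alternative: the reimbursement rate
# declines as a company's total written premium grows, applied
# marginally by bracket. Breakpoints come from the report; the
# rates themselves are hypothetical placeholders.

BRACKETS = [  # (upper bound of bracket, hypothetical rate)
    (20_000_000, 0.31),
    (50_000_000, 0.29),
    (100_000_000, 0.27),
    (float("inf"), 0.25),
]


def sliding_scale(total_premium):
    """Reimbursement with declining marginal rates by premium volume."""
    reimbursed, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if total_premium <= lower:
            break
        reimbursed += rate * (min(total_premium, upper) - lower)
        lower = upper
    return reimbursed


# A company writing $20 million or less is unaffected (still 31 percent),
# consistent with the report's finding for 10 of the 19 companies.
print(sliding_scale(20_000_000))
print(sliding_scale(150_000_000))  # blended rate below 31 percent
```

Because the reduced rates apply only to premium above each breakpoint, small companies keep the full 31-percent rate while the largest writers see a lower blended rate, which is the mechanism behind the roughly $20.4 million in savings cited above.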
Several company officials also stated that any reduced administrative reimbursements would increase the need for FCIC to provide additional opportunities for underwriting gains. In addition to continuing the current reimbursement arrangement, participating companies want FCIC to simplify administrative requirements. They believe some of the existing requirements are needlessly costly and unnecessary to ensure the integrity of the program. Appendix I provides more information about FCIC’s efforts to simplify crop insurance program administration. USDA’s Risk Management Agency concurred with our draft report’s treatment of alternative reimbursement arrangements. In its 1998 standard reinsurance agreement, FCIC has proposed using the second alternative—having the government pay a flat amount per policy and a percentage of premiums. The crop insurance industry stated that we made recommendations to make major changes to the reimbursement system and that these changes would most likely, among other things, greatly undermine agents’ compensation. We did not recommend one alternative over another or over the current system but instead described the arguments for and against the major alternatives that we identified. In so doing, we were complying with the 1994 mandate. Furthermore, throughout our report and in this chapter, we focused on the effects on companies, not on the agents. Companies may compensate their agents in ways that they consider appropriate, regardless of the companies’ arrangement with the government. (See apps. VIII and IX.)
Pursuant to a legislative requirement, GAO reviewed the financial arrangements between the Federal Crop Insurance Corporation (FCIC) and participating insurance companies for delivering crop insurance to qualified producers, focusing on the: (1) adequacy of the current administrative reimbursement rate for expenses of participating crop insurance companies; (2) comparative cost to the government in 1995 of private companies' and the Department of Agriculture's (USDA) delivery of catastrophic insurance; and (3) advantages and disadvantages of different expense reimbursement alternatives. GAO noted that: (1) in 1994 and 1995, the government's administrative expense reimbursement to insurance companies was greater than the companies' expenses to sell and service federal crop insurance; (2) for the 2-year period, companies reported expenses that were less than the reimbursements paid to them by FCIC; (3) furthermore, GAO found that some of these reported expenses did not appear to be reasonably associated with the sale and service of federal crop insurance and accordingly should not be considered in determining an appropriate future reimbursement rate for administrative expenses; (4) in addition, even within the expense categories reasonably associated with the sale and service of crop insurance, GAO found expenses that appeared excessive for reimbursement under a taxpayer-supported program suggesting an opportunity to further reduce future reimbursement rates; (5) these expenses included agents' commissions that exceeded the industry average, unnecessary travel-related expenses, and questionable entertainment activities; (6) finally, higher premiums in the crop insurance program have had the effect of increasing the government's reimbursement to companies for the time period GAO examined; (7) at the same time, companies' expenses associated with crop insurance sales and service could decrease as FCIC reduces the administrative requirements with which the companies 
must comply; (8) combined, all these factors indicate that FCIC could lower the reimbursement rate and still amply cover companies’ reasonable expenses for selling and servicing federal crop insurance policies; (9) in 1995, the government’s costs to deliver catastrophic insurance were higher through private companies than through USDA; (10) although the basic costs associated with selling and servicing catastrophic crop insurance through USDA and private companies were comparable, delivery through USDA avoids paying an underwriting gain to companies in years when there is a low incidence of catastrophic loss claims; (11) in 1995, the underwriting gain to participating companies for catastrophic insurance totaled about $45 million; (12) in 1996, the underwriting gains were even higher; (13) GAO identified a number of different approaches to reimbursing companies for their administrative expenses that offer the opportunity for cost savings; (14) each has advantages and disadvantages compared with the existing reimbursement arrangement; and (15) companies generally prefer the existing reimbursement method because it is relatively simple to administer.
The Trust manages the interior 80 percent of the Presidio, while the Park Service manages the remaining 20 percent, essentially the coastal areas. Figure 1 shows the area managed by the Park Service (Area A) and the area managed by the Trust (Area B). The Trust’s area of responsibility includes 729 commercial and residential buildings and structures encompassing almost 6 million square feet of floor space. The Presidio was designated as a National Historic Landmark in 1962. Included in this designation are more than 400 buildings and the Presidio’s landscape. As such, any new development or proposed changes to the Presidio’s historic buildings and its landscape are guided by rehabilitation standards established by the Secretary of the Interior and the Park Service. In Public Law 104-333, the Congress gave the Trust wide latitude for managing, preserving, and protecting the Presidio in its effort to achieve financial self-sufficiency by 2013. The Trust has the authority to, among other things, guarantee loans to tenants who finance capital improvements of Presidio buildings, manage building leases, borrow up to $50 million from the U.S. Treasury, and demolish buildings that it deems to be beyond cost-effective rehabilitation. The Trust is managed by a 7-member Board of Directors. The President of the United States appoints six members and the Secretary of the Interior or her designee is the seventh member. Board members, who are not compensated, are generally appointed to 4-year terms and can be reappointed; however, no Board member may serve more than 8 consecutive years. The Board must hold three meetings per year, two of which must be open to the public. An executive director oversees the daily operations of the Trust and, as of May 30, 2001, managed a 468-member staff. The Trust is organized into an office of general counsel and six divisions, each managed by a deputy director; these managers report directly to the executive director. 
The Trust can set the compensation and duties of the executive director and staff as it deems appropriate. For fiscal year 2001, the Trust projects that its revenues will be $79.4 million. Figure 2 shows the Trust’s projected revenues from all sources for fiscal year 2001. The Trust’s projected expenditures for fiscal year 2001 are $79.4 million. Figure 3 shows the Trust’s projected expenditures for fiscal year 2001, including operations costs such as salaries, day-to-day operations, costs associated with future planning efforts, and public safety. From 1997 through March 30, 2001, the Trust spent about $15.4 million to repair and replace the Presidio’s infrastructure, upgrading the roads and grounds and the telecommunications, electrical, water, and sewer systems. From April 1, 2001, through the end of fiscal year 2002, the Trust plans to spend an additional $7 million and, with these expenditures, about 80 percent of the electrical, water, sewer, and telecommunications upgrades will be complete. According to the Trust, this work has improved health and safety, enhanced park resources, and prepared more of the Presidio for residential and commercial tenants. 
The major infrastructure upgrades undertaken, and their approximate costs to date, were the following:

- $6.0 million on electrical upgrades, replacing or repairing 12,000 linear feet of existing lines and installing an additional 10,000 linear feet of lines in support of the Trust’s residential and commercial leasing programs;
- $5.4 million to upgrade telecommunications capacity, including increasing the number of available lines from 8,000 to 21,000 in support of the Trust’s residential and commercial leasing programs;
- $2.4 million for sewer system upgrades, including replacing about 7,000 linear feet of sewer lines;
- $1.1 million for water system upgrades, including replacing and repairing old water lines that were leaking millions of gallons of water a week; and
- $0.5 million to improve roads, trails, sidewalks, and grounds to enhance resident and visitor facilities.

To preserve the Presidio’s many historic buildings and generate the revenues needed to achieve financial self-sufficiency by 2013, the Trust has spent about $23 million to repair and rehabilitate residential housing units and commercial buildings for lease. Figure 4 shows the interior of a residential housing unit before and after rehabilitation and figure 5 shows a commercial building before and after rehabilitation. The funds to repair and rehabilitate these facilities have come primarily from congressional appropriations and rental revenue. The Presidio has 1,198 housing units contained in 349 buildings, 155 of which are historic. According to the Trust’s residential leasing records, as of January 2001, 869 residential housing units contained in 247 buildings were leased. Appendix I provides more specific information about the number and types of buildings the Trust manages. The Trust also manages 306 commercial buildings—225 of which are historic—that contain 3.86 million square feet of space. Currently, the Trust has about 1 million square feet of commercial space rented to private entities. 
The Trust and the Park Service occupy another 660,000 square feet of commercial space. Private entities have spent about $40.8 million to repair and rehabilitate commercial space that they subsequently leased from the Trust. In these cases, the Trust generally reduces rental rates to recognize the private entity’s investment. The Trust has embarked on a number of initiatives to clean up and restore the Presidio’s environment. These initiatives include assuming the Army’s responsibility for cleaning up the contamination left at the Presidio from over 2 centuries of use as a military post, restoring Mountain Lake, and restoring the Presidio’s vegetation and forest. In October 1994, when the Army transferred jurisdiction of the Presidio to the Park Service, the Army retained the lead agency responsibility for cleaning up contamination. The Army began cleanup activities primarily in Crissy Field in Area A—an area now managed by the Park Service. In May 1997, the Army announced an updated plan for continued cleanup of the Presidio. The Army’s plan, however, was criticized by local neighborhood groups, the Sierra Club, and the Trust because its cleanup strategy relied primarily on monitoring contaminated sites and restricting public land use, rather than removing contamination. Also, the Army’s cleanup plan was expected to cover a 30-year period. In May 1998, the Trust presented to the Army its own assessment of the cleanup plan for the Presidio that was designed to address the areas criticized in the Army’s plan. The Trust also proposed that the Army delegate its cleanup authority to the Trust to expedite the cleanup activities. In May 1999, the Army, the Trust, and the Department of the Interior signed a memorandum of agreement transferring cleanup responsibility to the Trust. Under the agreement, the Army will pay the Trust $100 million to clean up both Areas A and B. 
The Trust is responsible for all currently known contamination; the Army remains responsible for any unknown contamination that may be discovered. The Trust also purchased a $100 million insurance policy for $6.7 million in the event that cleanup costs exceed the $100 million paid by the Army. The Trust plans to complete the environmental cleanup by 2010. As of March 31, 2001, the Trust had spent about $12 million for cleanup activities. Almost 80 percent of the expenditures to date were for insurance premiums, program management and administration, planning, and oversight. The remaining funds were used for cleanup activities including monitoring groundwater, evaluating landfills, removing contaminated soil, and removing lead pellets. Mountain Lake is one of the few remaining natural lakes within the city of San Francisco. It is a popular destination for visitors and residents and provides habitat for many species of birds and plants. Over the years, the depth of the 4-acre lake has fallen from 30 feet to less than 10 feet. In addition, the lake’s water quality has deteriorated because of sedimentation, runoff, and the byproducts of nearby road construction. The National Park Service, Golden Gate National Park Association, and the Trust have jointly sponsored a public planning process, including community forums, site research, and other technical analyses, to produce a plan to restore the lake and adjacent shoreline, which encompass a total of 14 acres. A two-phased plan and an environmental assessment were completed and adopted in the spring of 2001. The goals of the plan are to improve water quality, enhance the habitat, and improve visitor experiences. The first phase will consist of dredging and aerating the bottom of the lake; removing nonnative trees and vegetation and replacing them with native species; planting native trees and shrubs to buffer the lake from the roadway; improving trails; and constructing overlooks and interpretive exhibits. 
The Trust estimates that phase one of the plan will cost $677,000 and should be completed by the fall of 2002. This cost estimate assumes that all removed sediment will be disposed of at a site on the Presidio. If the lake’s sediment is found to be contaminated, however, it will require off-site disposal and result in additional costs. The San Francisco International Airport Authority provided $500,000 for phase one as approved mitigation for filling in wetlands for the airport’s new terminal. The second phase will be initiated within 2 to 5 years after the completion of the first one. This phase consists of removing an additional 4.3 acres of weeds around the shoreline and replanting the area with native plants, as well as constructing a bridge. The Trust’s preliminary estimate is that phase two will cost from $600,000 to $750,000. The preservation and enhancement of the Presidio’s natural resources, including its vegetation, is one of the Trust’s goals. The Presidio contains more than 230 native plant species and a 300-acre forest of eucalyptus, Monterey cypress, and Monterey pine trees. Over the years, human activities and the overgrowth of nonnative plants have threatened the Presidio’s landscape and native vegetation. Also, many of the trees planted a century ago as part of the Army’s beautification project are nearing the end of their natural life span and need restoration. Working in partnership with the Trust, the Park Service developed the Vegetation Management Plan to preserve and enhance native landscapes and to extend the life of the park’s forest over the coming decades. Initially, the Trust and the Park Service will collaborate on a number of pilot projects designed to test and establish effective restoration techniques for future vegetation management projects. Over the next 5 years, the Trust expects to spend $9 million on pilot projects aimed at restoring and nurturing the Presidio’s vegetation and forest. 
The Trust has made substantial progress in repairing, rehabilitating, and leasing buildings since taking over responsibility for its portion of the Presidio in July 1998. Revenue from residential and commercial leases is the Trust’s primary source of revenue, and these leases will play a more important role as the Trust’s federal appropriation declines and then ends in fiscal year 2012. In fiscal year 2000, residential leases generated $13.3 million in revenue. Currently, more than 52 percent of the occupied residential units are leased at market rental rates averaging about $2,910 per month. The remaining occupied units are rented under several discounted rental programs whereby tenants, such as public safety personnel, Presidio employees, and students, pay less than market rental rates. Monthly rental rates under these programs average about $1,375. By the end of fiscal year 2001, the Trust expects to have available for rent an additional 140 residential housing units. The Trust anticipates that it will generate about $21 million in revenue in fiscal year 2001 from residential housing. In fiscal year 2000, commercial leases generated $6.3 million in revenue. Overall, leases for commercial space average less than $10 per square foot, with nearly 79 percent of the total square footage leased averaging slightly more than $3 per square foot. Many of these leases are to tenants who financed the cost of restoring buildings they occupy in exchange for rental offsets and tax credits allowed for the restoration of historic buildings. Other leases are with community organizations that pay only their pro rata share of common area, infrastructure, and security costs. In fiscal year 2001, the Trust is offering an additional 227,000 square feet for lease or rehabilitation; this is expected to raise fiscal year 2001 commercial lease revenues to about $9 million. Appendix II provides information on the Trust’s residential and commercial leasing programs. 
While the Trust has been successful in leasing residential and commercial space, it still has a considerable amount available for rehabilitation and leasing. As of January 2001, the Trust had 329 housing units that were either vacant or awaiting rehabilitation. Similarly, the Trust has 2.2 million square feet of commercial space that could be made available once a decision is made on the use of the space and it is repaired or rehabilitated. Of the 2.2 million square feet, 900,000 square feet will be used for a digital arts center at the Letterman Hospital site. The development agreement for this project was signed on August 14, 2001. When this project is completed, the Trust expects to receive about $5.8 million annually in rent plus an annual service district charge. In July 2000, the Trust began a planning process to create a plan for the future development of its portion of the Presidio. As part of this process, the Trust considered a number of alternatives for future development. The Trust used a financial model to prepare a financial analysis for each of the alternatives it considered and, under every alternative, the model projected that the Trust could become financially self-sufficient by 2013. The Trust issued its Draft Implementation Plan, which contained its proposed action called the “Draft Plan Alternative,” as well as a draft environmental impact statement on July 25, 2001. After a public comment period, the Trust expects to issue a final plan and final environmental impact statement by early 2002. Key to the financial model were the assumptions the Trust used which appear to be conservative and to provide estimates of future revenues at the lower end of potential estimates. 
After choosing the final development plan, the Trust should refine the model and prepare a new financial forecast of operating results under that plan because projections used in the planning process were designed only as tools to test the comparative economic implications of various alternatives. Since assuming responsibility for its portion of the Presidio, the Trust has managed the Presidio using the Park Service’s 1994 General Management Plan Amendment. In July 2000, the Trust began to update this plan. The new planning process, called the Presidio Trust Implementation Plan (Implementation Plan), was needed, according to the Trust, because some of the assumptions on which the Park Service had based its 1994 General Management Plan Amendment had changed significantly since it was adopted. Specifically:

- The Park Service’s 1994 General Management Plan Amendment assumed that annual appropriations in the range of $16 million to $25 million would be received. However, Public Law 104-333, which created the Trust, mandated that the Trust become financially self-sufficient by 2013.
- Even after the Presidio closed, the 6th U.S. Army had been expected to occupy up to 30 percent of the Presidio’s buildings; however, it has vacated the Presidio.
- The University of California at San Francisco had planned to locate its research facilities at the Letterman Hospital, but did not do so.

The Implementation Plan process began in July 2000 with a 6-month “scoping” period and information-gathering process through workshops in which the public helped define the range of issues and topics to be included in the Implementation Plan. In mid-November 2000, the Trust published its Conceptual Alternatives Workbook, which contained five alternatives for the Presidio’s future development. Public comments were solicited on the alternatives until January 16, 2001. 
On July 25, 2001, the Trust released the Presidio Trust Draft Implementation Plan and draft environmental impact statement, which described and analyzed six alternatives for future development of the Presidio. Two of the alternatives have, thus far, received the most attention from the public. One, referred to as the “General Management Plan Amendment 2000 alternative,” would implement the Park Service’s 1994 General Management Plan Amendment, assuming the year 2000 conditions. The Trust stated that it modified this alternative from the Conceptual Alternatives Workbook because many neighborhood and environmental groups had commented that they preferred an alternative that was patterned after the Park Service’s 1994 General Management Plan Amendment but modified to make it financially feasible. The Trust developed another alternative, also in response to public comments, entitled the “Draft Plan Alternative,” which is its proposed action for future development of the Trust’s portion of the Presidio. The Trust stated that this alternative is the proposed action because it is patterned after the General Management Plan Amendment 2000 alternative but includes modifications to ensure its financial viability and to combine a number of concepts proposed in the Conceptual Alternatives Workbook into a single alternative. These concepts include expansion of open space, no reduction in housing units, and a variety of cultural and educational programs for visitors. Appendix III contains a summary of the alternatives the Trust considered. The public has until October 25, 2001, to provide comments on the Draft Plan Alternative and draft environmental impact statement. The Trust envisions concluding the planning process with the publication of a final plan and final environmental impact statement in early 2002. 
The Trust’s Draft Plan Alternative contains many of the features of the General Management Plan Amendment 2000 alternative, but there are several noteworthy differences between the two. For example, total development at the Presidio under the Draft Plan Alternative is 5.6 million square feet, or about 6 percent less than current levels, rather than just over 5 million square feet discussed in the General Management Plan Amendment 2000 alternative. Furthermore, the Draft Plan Alternative permits 50,000 square feet less in building demolition than the General Management Plan Amendment 2000 alternative and replacement buildings under the Draft Plan Alternative could increase by about 540,000 square feet over the General Management Plan Amendment 2000 alternative. Finally, the Draft Plan Alternative calls for 880 more residential housing units than the General Management Plan Amendment 2000 alternative—more than doubling the projected number of residents at the Presidio. Appendix IV contains a comparison of the land use patterns envisioned by the alternatives. The Draft Plan Alternative assumes expenditures of $10 million annually for Trust programs rather than the $2 million annually provided for in the General Management Plan Amendment 2000 alternative. The difference is due, in part, to the Trust providing programs rather than only the tenants as the General Management Plan Amendment 2000 alternative assumed. According to the Trust, a wide variety of program possibilities would be available including interpretive programs for visitors as well as museums, exhibitions, and community programs. Also, the projected number of annual visitors under the Draft Plan Alternative is 60 percent higher than the projection in the General Management Plan Amendment 2000 alternative. Finally, total capital construction costs under the Draft Plan Alternative would be $61 million higher than under the General Management Plan Amendment 2000 alternative. 
Appendix V contains a comparison of the capital costs among the alternatives. The Trust’s analysis of the public comments it received before releasing the Draft Plan Alternative indicated that many of those commenting noted concerns with the proposed plan’s compatibility with the Park Service’s 1994 General Management Plan Amendment. Public reaction when the Draft Plan Alternative was released indicated that many believed that the Draft Plan Alternative contained too much development and that the Trust should not have abandoned the Park Service’s 1994 General Management Plan Amendment. As part of the planning process, the Trust used a financial model to prepare a financial projection for each alternative. According to the Trust, the financial model was designed as a planning tool to test the comparative economic implications of the alternatives and not as definitive projections of future financial conditions. Specifically, the financial model was designed to (1) evaluate the short-term financial self-sufficiency of each alternative; (2) estimate the time period needed for each alternative to reach long-term financial sustainability, including generating sufficient revenues to meet long-term capital needs and replacement reserves; and (3) compare the relative financial performance of each alternative against the others. The financial model projected that the Trust could become financially self-sufficient by 2013 under every alternative. In developing the financial model, the Trust relied on historical data from a number of sources, such as the San Francisco area’s real estate markets for data on fair market rental and vacancy rates and national studies for information on capital costs for rehabilitation and new construction. In addition, the Trust made many assumptions in order to project its financial analyses into the future. 
Some of the key variables included land use, annual program expenditures, and the timing of demolition and rehabilitation of existing buildings. According to our economic analysis of the financial model, the Trust’s assumptions appear to be conservative because they tended to minimize projected revenues. For example, even though the market rental rate in calendar year 2000 for Class B office space in San Francisco was about $60 per square foot, the Trust used a more conservative rental rate of $29 per square foot. This rate was based on the average market rate over the past 7 years. In developing its financial model, the Trust stated that the model was not designed to be a budgetary or accounting tool and the results should not be interpreted as being what will happen, but rather what could happen given certain assumptions. When the Trust finalizes its Presidio Trust Implementation Plan, it needs to refine the financial model to assure itself that the model’s results are based on the latest and best information and assumptions. Also, the model meets the definition of a financial forecast as defined by the American Institute of Certified Public Accountants Statements on Standards for Attestation Engagements, which provide a mechanism by which financial forecasts that are expected to be used by a third party can be independently examined. Because it is likely that the public, the Congress, and the Trust will rely on the new financial forecast, at least in part, to judge the Trust’s likelihood of becoming financially self-sufficient by 2013, we believe that the American Institute of Certified Public Accountants guidelines should be applied and the Trust should have the financial model independently examined. We brought this issue to the attention of Trust officials, who stated they were not aware of such guidance but thought that having a new financial forecast independently examined was a good idea. 
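The kind of comparative projection the Trust's planning tool performs can be illustrated with a minimal sketch. This is not the Trust's actual model, which was not made available to us in computational form; the function name, the leased square footage, the growth rates, and the cost figure below are all hypothetical assumptions chosen for demonstration (only the $29-per-square-foot rental rate appears in the report):

```python
# Purely illustrative sketch -- NOT the Trust's financial model.
# Every figure except the $29-per-square-foot rental rate cited in the
# report is a hypothetical assumption chosen for demonstration.

def self_sufficiency_year(start_year=2001,
                          leased_sqft=2_500_000,    # hypothetical leasable space (sq ft)
                          rent_per_sqft=29.0,       # conservative rate cited in the report
                          rent_growth=0.03,         # assumed annual rent escalation
                          annual_costs=81_000_000,  # hypothetical operating/program costs
                          cost_growth=0.02,         # assumed annual cost inflation
                          horizon=2030):
    """Return the first year in which projected rental revenue covers
    projected costs, or None if no crossover occurs within the horizon."""
    for year in range(start_year, horizon + 1):
        t = year - start_year
        revenue = leased_sqft * rent_per_sqft * (1 + rent_growth) ** t
        costs = annual_costs * (1 + cost_growth) ** t
        if revenue >= costs:
            return year
    return None

print(self_sufficiency_year())  # with these assumed inputs: 2013
```

Small changes to any assumed input shift the projected break-even year, which is why an independent examination of the underlying assumptions matters as much as the model's mechanics.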
Depending on future rental revenues and how the Trust proceeds with development of the Presidio, it is possible that at some point the Trust may generate revenues in excess of its costs. The Trust acknowledges that, at some point in the future, excess revenues could be generated at which point it could decide to reduce rents, provide subsidies, or scale back plans for building space and capital projects. Public Law 104-333 allows the Trust to retain all proceeds and other revenues it receives. When passing the law that brought the Trust into being, the Congress gave the Trust wide latitude in determining how it would manage and operate the Presidio. The Trust has made notable progress and now stands ready to define the future development and operation of the Presidio as a national park. While the Trust’s financial analysis indicates that the Trust should achieve financial self-sufficiency by 2013, it is only a predictor of what could occur based on several assumptions. The Trust should consider refining its financial forecast once its development plan is finalized. Furthermore, if the Trust generates excess revenues in the future, after achieving financial self-sufficiency and funding capital projects and reserves, the Congress may, at that time, want to revisit the issue of what to do with excess revenues. Given the complexity of the financial model and its importance in the decision-making process and the fact that a refined model could serve as the standard measure of the Trust’s progress toward self-sufficiency, we recommend that the Chairman, Presidio Trust Board of Directors, obtain an independent examination of the financial model as defined by the American Institute of Certified Public Accountants Statements on Standards for Attestation Engagements. A certified public accountant’s report would express an opinion on whether the underlying assumptions provide a reasonable basis for management’s projection of financial self-sufficiency. 
The Presidio Trust provided oral comments that generally agreed with the report and our recommendation that it have its financial model independently examined when its development plan is finalized. The Presidio Trust also provided a number of technical comments and clarifications, which we have addressed, as appropriate, in the body of the report. We obtained information from the Trust on its activities, reviewed relevant program documents and related materials, and met with Trust officials responsible for major activities, such as facility improvements, residential and commercial leasing, and financial management. We also reviewed the financial model used by the Trust as part of its planning process and discussed the model with Trust officials and officials from the firm that developed the model. We did not independently verify the reliability of the financial data provided nor did we trace the data to the systems from which they came. Because the Trust manages the Presidio in conjunction with the Park Service, we also met with Park Service officials to obtain their views on the Trust’s management of the Presidio and its planning process. We performed our work at the Trust’s headquarters in San Francisco from January 2001 through August 2001 in accordance with generally accepted government auditing standards. We are sending copies of this report to appropriate congressional committees, the Chairman, Board of Directors, Presidio Trust; the Secretary of the Interior; the Director, National Park Service; the Secretary of Defense; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request. If you or your staff have any questions regarding this report, please call me or Ed Zadjura on (202) 512-3841. Key contributors to this report are listed in appendix VI. Currently, the Presidio has 1,198 residential housing units, of which 73 percent (869 units) were leased or occupied as of January 2001. 
The remaining 329 units (27 percent) are either vacant or awaiting rehabilitation. A review of occupied units shows 39 percent (470 units) leased at market rental rates while 33 percent (399 units) have leases below market rates. Overall lease rates for commercial space average less than $10 per square foot. The majority of total commercial square feet leased averages just over $3 per square foot. Appendix III: General Overview of Planning Alternatives Considered by the Presidio Trust (July 25, 2001) Under this alternative, tenants and residents would work together to make the Presidio a center for education, communication, and exchange. Open space would be increased primarily by removing non-historic housing in the southern portion of the park. Replacement housing would come primarily from the rehabilitation and reuse of buildings. Cultural and natural resources would be protected and enhanced. This alternative would implement the General Management Plan developed by the Park Service in 1994 assuming year 2000 conditions. Tenants and residents would work together to create a global center dedicated to addressing the world’s critical environmental, social, and cultural challenges. Buildings would be removed to increase open space and/or enhance recreational, cultural, and natural resources. In this alternative, more open space would be created in the southern part of the park; development would be concentrated in the northern part of the Presidio. Overall, building square footage would be reduced and open space and natural resource enhancements would be maximized. Under this alternative, the Presidio would become a sustainable live/work community, and a model of environmental sustainability. Emphasis would be placed on creating a community that offered innovative approaches to environmental sustainability. Open space would be enhanced and some non-historic buildings would be removed. 
Under this alternative, the Presidio would become a national and international cultural destination park, a portal for visitors to the American West and Pacific, and a place of international distinction for its programs in research, education and communication. Open space would be expanded and a substantial number of non-historic buildings would be removed in the southern part of the park; housing would be added in the northern part of the park. Under this alternative, the Presidio would be minimally managed to fulfill the Presidio Trust’s obligations to protect the Presidio’s resources. There would be no significant park enhancements and no physical change beyond those currently underway. There would be no new construction or building removal. [The appendix tables comparing building square footage, housing units, and capital costs between the Draft Plan Alternative and the General Management Plan Amendment 2000 alternative are not reproduced here; the figures could not be recovered from the source.] Existing housing units include former military bachelors’ quarters and barracks not in use. Capital costs include parkwide capital costs, demolition costs, and program capital costs. In addition, Mark Connelly; Robert Crystal; John Kalmar, Jr.; Jonathan S. McMurray; Roderick Moore; Mehrzad Nadji; and Donald Yamada made key contributions to this report.
The Presidio Trust--a wholly owned government corporation--was created in 1996 to manage a large part of the Presidio grounds using sound principles of land use planning and management while maintaining the area's scenic beauty and historic and natural character. The Trust is responsible for leasing, maintaining, rehabilitating, repairing, and improving the property it controls. The Trust must become financially self-sufficient by 2013. GAO found that the Trust has made significant progress in preserving, protecting, and improving the Presidio. It has launched major efforts to repair and upgrade the Presidio's infrastructure and to repair and rehabilitate residential housing and commercial space. So far, the Trust has converted about half of the former military buildings into useable residential and commercial space. The rehabilitation, repair, and leasing of the remaining 300 residential units and about 2.2 million square feet of undeveloped commercial space is critical to the Trust's efforts to achieve financial self-sufficiency. The Trust has also begun several environmental initiatives, including the cleanup of military contamination and the restoration of Mountain Lake--one of the few remaining natural lakes within the San Francisco city limits. The Trust is also working with the Park Service to revitalize vegetation throughout the Presidio and to replace aging trees in the 300-acre forest. The Trust should meet its goal of financial self-sufficiency by 2013, according to financial projections prepared by the Trust.
This year the space shuttle is scheduled to fly its final six missions to deliver hardware, supplies, and an international scientific laboratory to the International Space Station. NASA officials remain confident that the current flight manifest can be accomplished within the given time, and add that should delays occur, the International Space Station can still function. According to NASA, there are trade-offs the agency can make in what it can take up to support and sustain the station. However, failure to complete assembly as currently planned would further reduce the station’s ability to fulfill its research objectives and deprive the station of critical spare parts that only the shuttle can deliver. The recent review completed by the U.S. Human Space Flight Plans Committee included the option of flying the space shuttle through 2011 in order to complete the International Space Station. However, the Committee noted that there are currently no funds in NASA’s budget for additional shuttle flights. Most recently, the Administration is proposing over $600 million in the fiscal year 2011 budget to ensure that the space shuttle can fly its final missions, in case the space shuttle’s schedule slips into fiscal year 2011. Retirement of the shuttle will involve many activities that warrant special attention. These include: disposing of the facilities that no longer are needed while complying with federal, state, and local environmental laws and regulations; ensuring the retention of critical skills within NASA’s workforce and its suppliers; and disposing of over 1 million equipment items. In addition, the total cost of shuttle retirement and transition—to include the disposition of the orbiters themselves—is not readily transparent in NASA’s budget. 
We have recommended that NASA clearly identify all direct and indirect shuttle transition and retirement costs, including any potential sale proceeds of excess inventory and environmental remediation costs in its future budget requests. NASA provided this information to the House and Senate Appropriations committees in July 2009 but did not identify all indirect shuttle transition and retirement costs in its fiscal year 2010 budget request. We look forward to examining the fiscal year 2011 budget request to determine whether this information is identified. Lastly, NASA has recognized that sustaining the shuttle workforce through the retirement of the shuttle while ensuring that a viable workforce is available to support future activities is a major challenge. We commend NASA for its efforts to understand and mitigate the effect of the space shuttle’s retirement on the civil service and contractor workforce. Nevertheless, how well NASA executes its workforce management plans as they retire the space shuttle will affect the agency’s ability to maintain the skilled workforce to support space exploration. Although it is nearing completion, the International Space Station faces several significant challenges that may impede efforts to maximize utilization of research facilities available onboard. These include: the retirement of the Space Shuttle in 2010 and the loss of its unmatched capacity to move cargo and astronauts to and from the station; the uncertain future for the station beyond 2015; and the limited time available for research due to competing demands for the crew’s time. We have previously reported that the International Space Station will face a significant cargo supply shortfall without the Space Shuttle’s great capacity to deliver cargo to the station and return it to earth. NASA plans on using a mixed fleet of vehicles, including those developed by international partners, to service the space station on an interim basis. 
However, international partners’ vehicles alone cannot fully satisfy the space station’s cargo resupply needs. Without a domestic cargo resupply capability to augment this mixed fleet approach, NASA faces a 40 metric ton (approximately 88,000 pounds) cargo resupply shortfall between 2010 and 2015. While NASA is sponsoring commercial efforts to develop vehicles capable of carrying cargo to the station and the administration has endorsed this approach, none of those currently in development has been launched into orbit, and the vehicles’ aggressive development schedules leave little room for the unexpected. Furthermore, upon completion of construction, unless the decision is made to extend station operations, NASA has only 5 years to execute a robust research program before the International Space Station is deorbited. This leaves little time to establish a strong utilization program. At present, NASA projects that its share of the International Space Station research facilities will be less than fully utilized by planned NASA research. Specifically, NASA plans to utilize only 48 percent of the racks that accommodate scientific research facilities onboard, with the remainder available for use by others. Congress has directed NASA to take all necessary steps to ensure that the International Space Station remains a viable and productive facility capable of potential utilization through at least 2020. The Administration is proposing in its fiscal year 2011 budget to extend operations of the International Space Station to 2020 or beyond in concert with its international partners. Lastly, NASA faces a significant constraint for science on board the space station because of limited crew time. There can only be six crew members aboard the station at one time due to the number of spaces available in the “lifeboats,” or docked spacecraft that can transport the crew in case of an emergency. As such, crew time cannot presently be increased to meet increased demand. 
Though available crew time may increase as the six-person crew becomes more experienced with operating the space station efficiently or if the crew volunteers its free time for research, crew time for U.S. research remains a limiting factor. According to NASA officials, potential National Laboratory researchers should design their experiments to be as automated as possible or minimize crew involvement required for their experiments to ensure that they are accepted for flight. We have recommended that NASA implement actions, such as developing a plan to broaden and enhance ongoing outreach to potential users and creating a centralized body to oversee U.S. space station research decision making, including the selection of all U.S. research to be conducted on board and ensuring that all U.S. International Space Station National Laboratory research is meritorious and valid. NASA concurred with our recommendation and is researching the possibility of developing a management body to manage space station research, which would make the International Space Station National Laboratory similar to other national laboratories. NASA projects have produced ground-breaking research and advanced our understanding of the universe. However, one common theme binds most of the projects—they cost more and take longer to develop than planned. As we reported in our recently completed assessment of NASA’s 19 most costly projects—which have a combined life-cycle cost that exceeds $66 billion—the agency’s projects continue to experience cost growth and schedule delays. Ten of the 19 projects, which had their baselines set within the last 3 years, experienced cost growth averaging $121.1 million or 18.7 percent and the average schedule growth was 15 months. For example, the Glory project has recently breached its revised schedule baseline by 16 months and exceeded its development cost baseline by over 14 percent—for a total development cost growth of over 75 percent in just 2 years. 
Project officials also indicated that recent technical problems could cause additional cost growth. Similarly, the Mars Science Laboratory project is currently seeking reauthorization from Congress after experiencing development cost growth in excess of 30 percent. Many of the other projects we reviewed experienced challenges, including developing new or retrofitting older technologies, stabilizing engineering designs, and managing the performance of contractors and development partners. Our work has consistently shown that reducing these kinds of problems in acquisition programs hinges on developing a sound business case for each project. Such a business case provides for early recognition of challenges, allows managers to take corrective action, and places needed and justifiable projects in a better position to succeed. Product development efforts that have not followed a knowledge-based business case approach have frequently suffered poor cost, schedule, and performance outcomes. A sound business case includes development of firm requirements, mature technologies, a preliminary design, a realistic cost estimate, and sound estimates of available funding and time needed before the projects proceed beyond preliminary design review. If necessary, the project should be delayed until a sound business case, demonstrating the project’s readiness to move forward into product development, is in hand. In particular, two of NASA’s largest projects—Ares I and Orion, which are part of NASA’s Constellation program to return to the moon—face considerable technical, design, and production challenges. NASA is actively addressing these challenges. Both projects, however, still face considerable hurdles to meeting overarching safety and performance requirements, including limiting vibration during launch, mitigating the risk of hitting the launch tower during liftoff, and reducing the mass of the Orion vehicle. 
In addition, we found that the Constellation program, from the outset, has faced a mismatch between funding and program needs. This finding was reinforced by the Review of U.S. Human Spaceflight Plans Committee, which reported that NASA’s plans for the Constellation program to return to the moon by 2020 are unexecutable without increases to NASA’s current budget. To its credit, NASA has acknowledged that the Constellation program, for example, faces knowledge gaps concerning requirements, technologies, funding, schedule, and other resources. NASA stated that it is working to close these gaps and at the preliminary design review the program will be required to demonstrate that the program and its projects meet all system requirements with acceptable risk and within cost and schedule constraints, and that the program has established a sound business case for proceeding into the implementation phase. Even though NASA has made progress in developing the actual vehicles, the mismatch between resources and requirements remains and the administration’s proposed fiscal year 2011 budget leaves the future of the program in question. NASA has continually struggled to put its financial house in order. GAO and others have reported for years on these efforts. In fact, GAO has made a number of recommendations to address NASA’s financial management challenges. Moreover, the NASA Inspector General has identified financial management as one of NASA’s most serious challenges. In a November 2008 report, the Inspector General found continuing weaknesses in NASA’s financial management process and systems, including internal controls over property accounting. It noted that these deficiencies have resulted in disclaimed audits of NASA’s financial statements since fiscal year 2003. The disclaimers were largely attributed to data integrity issues and poor internal controls. 
NASA has made progress in addressing some of these issues, but the recent disclaimer on the fiscal year 2009 audit shows that more work needs to be done. We have also reported that NASA remains vulnerable to disruptions in its information technology network. Information security is a critical consideration for any organization reliant on information technology and especially important for NASA, which depends on a number of key computer systems and communication networks to conduct its work. These networks traverse the Earth and beyond, providing critical two-way communication links between Earth and spacecraft; connections between NASA centers and partners, scientists, and the public; and administrative applications and functions. NASA has made important progress in implementing security controls and aspects of its information security program. However, NASA has not always implemented sufficient controls to protect the confidentiality, integrity, and availability of the information and systems supporting its mission directorates. Specifically, NASA did not consistently implement effective controls to prevent, limit, and detect unauthorized access to its networks and systems. A key reason for these weaknesses is that NASA has not yet fully implemented key activities of its information security program to ensure that controls are appropriately designed and operating effectively. During fiscal years 2007 and 2008, NASA reported 1,120 security incidents that resulted in the installation of malicious software on its systems and unauthorized access to sensitive information. NASA established a Security Operations Center in 2008 to enhance prevention and provide early detection of security incidents and coordinate agency-level information related to its security posture. 
Nevertheless, the control vulnerabilities and program shortfalls—which GAO identified—collectively increase the risk of unauthorized access to NASA’s sensitive information, as well as inadvertent or deliberate disruption of its system operations and services. They make it possible for intruders, as well as government and contractor employees, to bypass or disable computer access controls and undertake a wide variety of inappropriate or malicious acts. As a result, increased and unnecessary risk exists that sensitive information is subject to unauthorized disclosure, modification, and destruction and that mission operations could be disrupted. GAO has recommended actions the NASA Administrator should take to mitigate control vulnerabilities and fully implement a comprehensive information security program including: developing and implementing comprehensive and physical risk assessments; conducting sufficient or comprehensive security testing and evaluation of all relevant security controls; and implementing an adequate incident detection program. In response to our report, the Deputy Administrator noted that NASA is implementing many of our recommendations as part of an ongoing NASA strategic effort to improve information technology management and information technology security program deficiencies. The Deputy Administrator also stated that NASA will continue to mitigate the information security weaknesses identified in our report. The actions identified by the Deputy Administrator, if effectively implemented, will improve the agency’s information security program. In executing NASA’s space exploration, scientific discovery, and aeronautics research missions, NASA must use its resources as effectively and efficiently as possible because of the severity of the fiscal challenges our nation faces and the wide range of competing national priorities. 
Establishing a sound business case before a project starts should also better position NASA management to deliver promised capability for the funding it receives. While space development programs are complex and difficult by nature, and most are one-time efforts, the nature of its work should not preclude NASA from being accountable for achieving what it promises when requesting and receiving funds. Congress will also need to do its part to ensure that NASA has the support to hold poorly performing programs accountable in order to provide an environment where the systems portfolio as a whole can succeed with the resources NASA is given. NASA shows a willingness to face these challenges. We look forward to continuing work with NASA to develop tools to enhance the management of acquisitions and agency operations to optimize its investment in space and aeronautics missions. Madam Chairwoman and Members of the Subcommittee, this concludes my prepared statement. I would be happy to answer any questions you may have at this time. For additional information, please contact Cristina Chaplain at 202-512-4841 or [email protected]. Individuals making contributions to this testimony include Jim Morrison, Assistant Director; Greg Campbell; Richard A. Cederholm; Shelby S. Oakley; Kristine R. Hassinger; Kenneth E. Patton; Jose A. Ramos; John Warren; and Gregory C. Wilshusen. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Aeronautics and Space Administration (NASA) is in the midst of many changes and one of the most challenging periods in its history. The space shuttle is slated to retire this year, the International Space Station nears completion but remains underutilized, and a new means of human space flight is under development. Most recently, the administration has proposed a new direction for NASA. Amid all this potential change, GAO was asked to review the key issues facing NASA. This testimony focuses on four areas: 1) retiring the space shuttle; 2) utilizing and sustaining the International Space Station; 3) continuing difficulty developing large-scale systems, including the next generation of human spaceflight systems; and 4) continuing weaknesses in financial management and information technology systems. In preparing this statement, GAO relied on completed work. To address some of these challenges, GAO has recommended that NASA: provide greater information on shuttle retirement costs to Congress, take actions aimed at more effective use of the station research facilities, develop business cases for acquisition programs, and improve financial and IT management. NASA concurred with GAO's International Space Station recommendations, and has improved some budgeting and management practices in response. The major challenges NASA faces include: (1) Retiring the Space Shuttle. The impending end of shuttle missions poses challenges to the completion and operation of the International Space Station, and will require NASA to carry out an array of activities to deal with shuttle staff, equipment, and property. This year the shuttle is scheduled to fly its final six missions to deliver hardware, supplies, and an international laboratory to the International Space Station. NASA officials remain confident that the current manifest can be accomplished within the given time, and add that should delays occur, the space station can still function. 
According to NASA, there are trade-offs the agency can make in what it can take up to support and sustain the station. However, failure to complete assembly would further reduce the station's ability to fulfill its research objectives and deprive the station of critical spare parts that only the shuttle can currently deliver. Retirement of the shuttle will require disposing of facilities; ensuring the retention of critical skills within NASA's workforce and its suppliers; and disposing of more than 1 million equipment items. (2) Utilizing the International Space Station. The space station, which is nearly complete, faces several significant challenges that may impede efforts to maximize utilization of its research facilities. These include the retirement of the shuttle and the loss of its unmatched capacity to move cargo and astronauts to and from the station; the uncertain future for the station beyond 2015; and the limited time available for research due to competing demands for the crew's time. (3) Developing Systems. A common theme in NASA projects--including the next generation of space flight efforts--is that they cost more and take longer to develop than planned. GAO again found this outcome in a recently completed assessment of NASA's 19 most costly projects--with a combined life-cycle cost of $66 billion. Within the last 3 years, 10 of the 19 projects experienced cost growth averaging $121.1 million or 18.7 percent, and the average schedule growth was 15 months. A number of these projects had experienced considerable cost growth before the most recent baselines were set. (4) Managing Finances and IT. NASA continues to struggle to put its financial house in order. GAO and others have reported for years on these efforts. The NASA Inspector General identified financial management as one of NASA's most serious challenges. In addition, NASA remains vulnerable to disruptions in its information technology network. 
NASA has made important progress in implementing security controls and aspects of its information security program. However, it has not always implemented sufficient controls to protect information and systems supporting its mission directorates.
DEA is the nation’s federal agency dedicated to drug law enforcement and accordingly, works to disrupt and dismantle the leadership, command, control, and financial infrastructure of major drug-trafficking organizations. DEA’s Office of Operations Management is charged with supporting the domestic drug enforcement activities of DEA’s 21 field divisions, which each have a corresponding field office with a DEA Special Agent in Charge (SAC) assigned. (See fig. 1.) Each field division has Assistant Special Agents in Charge (ASAC) and Group Supervisors who assist the SAC with managing the entire division, including its smaller field offices headed by a Resident Agent in Charge (RAC). Formed in 2003 as a part of the U.S. government’s response to the terrorist attacks on September 11, 2001, ICE’s primary mission is to promote homeland security and public safety through the enforcement of federal laws governing border control, customs, trade, and immigration. ICE’s office of Homeland Security Investigations (HSI) investigates immigration crime; human rights violations and human smuggling; smuggling of narcotics, weapons, and other types of contraband; financial crimes; cybercrime; and export enforcement issues. An important part of ICE’s overall mission, drug-smuggling investigations made up approximately 22-24 percent—one of the largest investigative categories—of ICE’s total reported investigative hours for fiscal years 2006-2009. Within HSI, the Narcotics and Contraband Smuggling Unit is responsible for overseeing matters related to counternarcotics investigations with a connection to the border, including the implementation of the 2009 Agreement. ICE’s Office of Intelligence and Analysis also has an important role in ensuring that counternarcotics information is shared with the appropriate organizations. 
ICE has 26 field offices in the United States headed by SACs who are responsible for the administration and management of all investigative and enforcement activities, including counternarcotics, within the geographic boundaries of each office. (See fig. 1.) Similar to DEA’s field office leadership, ICE also has ASACs and Group Supervisors to assist with managing the SAC field office and its smaller offices headed by a RAC. Both the 1994 and 2009 Agreements contain key provisions defining ICE’s authority to conduct counternarcotics investigations. ICE is authorized to investigate all immigration and customs violations except those involving narcotics; therefore, ICE has to request Title 21 authority from DEA to investigate counternarcotics cases that have a connection to a border or port of entry. In March 2009, we reported that DEA’s and ICE’s cross-designation procedures under the 1994 Agreement were problematic, due in part to misplaced cross-designation requests and the cap set in conjunction with the 1994 Agreement limiting the number of cross-designated agents to 1,475. According to ICE headquarters officials, this cap limited ICE’s ability to accomplish its mission because agents who were not cross-designated could not pursue border-related drug-smuggling investigations into the United States. We recommended that the Attorney General and the Secretary of Homeland Security develop a new agreement or other mechanism to, among other things, provide efficient procedures for cross-designating ICE agents to conduct counternarcotics investigations. Aligned with our March 2009 report recommendation, the 2009 Agreement changed the process for cross-designating ICE agents with Title 21 authority to investigate counternarcotics cases in that ICE can now select an unlimited number of agents for cross-designation. 
In addition, a consolidated list of prospective and approved ICE agents is exchanged at the ICE Assistant Secretary and DEA Administrator levels instead of by the field office SACs. The 2009 Agreement retained the 1994 Agreement’s provision requiring the DEA and ICE field office SACs to designate an ASAC as the Title 21 Coordinator to manage Title 21 matters. According to ICE Narcotics and Contraband Smuggling Unit and DEA Operations Management officials, the Title 21 Coordinators are to process and review cross-designation requests at the field office level. The 2009 Agreement also addressed our March 2009 recommendations by specifying that ICE participate fully in and staff the Fusion Center and the Special Operations Division. The Fusion Center is a Justice-led interagency organization that collects and analyzes drug-trafficking and related financial information and disseminates investigative leads. The Special Operations Division is a DEA-led interagency organization established to target the command and control capabilities of major drug-trafficking organizations. The Agreement also requires ICE to:

• commit fully to sharing all investigative reports and records from open and closed investigations, including those related to drugs, money laundering, bulk cash smuggling and financial crimes, gangs, and weapons; and

• provide and, in turn, have access to data related to all seizures of money, drugs, and firearms, including date of seizure, type of contraband, amount, place of seizure, and geographically based data, when known, to the El Paso Intelligence Center (EPIC).

This center is a DEA-led tactical intelligence center that provides federal, state, and local law enforcement agencies information they can use in investigations and operations that target drug smuggling and other criminal activities. 
Additionally, the 2009 Agreement states that “ICE intends to participate and share information to the same extent as other major federal partners in the Fusion Center and Special Operations Division, to include sharing information not yet entered into the shared databases.” The 1994 and 2009 Agreements also address ICE’s (legacy Customs) authority to pursue counternarcotics investigations. According to the 1994 Agreement, ICE’s (legacy Customs) drug-related investigations are “restricted to individuals and organizations involved in the smuggling of controlled substances across U.S. international borders or through Ports of Entry.” In contrast, the 2009 Agreement provides that cross-designated ICE agents will be authorized to investigate narcotics smuggling with a clearly articulable nexus to the United States border or port of entry: only illegal drug importation/exportation schemes, including the activities to transport and stage the drugs within the United States or between the source or destination country and the United States. The Agreement states that an investigation does not have a nexus simply because at one time the narcotics crossed the border or came through a port of entry. The justification for a nexus also cannot be based solely on a buyer who purchased drugs from a cross-border smuggler. In addition, unless authorized as part of a task force or at the request of DEA, ICE agents are not to investigate cases of solely domestic production, sale, transportation, or shipment of narcotics. As we reported in our 2009 report, collaboration and coordination between the two agencies are important because of their overlapping responsibilities and the need to operate across agencies while avoiding duplicative investigations and, more importantly, ensuring officer safety. 
We found in the 2009 report that while senior DEA and ICE officials in certain field offices had established positive working relationships, there was a need to institutionalize consistent practices at all locations. Our report also underscored that positive working relationships between DEA and ICE counterparts did not exist at all locations, resulting in the likelihood of duplication between agencies and incidents that could threaten law enforcement officer safety. The 2009 Agreement addresses how DEA and ICE are to work together to stop the flow of narcotics into the United States by outlining both agencies’ cross-designation, information-sharing, and deconfliction responsibilities. DEA and ICE have taken actions to implement the 2009 Agreement’s provisions. The agencies have implemented the Agreement’s cross-designation provisions through a revised process that is more streamlined and has resulted in enhanced flexibility in maximizing investigative resources, according to ICE Narcotics and Contraband Smuggling Unit officials. ICE has also implemented the Agreement’s Fusion Center and Special Operations Division information-sharing provisions by sharing required data with these organizations. Further, ICE is working to complete the transfer of data to the El Paso Intelligence Center. Additionally, DEA and ICE developed and implemented local deconfliction protocols and used a variety of mechanisms to deconflict counternarcotics investigations in accordance with the Agreement. DEA and ICE headquarters and field office management officials interviewed generally reported that implementation of the Agreement and local deconfliction protocols had improved deconfliction. ICE took actions that fully implemented the cross-designation provisions of the 2009 Agreement, which revised the cross-designation provisions found in the 1994 Agreement, as illustrated in table 1. 
Specifically, the 2009 Agreement changed the levels at which the requests are processed and approved, from the field office SAC level to the Assistant Secretary (ICE) and Administrator (DEA) level, which has eliminated a step in ICE’s internal review of the requests. Additionally, to comply with the provisions of the 2009 Agreement, ICE and DEA modified the process for making and reviewing requests for cross-designating agents by consolidating the requests into one updated list for approval that ICE submits to DEA twice a year. Accordingly, DEA now provides ICE with one memorandum approving the ICE agents listed for cross-designation authority. Previously, under the 1994 Agreement, ICE and DEA exchanged individual requests and approvals. The cross-designation process was also streamlined in other ways under the 2009 Agreement, as depicted in figure 2. The ICE field offices submit their cross-designation lists to ICE’s Narcotics and Contraband Smuggling Unit and Assistant Secretary in headquarters for review. The ICE Assistant Secretary provides this list to the DEA Administrator for review. DEA Operations Management officials request that the DEA field office SACs then review the list to ensure that the ICE agents listed have counternarcotics duties, such as serving on task forces or other interagency groups that investigate counternarcotics cases. Next, the DEA field offices are to pass the reviewed list back to headquarters for review and approval from DEA’s Administrator. DEA Operations Management officials then provide the final list to ICE’s Narcotics and Contraband Smuggling Unit. DEA approved approximately 3,100 ICE agents for cross-designation authority following the implementation of the 2009 Agreement. According to DEA Operations Management officials, this authority remained in effect until the next update in January 2011. 
This number was more than double the number before the 2009 Agreement, and thus substantially increased investigative resources available, as discussed further below. Both senior ICE Narcotics and Contraband Smuggling Unit officials and field management officials from two of the eight ICE offices we contacted specifically cited the development of the new component for tracking cross-designation requests as a benefit of the implementation of the 2009 Agreement because the lists can be updated in real time and kept current. A senior official from the ICE Unit reported that the new system is efficient and effective; however, ICE over the long term would like to develop a system to keep records for cross-designation requests that is searchable by agent name, among other items, and can accommodate the entry of comments on the agents requesting this authority. More importantly, according to these same officials, the type of system envisioned would also allow ICE headquarters to develop an archive that captures a record of the agents cross-designated during a specific time frame, which will assist with internal audits. ICE Unit officials reported that they have not yet developed this system due to resource constraints. Both senior DEA and ICE headquarters’ officials stated that the 2009 Agreement’s cross-designation provisions are much less bureaucratic and ICE officials report that the process has enhanced their flexibility in using investigative resources. DEA Operations and Management officials reported that the cross-designation process is more efficient since DEA now reviews one consolidated list of agents instead of numerous individual requests. ICE’s Deputy Director reported that implementing the 2009 Agreement’s cross-designation provision has been a major improvement to the process conducted under the 1994 Agreement. 
An official from the Unit explained that the previous cross-designation process was cumbersome for ICE agents because the paperwork for requesting Title 21 authority was complicated and time consuming. Moreover, these same officials reported that the list of cross-designated agents in each field office was continually out of date because the approval process took a long time and there was no mechanism in place to update the lists in real time. Similarly, management officials from seven of the eight ICE field offices we contacted also reported that the 2009 Agreement had somewhat to greatly improved the cross-designation process. ICE officials from these seven offices reported that in addition to the streamlined process, the lack of a limit on the number of cross-designated agents permitted field offices to increase the number of cross-designated agents available to investigate counternarcotics cases. One office reported that it even doubled its number of cross-designated agents. Moreover, the lack of a cap on the number of cross-designated agents and the subsequent increase has enhanced ICE’s allocation of investigative resources for counternarcotics cases. In our prior work, ICE officials reported that the cap limited ICE’s ability to accomplish its mission because agents who were not cross-designated could not pursue border-related drug-smuggling investigations into the United States. For instance, one ICE field office management official in a smaller interior office reported that the former cap was difficult because this office had a limited number of cross-designated agents and if one or two of these agents were out of the office, he did not have cross-designated agents for investigating counternarcotics cases. Similarly, another ICE field office management official reported that the 2009 Agreement allowed his office to have a pool of agents authorized to work counternarcotics investigations. 
Views from DEA management officials from the offices we contacted were mixed regarding the impact of the 2009 Agreement on the cross-designation process. In four of the eight DEA field offices that we contacted, management officials reported that the Agreement had neither improved nor worsened the cross-designation process. Additionally, in two of the eight DEA field offices, the management officials reported that the Agreement had somewhat to greatly improved the process. DEA and ICE also made revisions to the cross-designation process after the 2009 Agreement was implemented to respond to concerns from both agencies. Specifically, DEA and ICE were concerned with how to accommodate cross-designation needs that arise between the master-list reviews, which occur every 2 years. ICE officials stated that they would prefer that DEA review the cross-designation list every 90 days instead of every 2 years to capture these out-of-cycle requests. According to an official from ICE’s Narcotics and Contraband Smuggling Unit, the ability of ICE to request cross-designation authority for agents more often than every 2 years is important. An agent working a nonnarcotics investigation, such as a bulk cash smuggling case, may over the course of the investigation require Title 21 authority to continue to pursue the narcotics component. Further, this official stated that the ability to more frequently request cross-designation authority, or in emergency situations, is particularly important for ICE’s smaller field offices, which may only have one or two agents cross-designated but may need more agents with this authority for a particular investigation. Senior DEA Operations Management officials recognized that the frequent movement of ICE agents among the field offices over the course of 2 years justifies reviewing the master list more frequently. These officials explained that the most recent master list from ICE, which DEA reviewed, had more than 3,000 names. 
Reviewing updates to the list twice a year, instead of a master list every 2 years, will, according to these officials, ease the review process for DEA because they will review a smaller update and only make changes in their database for those agents needing updates. Negotiations regarding more frequent reviews concluded in 2010 and resulted in specific guidance that:

• ICE is to submit a master list of ICE agents requesting cross-designation authority to DEA every 2 years, but is also to transmit updates to the master list to DEA by January 5 and June 5 of each year.

• DEA is to notify ICE by January 20 and June 20 of the status of these requests.

According to senior DEA officials, they notified ICE that several of the agents listed on the first list ICE submitted in January 2011 as a result of the negotiations were no longer with the corresponding office or agency. ICE reevaluated the list and provided a revised master list to DEA, which DEA reviewed for approval of cross-designation authority, and sent back to ICE for revisions. As of June 30, 2011, ICE had finalized the list and was preparing to send it to DEA. The negotiations also specified that the cross-designation authority of the ICE agent is effective from the date DEA notifies ICE’s Narcotics and Contraband Smuggling Unit of approval, which addresses a concern raised by an ICE official from this unit regarding ambiguity over the official date of effectiveness. In addition, DEA and ICE also addressed the process for handling emergency cross-designation requests that occur between the twice yearly updates during these negotiations. If the need arises to immediately cross-designate an ICE agent, due to developments in an investigation or the reassignment of the agent, the ICE field office Title 21 Coordinator is to send an e-mail, including a justification and an anticipated time limit for the request, to ICE’s Narcotics and Contraband Smuggling Unit. 
DEA headquarters reviews the request, if possible, within a few hours or days and, if justified, approves cross-designation authority for a maximum of 60 days. As we recommended in March 2009 and as required by the subsequent 2009 Agreement, ICE has implemented the information-sharing provisions and is fully participating in DOJ’s Fusion Center and Special Operations Division. ICE is also taking action to address the information-sharing provision regarding the sharing of drug seizure, currency, and firearms data with the El Paso Intelligence Center. Table 2 shows the specific information-sharing provisions of the 2009 Agreement and the actions ICE has taken to become a full partner in these organizations. Both senior DOJ Fusion Center and DEA Special Operations Division officials reported that ICE is now a full partner in each of their respective organizations. Senior Fusion Center officials reported that ICE completed its data transfer ahead of schedule and is now sharing all required data with DOJ’s Fusion Center. Senior DEA and Special Operations Division officials reported that ICE’s information sharing with DOJ’s Division has improved since the 2009 Agreement. These same officials reported that ICE increased the number of records it provided to the Division by approximately 51 percent from fiscal years 2009 through 2010. Additionally, according to senior DEA and Division officials, ICE has filled all of its fiscal year 2010 position vacancies and received six additional positions for fiscal year 2011: three special agents and three intelligence research analysts. ICE has filled three of these six positions and is in the process of filling the remaining three vacancies. ICE is taking steps to complete the implementation of the information-sharing provision in the 2009 Agreement regarding the sharing of information, particularly drug seizure, currency, and firearms data, with the El Paso Intelligence Center. 
However, ICE has not yet transferred all of the data required by the provision to the Center. The Center depends on data from its law enforcement partners for informing, coordinating, and deconflicting counternarcotics investigations. A senior Homeland Security Investigations official reported that ICE shares information with the Center and has taken steps as described in table 2 to complete its implementation of this provision. According to El Paso Intelligence Center and ICE Homeland Security Investigations officials, ICE is working with the Center to complete the data transfer by fall 2011. ICE Homeland Security Investigations and Center officials reported that ICE has provided over 5 million records to the Center. Center officials requested that ICE provide additional follow-up data regarding Federal Drug Identification Numbers, which are unique identification numbers that EPIC assigns to each drug seizure that meets a certain threshold. According to Center officials, these data, which ICE failed to provide in the past, will assist them with determining duplicative records in the data set. ICE Homeland Security Investigations officials reported that ICE has provided the Center with Federal Drug Identification Number data from October 2000 to December 2010 and will include these data with all future transfers. Although the staffing of ICE personnel at the El Paso Intelligence Center is not required by the 2009 Agreement, Center officials reported that ICE has a total of 11 available positions at the Center. ICE filled 7 of these positions but has 4 vacancies, 2 of which recently occurred under emergency situations. Officials from ICE’s Office of Intelligence and Analysis reported that these vacancies will be filled by the end of fiscal year 2011. 
The 2009 Agreement established that ICE’s cross-designated agents have the authority to investigate narcotics smuggling with a clearly articulable nexus to the border, and states that deconfliction is paramount and mandatory in investigations. Agencies deconflict to (1) ensure officer safety and (2) prevent one agency’s law enforcement activity from compromising the other agency’s ongoing investigation because agencies invest extensive time and resources in sophisticated law enforcement operations. Specifically, the 2009 Agreement set up a two-part deconfliction process that required (1) ICE to notify DEA of all counternarcotics investigations and DEA to notify ICE when DEA uncovers nondrug violations that fall under ICE’s mission (e.g., alien smuggling or human trafficking), and (2) DEA and ICE to use various mechanisms to deconflict counternarcotics enforcement operations locally, as appropriate. As part of deconfliction, the Agreement encouraged DEA and ICE field offices to participate in joint investigations and also set the expectation that operational deconfliction issues were to be resolved at the lowest level of authority (i.e., Group Supervisor). Additionally, the Agreement established a headquarters entity to resolve those issues that could not be resolved in the field. Table 3 presents the deconfliction provisions of the 2009 Agreement and summarizes the actions ICE and DEA took to implement them. After the Acting DEA Administrator and Assistant Secretary for ICE signed the Interagency Agreement in June 2009, DEA and ICE headquarters directed their respective field offices to develop local deconfliction protocols to implement the Agreement. DEA and ICE headquarters provided a template to be used by corresponding DEA and ICE offices to develop their respective local protocol. The template sets out general requirements for the Title 21 Coordinator position and deconfliction of operational activities. 
As a result, SACs covering the 21 DEA and 26 ICE field offices signed 28 protocols in 2010. (See app. II.) The local deconfliction protocols established specific policies applicable to the geographic area (e.g., states) covered by the protocol. Each local protocol described the process and mechanisms to be used for notification and deconfliction, detailing the types of information to be provided. The protocols also addressed particular issues or needs of the local DEA and ICE offices. These topics included the identification of the respective Title 21 coordinators; designation of points of contact in addition to the ASAC/Title 21 Coordinator when the protocol covered a large geographic area or multiple offices (e.g., protocols that included more than one state); invitation to DEA to participate in ICE counternarcotics investigations; participation in each other’s groups and task forces within the geographic area; and the process for coordinating investigative activities with other federal agencies, such as U.S. Customs and Border Protection. To implement the two-part deconfliction process established by the 2009 Agreement, DEA and ICE field offices we contacted reported using a variety of mechanisms. First, to implement the notification provisions, the eight DEA and eight ICE field offices reported using different mechanisms, such as through a worksheet or e-mail. In addition to the usual practice of deconflicting at the lowest levels, notification is to occur through the Title 21 Coordinators, who are responsible for ensuring that coordination and deconfliction take place. For example, ICE management in one of the offices stated that its office completes a deconfliction worksheet when it initiates a counternarcotics investigation or activity (e.g., warrants, undercover operations, or a controlled delivery of drugs) and sends the worksheet to the DEA Group Supervisor and DEA and ICE Title 21 Coordinators. 
The worksheet provides information on the investigation (e.g., the violation, suspect, and ICE office carrying out the investigation, among other things). Using this information, DEA is to determine (1) any overlap between this investigation and any of DEA’s investigations and (2) whether or not DEA will provide assistance and, if so, the type of assistance. DEA sends its response to the ICE Group Supervisor and Title 21 Coordinator. Second, DEA and ICE field offices also reported using a variety of deconfliction mechanisms, including processing information through local deconfliction center databases, sharing plans for conducting law enforcement actions, and processing information through the Special Operations Division database to deconflict operations and targets of investigations. Because of differences in the availability of mechanisms, DEA and ICE offices may use different mechanisms to deconflict law enforcement actions. For example, DEA and ICE offices without access to a local deconfliction center may share plans or communicate by telephone to deconflict operations. Additionally, according to DEA and ICE headquarters officials, because different databases contain different information, DEA and ICE use different databases to deconflict (1) law enforcement operations and (2) targets of investigations. For example, to deconflict an operation (e.g., an arrest) to prevent potential “blue on blue” situations, DEA and ICE offices may use the local High Intensity Drug Trafficking Area (HIDTA) database to determine whether another law enforcement agency is conducting an action at the same location or vicinity. To deconflict the targets of an investigation, information may be processed through the Special Operations Division to identify any connections to another agency’s investigation (e.g., the same phone number, name, or address). 
Specifically, the protocols for the eight corresponding DEA and ICE field offices that we contacted identified the local deconfliction center that was to be used to deconflict counternarcotics investigations in the respective area of operation. DEA and ICE field offices generally reported entering information (e.g., names, phone numbers, addresses, or location of the operation) into the local center’s database to identify other agencies’ operations that might conflict and lead to a “blue on blue” situation. In instances of a potential conflict, the systems usually generate contact information, notifying both agencies, so that the operations may be deconflicted. However, within the geographic area encompassed under a protocol, not all suboffices had access to a HIDTA database; consequently, these offices relied on other mechanisms to deconflict. For example, in a rural area or large state with a small population, an ICE agent might contact the local DEA office by phone, and vice versa. Field management or first-line supervisors in other ICE offices reported that they deconflicted by providing DEA with plans for a law enforcement activity. First-line supervisors in the eight DEA and eight ICE field offices reported that deconfliction usually took place at the lowest level—agent to agent, Group Supervisor to Group Supervisor, but no higher than the ASAC/Title 21 Coordinator. Any problems in deconfliction were elevated up the chain of command from the agent to the Group Supervisor, but generally not beyond the ASAC, and rarely to the SAC. In addition to the 2009 Agreement, DEA and ICE field office management and first-line supervisors identified other factors, such as colocation and task forces, that enhanced coordination and deconfliction between DEA and ICE in the field offices. 
Appendix II provides information on other factors that support deconfliction and incentives to pursue joint investigations identified by the eight DEA and eight ICE field offices we contacted. Field management in the DEA and ICE offices we contacted generally said that DEA and ICE had been taking actions to deconflict law enforcement actions locally prior to the implementation of the 2009 Agreement and local protocols. However, management officials in 12 of the 16 DEA and ICE offices also reported that the 2009 Agreement and local deconfliction protocols had generally improved local deconfliction by mandating deconfliction and as a result better ensuring officer safety and maximizing resources. Specifically, in all of the DEA offices, field management said that the 2009 Agreement and local protocols had somewhat improved or greatly improved the deconfliction process, finding the mandatory deconfliction provision to be helpful. They said that the process had improved because the Agreement and local protocols (1) reminded ICE to notify DEA of ICE counternarcotics investigations, (2) brought more attention to the tools available for deconflicting DEA and ICE investigations, and (3) mandated deconfliction, which resulted in agents following the established policy. Moreover, among the eight DEA offices we contacted, first-line supervisors in five offices generally reported that the deconfliction process had somewhat improved and those in the remaining three offices generally reported that it had neither improved nor worsened. Specifically, supervisors from these three offices explained that prior to the Agreement, relationships between DEA and ICE had been good and such positive relationships continued. Field management officials in four of eight ICE offices responded that the 2009 Agreement and local protocols had somewhat or greatly improved the deconfliction process and officials from the remaining four offices responded that it had neither improved nor worsened. 
Officials from the offices reporting that the process had somewhat improved explained that the Agreement and local protocol had created a common understanding between DEA and ICE as to what was expected by way of deconfliction, rather than leaving deconfliction to agents’ good will. Moreover, officials said that the Agreement and protocol had gone a long way toward protecting officer safety by leaving little room for interpretation about when and how DEA and ICE were to deconflict. Officials from the ICE offices reporting that deconfliction had neither improved nor worsened said that prior to the implementation of the Agreement local DEA and ICE relationships had been good, local deconfliction centers had been working well, or there were no local challenges. First-line supervisors in seven of eight ICE offices generally reported that the deconfliction process had neither improved nor worsened. Even with generally reported improvement in deconfliction between DEA and ICE as a result of the two-part deconfliction process provided for in the 2009 Agreement, ICE Narcotics and Contraband Unit officials saw the need to expand notification by DEA into two areas not covered by the Agreement. First, these officials said that DEA’s providing notification to ICE of DEA counternarcotics investigations on the border would be helpful. Similarly, ICE management officials and first-line supervisors in three of the eight ICE field offices and supervisors in a fourth office generally believed that DEA should notify ICE of counternarcotics investigations that might overlap with an ICE investigation, such as those along the border, to avoid possible “blue on blue” situations as well as duplication of effort. For example, according to ICE management in one office, ICE may be investigating an organization that is moving people north or guns south across the southern border, while DEA is investigating the drug-trafficking activities of the same organization. 
Second, ICE Narcotics and Contraband Unit officials said that DEA notification to ICE of DEA financial investigations of bulk cash also would be helpful. However, DEA headquarters officials believed that notification issues had been resolved during the negotiation of the 2009 Agreement and underscored that ICE officials had not raised these issues at the May 31, 2011, meeting of the HRT. Specifically, with regard to DEA notifying ICE of DEA bulk cash investigations, DEA headquarters officials said that the issue had been resolved during the negotiation of the 2009 Agreement. They explained that, at that time, DEA and ICE officials had agreed not to require DEA to notify ICE of DEA counternarcotics investigations involving quantities of currency or currency equivalents (e.g., gold, natural resources, real estate, precious gems), because it would be too burdensome to DEA as most DEA counternarcotics investigations involve drug proceeds. DEA headquarters officials said that it was not feasible for DEA to notify ICE of every investigation involving drug proceeds, but observed that the expansion of ICE's bulk cash section at the El Paso Intelligence Center and the information available through the section might help to mitigate this situation. ICE's Office of Intelligence and Analysis officials told us that ICE's Bulk Cash Smuggling Center, located in Vermont, was working with the El Paso Intelligence Center to establish a Bulk Cash Smuggling Center Intake and Analysis Section at the El Paso Intelligence Center. These officials and El Paso Intelligence Center officials said that as of June 2011, they were finalizing the protocols that will be used. Additionally, ICE's Assistant Director for Operations stated that the bulk cash section at the El Paso Intelligence Center will help deconflict and share information with state and local law enforcement and eventually provide a useful repository of bulk cash information.
Furthermore, ICE’s Assistant Director for Operations stated that during the HRT meeting, ICE did not raise the issue of DEA notifying ICE of DEA counternarcotics investigations, specifically those involving currency or currency equivalents. According to this official, ICE management has evaluated this issue and concluded that such notification was not necessary so long as DEA agents use the local deconfliction centers and ICE agents continue to communicate with DEA locally. She said that the existing and newly established deconfliction mechanisms are sufficient to provide notification. ICE’s Assistant Director for Operations also said that all SACs are aware that ICE is proceeding with the implementation of the 2009 Agreement and at the next leadership conference, ICE headquarters will reaffirm this message. DEA and ICE agreed that no modifications to the Agreement will be pursued at this time and issues raised by either agency will be resolved as their relationship develops. She further stated that if deconfliction mechanisms are found to be insufficient in the future, ICE would pursue discussions with DEA regarding notification. According to our March 2009 report, lack of parameters for what constitutes a border or port-of-entry smuggling operation hindered collaboration between DEA and ICE. The 2009 Agreement sought to clarify the parameters of ICE’s Title 21 authority and what constitutes a clearly articulable nexus to the border by specifying that an investigation does not have a nexus simply because at one time the narcotics crossed the border or came through a port of entry or solely because a buyer purchased drugs from a cross-border smuggler. Regarding the extent to which the Agreement affected the understanding of the nexus to the border among DEA and ICE agents, the DEA and ICE offices we contacted varied in their responses. 
Specifically, field management from five of eight DEA offices reported that the understanding had somewhat or greatly improved; among the remaining offices, officials from two reported that the understanding had neither improved nor worsened and officials from one reported that it had somewhat worsened. For example, officials in one of the five DEA offices explained that the determination of the nexus begins with ICE providing DEA with a plan for an operation, which prompts discussions between DEA and ICE agents and first-line supervisors regarding ICE's role in the investigations. These officials reported that in their respective areas of operation, they had no issues in determining nexus. In the one DEA office reporting worsening, officials—both management and first-line supervisors—reported that the understanding of the nexus had somewhat worsened because of the way in which their ICE counterparts were interpreting the nexus, despite its being defined in the Agreement. (See below.) In contrast, ICE management in all eight of the offices contacted responded that the understanding of the nexus had neither improved nor worsened. However, the explanations for these responses varied, including that (1) the 2009 Agreement had not changed the meaning of the nexus; (2) the nexus is determined on a case-by-case basis; and (3) the local DEA office had never raised questions about ICE's determination of a nexus or, if it did, the DEA and ICE Group Supervisors or Title 21 Coordinators resolved the issue quickly. In terms of first-line supervisors, all ICE and five of eight DEA offices generally reported that the Agreement and local protocols had neither improved nor worsened the understanding of the nexus to the border. The supervisors in the remaining three DEA offices generally reported that the understanding of the nexus had improved.
Although DEA and ICE officials said there was a general understanding of the nexus, officials from DEA and ICE field offices identified two issues that still posed challenges to agents determining the nexus to the border in some offices. First, field management in three of the eight DEA offices, as well as first-line supervisors in two of eight offices, for a total of four DEA offices, said they were confused about the corresponding ICE office's interpretation of the nexus in some cases. They raised the concern that ICE agents were not following the Agreement by stating that a particular investigation has a nexus to the border because "all drugs come across the border," thereby allowing ICE to pursue the investigation. The continued misunderstanding of the nexus was underscored when an ICE Group Supervisor from one of the corresponding ICE offices explained to us that it was easy to articulate a nexus to the border and justify ICE's pursuing a counternarcotics investigation, because cocaine, marijuana, and heroin are not grown in the United States but come from across the border. However, other ICE supervisors participating in the interview said that the fact that drugs come over the border was not by itself adequate justification to show a nexus; other factors, such as the individual's involvement in an international trafficking organization, had to be considered. Similarly, an ICE headquarters Narcotics and Contraband Smuggling Unit official said that ICE's counternarcotics investigations are to involve the staging, transporting, importing, or exporting of drugs and that the argument that all drugs cross the border was not sufficient to show the nexus for such an investigation. Second, first-line supervisors in two of eight ICE offices said that the involvement of foreign nationals in drug trafficking should be justification for ICE to pursue a counternarcotics investigation in light of ICE's immigration responsibilities.
However, field management in the corresponding DEA offices raised the concern that this interpretation of the nexus led to ICE's investigation of domestic drug cases (e.g., alien criminal gangs selling drugs retail within the United States), which is not in compliance with the Agreement. ICE headquarters Narcotics and Contraband Smuggling Unit officials told us that for ICE to be able to pursue a Title 21 counternarcotics investigation, it was not sufficient that criminal aliens were selling drugs; the criminal aliens had to be involved in staging or smuggling drugs. If the drug activity is domestic, then ICE is to turn the investigation over to DEA. Both DEA and ICE headquarters officials said that issues related to nexus were generally being handled by field office managers at the local level. ICE's Assistant Director for Operations said that the nexus issue (1) had not been elevated to her level within ICE and (2) had not been raised by DEA at the HRT. This official said that she spoke with the SACs regularly and they had raised the nexus issue in very few circumstances. She explained that nexus issues associated with a specific case are resolved at the local level. Similarly, ICE headquarters HSI officials said that resolving nexus issues between DEA and ICE is a local leadership issue, not a matter for ICE headquarters. Furthermore, they said it was not possible to craft guidance that specifically defines what a nexus is and lays out what DEA and ICE investigate. At headquarters, DEA senior officials said that they believed that ICE headquarters officials understood the meaning of a "clearly articulable nexus to the border" and that corresponding DEA and ICE field offices were generally not experiencing issues, although individual ICE agents may misinterpret the nexus to pursue a particular investigation. These officials said that if ICE follows the language in the 2009 Agreement, there is no problem.
According to DEA and ICE officials, the agencies have each primarily used established processes to monitor the implementation of the 2009 Agreement. DEA and ICE conducted ongoing monitoring of implementation and did not identify any systemic implementation issues. According to DEA and ICE headquarters officials, the May 2011 meeting of the HRT constituted a review of the 2009 Agreement and affirmed that there were no overarching or systemic issues of coordination or deconfliction requiring headquarters-level intervention. DEA and ICE headquarters officials told us that during the first year of the 2009 Agreement they conducted ongoing monitoring through agency management and supervisory activities, as well as through other oversight mechanisms (e.g., e-mails and phone calls) to identify any systemic implementation issues. Such ongoing monitoring is consistent with internal control standards, which call for ongoing monitoring to occur in the course of normal operations (i.e., regular management and supervisory activities). Specifically, DEA headquarters officials told us that they had continuously monitored the implementation of the 2009 Agreement during the first year of implementation because they found this approach to be advantageous. To ensure that the 2009 Agreement was being properly implemented, monitoring was conducted at various levels of DEA. According to DEA headquarters officials, the DEA Administrator and ICE’s Assistant Secretary met regularly, as did the DEA and ICE Deputy Chiefs of Operations. DEA headquarters routinely coordinated with field offices, as well as ICE headquarters, to ascertain how the Agreement was working in the field. DEA’s Deputy Operations Chief spoke with the SACs on a regular basis to help monitor implementation of the Agreement. Similarly, ICE’s Deputy Director said that monitoring activities occurred both internal to ICE and in conjunction with DEA. 
For instance, both ICE headquarters and field management said that they discussed the implementation of the 2009 Agreement with their DEA counterparts and the DEA and ICE Title 21 ASACs held regular meetings. Additionally, ICE headquarters' Narcotics and Contraband Smuggling Unit established a Title 21 e-mail box to solicit feedback from the field offices on the implementation of the 2009 Agreement, but as of June 2011, the Unit Chief said that he had not received any systemic complaints through this mechanism. DEA and ICE field office management and first-line supervisors in the 16 offices said that the usual mechanism for providing feedback on field issues is through the supervisory chain of command––agent to Group Supervisor to ASAC to SAC. DEA and ICE first-line supervisors said that they usually resolved issues at the case agent, Group Supervisor, and ASAC levels, but infrequently elevated them to the SAC level because they were usually able to resolve them at the lower levels. In the 16 offices, supervisors said that they would use the chain of command to raise issues or provide feedback. For example, if their agents raised issues regarding the 2009 Agreement, they would provide this feedback to their appointed Title 21 Coordinator. Similarly, DEA and ICE ASACs reported using their chain of command to report any issues with the 2009 Agreement and Title 21 matters. Field management officials in the eight ICE offices reported being asked by ICE headquarters whether they had any problems or issues regarding the implementation of the Agreement. In the eight DEA offices, field management officials said that issues with the Agreement would be passed to DEA headquarters through the chain of command. DEA and ICE headquarters officials said that, as of June 2011, the agencies had not identified any problems affecting the implementation of the 2009 Agreement that could not be addressed through the chain of command.
With the exception of the issues leading to the modification of the procedures for updating the cross-designation list, previously discussed, DEA and ICE headquarters and field office officials interviewed reported that no issues affecting the implementation of the 2009 Agreement had been raised through its monitoring mechanisms. Additionally, they said that no issues had been raised that required the intervention of the HRT to resolve. DEA and ICE field management and first-line supervisors in all 16 of the offices we contacted reported that they had provided no substantial feedback on concerns regarding the 2009 Agreement to their respective headquarters. Moreover, field management in 7 of the 8 DEA field offices and 5 of the 8 ICE field offices specifically stated that they had not forwarded any issues up the chain of command. Our review substantiated two issues that were raised through the chain of command and were resolved without convening the HRT. The first, as previously discussed, was that DEA and ICE negotiated and agreed to modifications to the process for cross-designating ICE agents established under the 2009 Agreement. The second issue involved DEA field office concerns regarding what they believed to be an expansion of ICE’s role in investigating drug-related bulk cash cases at international airports (e.g., following the cash on flights within the United States and beyond the border), which had been raised through the DEA chain of command. The DEA Deputy Operations Chief confirmed that he had received phone calls from DEA field offices regarding this matter, discussed the issue with his ICE counterpart, and identified the source of the confusion to be an ICE draft policy that several ICE field offices had implemented. After resolving the issue with ICE headquarters, he said that he contacted the DEA offices involved and provided the correct information. 
He said that DEA field management in the offices that had raised the issue then worked with their ICE counterparts to resolve the issue locally. However, as previously discussed, DEA and ICE field management and first-line supervisors identified two issues that continued after the implementation of the Agreement and had not been resolved through existing mechanisms. First, management and first-line supervisors in three of the eight ICE field offices and first-line supervisors in a fourth office generally believed that DEA should notify ICE of DEA counternarcotics investigations that might overlap with an ICE investigation, such as those along the border, and investigations involving bulk cash. As previously discussed, DEA believed that the notification issue had been resolved during the negotiation of the Agreement and noted that ICE had not raised the issue at the HRT. ICE's Assistant Director for Operations confirmed that ICE did not raise the notification issue, having concluded that such notification was not necessary so long as DEA agents use the local deconfliction centers and ICE agents continue to communicate with DEA locally. She said that the existing and newly established deconfliction mechanisms are sufficient to provide notification. Second, management or first-line supervisors in four of eight DEA field offices said that ICE agents in their area continued to experience challenges regarding the determination of a nexus to the border. DEA and ICE headquarters officials believed that the Agreement clearly defined what constituted a nexus to the border. Specifically, these DEA officials said that if ICE follows the language in the 2009 Agreement, there is no problem that could not be resolved in the field. ICE's Assistant Director for Operations said that the nexus issue (1) had not been elevated to her level within ICE and (2) had not been raised by DEA at the HRT.
This official explained that nexus issues associated with specific cases are resolved in the field. In May 2011, subsequent to our interviews, ICE headquarters sent out an 11-question survey seeking input from ICE field office SACs as well as ICE Attachés, assigned to international offices, in an effort to measure the effectiveness of the Agreement. The survey questions addressed the four core areas of the Agreement: information sharing between ICE and DEA; ICE Title 21 authority; Title 21 deconfliction and operational coordination; and foreign investigations. Respondents were asked to answer the questions using a 5-point scale from strongly agree to strongly disagree and to provide a brief statement and, if possible, specific examples if they disagreed or strongly disagreed. ICE's Deputy Director told us that such a survey could be helpful in determining how well the Agreement was working in field offices, especially before a meeting of the HRT. According to ICE, the SACs surveyed generally reported that the 2009 Agreement had helped to define and streamline the cross-designation process, but the effect of the Agreement on deconfliction between DEA and ICE was negligible because local protocols existed prior to the implementation of the Agreement. Additionally, while all SACs reported that the Agreement had formalized the information-sharing process, some SACs expressed frustration over DEA not having to share information with ICE to the same extent as ICE was required to share information with DEA. That is, according to ICE's Assistant Director for Operations, the survey identified the same issue raised during our ICE field office interviews, as previously discussed––DEA providing notification to ICE of DEA counternarcotics investigations that might overlap with an ICE investigation, such as those along the border, and investigations involving bulk cash.
However, according to ICE, the purpose of the Agreement was to delegate DEA's Title 21 enforcement authorities to ICE, which requires information sharing on ICE's behalf, not to address overall information sharing between DEA and ICE. Furthermore, ICE's Assistant Director for Operations told us that ICE leadership did not believe that the issue was pervasive enough to change the provisions of the 2009 Agreement and that ICE headquarters will communicate this position at its next leadership conference. Additionally, this official said that respondents did not raise any issues regarding the interpretation of the nexus to the border. Similarly, ICE managers we interviewed did not identify issues regarding the interpretation of the nexus, although our review identified DEA field office concerns about ICE's interpretation of the nexus in some cases. In our March 2009 report, we recommended that the Attorney General and the Secretary of Homeland Security develop processes for periodically monitoring the implementation of the new MOU or other mechanism to be established between DEA and ICE, and make any needed adjustments. The 2009 Agreement provides for monitoring through the HRT, which is to periodically review the performance of the Agreement. The Agreement called for the HRT to review the performance of the Agreement after 1 year and then every 2 years, or at any time, upon a written request by either DEA or ICE. The reviews are to be the joint responsibility of the DEA Chief of Operations and ICE Director of Investigations, or their designees, and both DEA and ICE are to cooperate in resolving any issues and executing any necessary or appropriate modifications to the 2009 Agreement.
Monitoring through the HRT is consistent with internal control standards, which state that separate evaluations of control, in addition to ongoing monitoring, can also be useful, but their scope and frequency should depend primarily on the assessment of risks and the effectiveness of ongoing monitoring procedures. According to DEA and ICE headquarters officials, the HRT convened on May 31, 2011. ICE officials said that DEA and ICE met at that time because it was one of the first available dates on which senior-level officials from both agencies had sufficient time in their schedules to convene an HRT. Up to that time, no deconfliction or coordination issues requiring the HRT to intervene had been raised. The HRT meeting affirmed that there were no overarching or systemic issues of coordination or deconfliction that would require headquarters-level resolution. DEA and ICE agreed to keep the lines of communication open between them to promptly resolve any systemic issues that could not be reconciled at the lowest possible level. Additionally, the HRT meeting constituted a review of the 2009 Agreement. The participants agreed that future reviews would be conducted in accordance with the time line outlined in the Agreement. Accordingly, the next review is to occur in 2013, provided there are no requests from either agency to conduct a review in the interim. We requested comments on a draft of this report from DOJ and DHS. DOJ did not provide official written comments to include in our report. However, in an e-mail received on July 19, 2011, the DEA liaison stated that DEA appreciated the opportunity for review and comment on the draft report and DEA had no further comments. DEA provided written technical comments, which we incorporated as appropriate. We received written comments from DHS on the draft report, which are reproduced in full in appendix IV.
DHS thanked GAO for the opportunity to review and comment on the draft report and stated its appreciation for GAO’s work in planning and conducting its review and issuing the report. DHS further noted the report’s positive acknowledgment that ICE and DEA have taken actions to enhance interagency collaboration in combating narcotics trafficking, and that these actions have, in part, helped maximize resources available to conduct counternarcotics investigations. Additionally, ICE provided written technical comments, which we incorporated as appropriate. We are sending copies of this report to the Attorney General and Secretary of Homeland Security. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact Eileen Larence at (202) 512-6510 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our overall objective was to assess the status of the implementation of the 2009 Interagency Cooperation Agreement (2009 Agreement) between the Drug Enforcement Administration (DEA) and Immigration and Customs Enforcement (ICE) on counternarcotics investigations. This report addresses the following questions: 1. To what extent have DEA and ICE taken actions in their respective domestic offices to implement the provisions of the 2009 Agreement addressing the cross-designation of ICE agents to pursue counternarcotics investigations, information sharing, and deconfliction of counternarcotics investigations? 2. To what extent have DEA and ICE taken actions to monitor the implementation of the 2009 Agreement in their respective domestic offices, and make needed adjustments? 
To assess the extent to which DEA and ICE have taken actions to implement the cross-designation, information-sharing, and deconfliction provisions of the 2009 Agreement, we analyzed documentary and testimonial evidence from DEA and ICE officials responsible for negotiating and implementing the Agreement. Specifically, we analyzed the 2009 Agreement to determine how it establishes policies and procedures for cross-designating ICE agents to conduct counternarcotics investigations, provides for the sharing of information pertinent to counternarcotics investigations, and addresses deconfliction and coordination between DEA and ICE on counternarcotics investigations. As agreed with your offices, we did not analyze the international provisions of the 2009 Agreement because these provisions pose unique sensitivity and law enforcement issues depending on the country or region involved. We also compared and contrasted the provisions of the 2009 Agreement with the prior 1994 DEA and U.S. Customs Service interagency agreement to determine similarities and differences in their provisions and assess how the 2009 Agreement addresses the pertinent recommendations in our March 2009 report. Additionally, we analyzed related interagency agreements (e.g., the August 2009 Agreement between the Organized Crime Drug Enforcement Task Force (OCDETF) and ICE) augmenting the provisions of the 2009 Agreement to determine the extent to which DEA and ICE are to coordinate and share information. Additionally, we interviewed DEA and ICE headquarters officials responsible for negotiating the 2009 Agreement to determine the priorities of each agency, issues on which the agencies differed, how officials negotiated and resolved these differences, and any remaining challenges. 
Furthermore, we analyzed relevant DEA and ICE headquarters documentation used to disseminate information on the provisions in the 2009 Agreement to DEA and ICE field offices, including directives to field offices describing actions to be taken to implement the Agreement, guidance for developing and establishing local deconfliction protocols, and policies and procedures to implement the provisions for cross-designating ICE agents to conduct counternarcotics investigations. Additionally, we compared and contrasted the cross-designation, information-sharing, and deconfliction provisions of the Agreement with the actions taken by DEA and ICE to implement these provisions to determine the extent to which the agencies had implemented the Agreement in these three areas. Two GAO analysts independently reviewed the information on actions DEA and ICE took related to each provision to determine the extent to which they were addressed and compared their results. No differences occurred between these analysts' assessments. Specifically, we determined whether these actions were (1) fully implemented––met all provision requirements; (2) partially implemented––met some but not all provision requirements (i.e., actions were taken, but further actions were necessary to complete the implementation of the provisions); or (3) not implemented––met none of the provision requirements. In addition, to obtain a field perspective on the provisions of the 2009 Agreement and its implementation, we conducted telephone interviews in DEA and ICE field offices with two separate groups––(1) field management and (2) first-line supervisors in selected offices.
Specifically, we interviewed

• Management-level officials, including the Special Agent in Charge (SAC) and Assistant Special Agents in Charge (ASAC), particularly those ASACs assigned by their respective DEA or ICE offices to serve as Title 21 coordinators, who are to ensure cooperation, communication, coordination, and deconfliction in Title 21 matters affecting their respective offices; and

• First-line supervisors, including Resident Agents in Charge (RAC) and Group Supervisors, who are responsible for supervising agents, have experience coordinating with their DEA or ICE counterparts on drug investigations, and are located in the areas covered by the protocol.

To make our selection of DEA and ICE offices, we analyzed the total 28 local deconfliction protocols, developed per the 2009 Agreement, to identify the corresponding DEA (21) and ICE (26) domestic field offices responsible for implementing each protocol. We selected 8 DEA and 8 ICE offices, each pair responsible for implementing the same protocol. To make our selection we used the following criteria: geographic dispersion (e.g., northern border, southern border, and interior); the number and proportion of ICE agents cross-designated to conduct counternarcotics investigations; unique provisions in the local deconfliction protocols, which vary from the template DEA and ICE headquarters distributed to the field offices; metropolitan areas most often identified as originating and receiving drug shipments by the Department of Justice's 2010 National Drug Threat Assessment; and sites recommended by DEA or ICE headquarters officials to illustrate different enforcement situations that could affect implementation of the Agreement. The 8 corresponding offices we selected were: San Diego, California; Houston/San Antonio, Texas; Miami, Florida; Seattle, Washington; New York City/Buffalo, New York; Detroit, Michigan; Atlanta, Georgia; and St. Louis, Missouri/St. Paul, Minnesota.
Table 4 shows the area covered by the eight protocols and the corresponding DEA and ICE offices responsible for implementing each of these protocols. During these interviews, we discussed the actions taken by the field offices to implement the 2009 Agreement; field perspectives on how the Agreement was working (e.g., the extent to which the Agreement addressed problems; if it did, how was this done; and, if not, any possible solutions); implementation of local deconfliction protocols (e.g., the extent to which they addressed problems; if they did, how was this done; and, if not, any possible solutions); benefits and challenges of the 2009 Agreement; and any remaining challenges and possible solutions. We analyzed the information obtained through the interviews and summarized the views of each group. For each field site, we compared and contrasted the perspectives of DEA and ICE interviewees to identify similarities and differences in their views. We also compared and contrasted headquarters officials' views with those of field officials. We assessed the actions taken against the provisions of the 2009 Agreement. Additionally, we analyzed whether each agency's measures for assessing performance support participation in interagency investigations. (See app. II.) Furthermore, we asked officials in DEA and ICE field offices about their perceptions of any changes that had occurred due to implementation of the 2009 Agreement. However, we were not able to assess whether changes had actually occurred because there were no defined measures to demonstrate change and no quantitative data from the period prior to the Agreement against which to assess any changes even if measures had been identified. Because we conducted group interviews and did not select the field offices randomly, our results are not generalizable to all DEA and ICE field offices nationwide.
However, this information allowed us to provide perspectives about the implementation of the 2009 Agreement and illustrative examples of what is and is not working well. To assess the extent to which DEA and ICE have taken actions to monitor the implementation of the 2009 Agreement in domestic offices and make any needed adjustments, we analyzed the 2009 Agreement to identify actions to be taken to monitor its implementation and any documentation, such as guidance or communications to the field from DEA and ICE headquarters describing how the implementation of the Agreement has been monitored and any mechanisms in place to obtain information on how the Agreement is working in the field. We compared and contrasted any actions taken to implement the monitoring of the Agreement with internal control standards in the federal government and the monitoring process described in the 2009 Agreement. In addition, we interviewed DEA and ICE headquarters officials to ascertain actions taken to monitor and obtain feedback from the field on the implementation of the 2009 Agreement and identify any adjustments made based on the results of the monitoring process. Also, in our interviews of DEA and ICE management officials and first-line supervisors in selected offices, we asked about any actions taken to monitor the implementation of the Agreement in their offices, as well as mechanisms available to the field to provide feedback to headquarters on how the 2009 Agreement is working. Information from the interviews is not generalizable to all DEA and ICE field offices nationwide, but provided illustrative examples of these efforts. We conducted this performance audit from September 2010 through July 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. After the Acting DEA Administrator and the Assistant Secretary for ICE signed the Interagency Agreement in June 2009, DEA and ICE headquarters directed their respective field offices to develop local deconfliction protocols to implement the Agreement. As a result, SACs covering the 21 DEA and 26 ICE field offices signed 28 protocols in 2010. Table 5 presents the area covered by each of the 28 protocols and the respective DEA and ICE office of the SAC who signed the protocol. DEA and ICE field office management and first-line supervisors also identified factors, beyond the mechanisms enumerated in the 2009 Agreement, that enhanced deconfliction between DEA and ICE in the field. DEA and ICE field management and first-line supervisors interviewed generally believed that colocation of DEA and ICE agents in each other’s offices or on strike forces and task forces enhanced coordination, deconfliction, and information sharing because colocation facilitated interaction between DEA and ICE agents and enabled them to obtain information from the other agency’s data systems. For example, a colocated ICE agent could ask a DEA counterpart to check information through DEA’s Analysis and Response Tracking System (DARTS), or a DEA agent could ask the ICE agent to check information in ICE’s TECS system. Agents on task forces may have direct access to other agencies’ data systems. For example, an ICE agent on a DEA task force may have access to DARTS. DEA and ICE agents also continue to use personal contacts and relationships to deconflict, in addition to these mechanisms or as an alternative when, for example, no HIDTA is located in the area. According to ICE headquarters officials, ICE is also providing its agents access to additional tools to enable them to deconflict investigations with DEA.
For example, the ICE headquarters Law Enforcement Support and Information Management unit deployed a Web-based deconfliction tool that allows ICE agents to deconflict ICE subject information with DEA subject information. Additionally, ICE is in the process of providing its agents with access to DEA’s Internet Connectivity Endeavor (DICE). DICE is hosted on the internet for state, local, and other federal law enforcement personnel who do not have access to the DARTS application. ICE headquarters officials told us that ICE is taking actions to have DICE online and operating well in all SAC offices by November 2011, including providing proper training to enable agents to use the system properly. Through DICE, ICE agents will have the same access to nationwide information on all crimes as state, local, and other federal law enforcement personnel who do not have access to DEA’s DARTS application. The 2009 Agreement also encourages joint investigations. ICE headquarters officials said that the primary incentive for conducting joint investigations is the ability to harness and merge the capabilities of both agencies and maximize all available information. DEA headquarters officials cited the necessity of working joint investigations as the primary incentive for encouraging agents to participate in these types of investigations. DEA and ICE personnel at all levels reported that the following incentives are used to encourage joint investigations:

• Jointly working drug investigations, particularly for smaller offices, maximizes limited resources.

• Working joint investigations is essential to agents’ job duties, as a matter of course, particularly for larger, busier offices.

• Similarly, some offices reported that they have a strike force environment, which requires agents to work joint investigations, or are involved in task forces that provide opportunities for joint investigations. For example, agents from both DEA and ICE are encouraged to work Organized Crime Drug Enforcement Task Force (OCDETF) cases, which are major joint investigations that involve multiple agencies. DEA and ICE field officials cited the success of OCDETF investigations as an incentive for agents to work joint OCDETF cases.

• We reviewed DEA and ICE performance standards and identified measures for encouraging joint investigations and ensuring the success of drug investigations. Specifically, DEA’s performance elements included working relationships, defined as assisting others and building partnerships (including resolving past issues that may interfere with current or future partnerships) and working cooperatively with appropriate DEA groups and offices, other DEA divisions, and federal, state, local, and international agencies. One of ICE’s core competencies is teamwork and cooperation, which encompassed (1) building effective partnerships that facilitate working across boundaries, groups, or organizations and (2) working constructively with others to reach mutually acceptable agreements to resolve conflicts.

• Drug seizure statistics, as well as the overall complexity and success of the investigations undertaken, are used to evaluate field offices and agents.

According to DEA and ICE officials, the agencies have each primarily used established processes to disseminate information about the 2009 Agreement. Additionally, the agencies incorporated information on the Agreement into existing agent training and manuals. After the signing of the 2009 Agreement, DEA and ICE headquarters disseminated information about the Agreement through their respective chains of command, communicated with field management about its provisions, and directed the SACs in corresponding DEA and ICE field offices to develop local deconfliction protocols to implement the Agreement, as previously discussed.
DEA and ICE ASACs, RACs, and first-line supervisors in the 16 field offices we contacted confirmed that their respective SACs had conveyed the information about the 2009 Agreement and the local deconfliction protocols through the field chain of command. Specifically, DEA and ICE SACs reported discussing the Agreement with ASACs, RACs, and Group Supervisors through teleconferences, staff meetings, and management conferences. Field office management also reported disseminating guidance to first-line supervisors––RACs and Group Supervisors. DEA and ICE first-line supervisors we interviewed generally reported that after receiving information on the Agreement and local deconfliction protocols, they discussed the implementation of the Agreement and protocol with their respective groups of agents. We also reviewed the extent to which the agencies provided training on the Agreement and protocols. DEA and ICE headquarters officials said that the agencies did not provide specific training on the 2009 Agreement to field management and agents because the agencies determined that such training was not needed, as the Agreement did not change agents’ counternarcotics investigative duties or practices. However, according to DEA officials, DEA agent basic training covered the topics addressed by the 2009 Agreement, such as defining the difference between a domestic and an international conspiracy, which addresses the determination of a nexus to the border. Additionally, DEA updated its agent manual in September 2009. The manual stated that the DEA liaison with ICE was to be governed by the June 18, 2009, Agreement and incorporated a copy of the entire Agreement. As a result, a DEA headquarters Operations official believed that extra or special training for implementing the 2009 Agreement was not necessary because within DEA there was a clear understanding of its implementation.
In particular, the Title 21 Coordinators, who were charged with managing the implementation of the 2009 Agreement at the field level, understood the Agreement and were expected to disseminate the information to their agents. Similarly, ICE headquarters officials said that ICE agents needed no special training about the 2009 Agreement because it, along with the local deconfliction protocols, set out in writing what agents were already doing in the field (e.g., deconflicting investigations with DEA). An ICE headquarters Narcotics and Contraband Smuggling Unit official reported that the 2009 Agreement was introduced in basic training to the agents who receive training on drug investigations. He said that ICE headquarters had not provided any additional guidance or training to ICE field offices regarding the implementation of the 2009 Agreement or the local deconfliction protocols because counternarcotics is part of ICE’s core mission and the 2009 Agreement did not change what ICE headquarters or field offices did to fulfill that mission. Furthermore, according to this official, ICE plans to update its Drug Smuggling Handbook once the protocols for internationally controlled deliveries and adjustments to the cross-designation process are finalized, making all revisions at one time rather than multiple minor updates. Additionally, DEA and ICE field management said that DEA and ICE agents did not need additional training because Group Supervisors were already deconflicting, had good relationships with their counterparts, were colocated, and served on joint task forces. Both DEA and ICE first-line supervisors also told us that senior agents explained the local deconfliction protocols to new agents in the office. In addition to the contact name above, Leyla Kazaz and Mary Catherine Hult managed this assignment. Robin D. Nye and Barbara A. Stolz made significant contributions to the work. Willie Commons provided significant legal support and analysis. David P. 
Alexander assisted with design and methodology. Lara R. Miklozek and Debra B. Sebastian provided assistance in report preparation. Tina Cheng developed the report graphics.
The 2010 National Drug Threat Assessment stated that the availability of illicit drugs is increasing. The Drug Enforcement Administration (DEA), in the Department of Justice (DOJ), works with Immigration and Customs Enforcement (ICE), within the Department of Homeland Security (DHS), to carry out drug enforcement efforts. DEA and ICE signed a 2009 Interagency Agreement (Agreement) that outlined the mechanisms to provide ICE with authority to investigate violations of controlled substances laws (i.e., cross-designation). The Agreement also required DEA and ICE to deconflict (e.g., coordinate to ensure officer safety and prevent duplicative work) counternarcotics investigations, among other things. GAO was asked to assess the Agreement's implementation. This report addresses the extent to which DEA and ICE have taken actions (1) to implement the Agreement's cross-designation, deconfliction, and information-sharing provisions and (2) to monitor implementation of the Agreement and make needed adjustments. GAO analyzed documents such as the 2009 Agreement, related interagency agreements, and directives to field offices. GAO also interviewed DEA and ICE headquarters officials as well as management officials and first-line supervisors in 8 of the 21 DEA and 8 of the 26 ICE field offices, selected based on geographic dispersion. Though not generalizable to all DEA and ICE offices, the interviews provided insights. DEA and ICE have taken actions to fully implement the cross-designation and deconfliction provisions of the Agreement, and are finalizing efforts to complete the information-sharing provisions. The Agreement allows ICE to select an unlimited number of agents for cross-designation consideration by DEA. The agencies have implemented these cross-designation provisions through a revised process that (1) elevated the levels at which requests are exchanged between the agencies and (2) consolidated multiple requests into one list of ICE agents.
This new process is more streamlined and has resulted in enhanced flexibility in maximizing investigative resources, according to ICE officials. Also, DEA and ICE implemented local deconfliction protocols and used a variety of mechanisms (e.g., local deconfliction centers) to deconflict investigations. Further, in May 2011 DEA and ICE convened the Headquarters Review Team (HRT), composed of senior managers from both agencies, who are, among other things, to resolve deconfliction and coordination issues that cannot be resolved at lower levels because they require management decisions. DEA and ICE headquarters and field office management officials GAO interviewed generally reported that the implementation of the Agreement and local deconfliction protocols had improved deconfliction by (1) ensuring officer safety and (2) preventing one agency's law enforcement activity from compromising the other agency's ongoing investigation. ICE has also partially implemented the Agreement's information-sharing provisions by sharing required data with two DOJ organizations that target drug trafficking organizations, and taking steps to share its drug-related data with a DEA organization focused on disrupting drug trafficking by fall 2011. DEA and ICE have conducted ongoing monitoring of the Agreement's implementation through established processes (e.g., supervisory chains of command), and according to officials from these agencies, the HRT did not identify any systemic issues. Specifically, DEA and ICE headquarters officials routinely coordinated with each other and their respective field offices to monitor the Agreement's implementation. DEA and ICE headquarters officials also said that the May 2011 meeting of the HRT, which is to periodically review the Agreement's implementation, constituted a review of the Agreement and affirmed that there were no overarching or systemic issues of coordination or deconfliction requiring headquarters-level intervention.
DEA and ICE provided technical comments, which GAO incorporated as appropriate.
Mr. Chairman and Members of the Subcommittee: We are pleased to have this opportunity to assist in your review of the Internal Revenue Service’s (IRS) operations. As you requested, our statement today will cover three areas: (1) IRS’ efforts to correct management and technical weaknesses that have impeded its Tax Systems Modernization (TSM) program, as well as whether IRS can successfully complete the program within the time frames and cost figures it has established; (2) IRS’ efforts to collect delinquent tax debts and deal with its accounts receivable problems; and (3) the viability of return-free filing as an option to the current tax filing system. Our testimony, which is based on past reports and ongoing work, makes the following points: IRS’ efforts to modernize tax processing are jeopardized by persistent and pervasive management and technical weaknesses. Our July 1995 report made specific recommendations that were intended to correct many of these weaknesses by December 31, 1995. IRS has initiated some activities to address these weaknesses. However, the weaknesses have not been corrected, and ongoing efforts provide little assurance that they will be. IRS has continued with plans to spend billions more on TSM solutions with little confidence of successfully delivering effective systems within established TSM time frames and cost figures. Inaccurate data and IRS’ antiquated and rigid collection process continue to hinder its efforts to stem the growth of its accounts receivable and improve collection of delinquent debts. Little progress has been made in resolving the underlying causes of these problems since 1988, when IRS’ accounts receivable was first identified as a high-risk area. Both the private sector and other government entities could offer IRS valuable lessons in improving its collections performance.
The size of IRS’ total inventory of tax debts—$166 billion at the end of fiscal year 1994—is deceiving because it is an accumulation of debts for a 10-year period and includes debts that are clearly uncollectible—i.e., those of defunct businesses and deceased taxpayers. The inventory also includes accounts that have been established for compliance reasons and that may not be valid receivables. According to IRS estimates, the net result is that only about 20 percent of the inventory, or about $35 billion, is potentially collectible. According to IRS data, collections of delinquent taxes, while increasing to $25.1 billion in fiscal year 1995, are still below the high of $25.5 billion collected in fiscal year 1990. Because of IRS’ decision to absorb fiscal year 1996 budget cuts by reducing collections staffing, IRS projects that collections will decrease about 13 percent in fiscal year 1996—to about $21.9 billion. While return-free filing could provide benefits to both the taxpayer and IRS, certain impediments would have to be overcome for successful implementation. Modernizing tax processing is key to IRS’ vision of a virtually paper-free work environment in which taxpayer information is readily available to IRS employees to update taxpayer accounts and respond to taxpayer inquiries. In July 1995, we reported on the need for IRS to have in place sound management and technical practices to increase the likelihood that TSM’s objectives will be cost-effectively and expeditiously met. A 1996 National Research Council report on TSM has a similar message. Its recommendations parallel the more than a dozen recommendations we made involving IRS’ (1) business strategy to reduce reliance on paper, (2) strategic information management practices, (3) software development capabilities, (4) technical infrastructures, and (5) organizational controls. 
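As a rough check on the figures above, the short script below simply recomputes the cited percentages from the dollar amounts in the testimony (the amounts are from the report; the arithmetic check is ours):

```python
# Recompute the percentages cited in the testimony from its own dollar figures.
# All amounts are in billions of dollars.

inventory_total = 166.0   # total tax-debt inventory at the end of FY 1994
collectible = 35.0        # IRS estimate of the potentially collectible portion
share = collectible / inventory_total
print(f"Potentially collectible share: {share:.0%}")   # roughly 21%, i.e. "about 20 percent"

fy1995 = 25.1             # delinquent-tax collections, FY 1995
fy1996_projected = 21.9   # IRS-projected collections, FY 1996
decline = (fy1995 - fy1996_projected) / fy1995
print(f"Projected FY 1996 decline: {decline:.0%}")     # roughly 13%, matching the report
```

Both cited figures are internally consistent: $35 billion is about 21 percent of $166 billion, and the projected drop from $25.1 billion to $21.9 billion is about 13 percent.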
The Treasury, Postal Service and General Government Appropriations Act of 1996 “fences” $100 million in TSM funding until the Secretary of the Treasury reports to the Senate and House Appropriations Committees on the progress IRS has made in responding to our recommendations, with a schedule for successfully mitigating the deficiencies we reported. The conference report on the act directed that we assess for the Committee the status of IRS’ corrective actions. As of March 4, 1996, the Secretary of the Treasury had not reported to the Committees on TSM. This testimony is a progress report to the Committee on actions taken, as reported to us by IRS officials. Our recommendations reflect the fundamental best practices of private and public sector organizations that have been successful in improving their performance through strategic information management and technology. These best practices are discussed in our report Executive Guide: Improving Mission Performance Through Strategic Information Management and Technology (GAO/AIMD-94-115, May 1994) and our Strategic Information Management (SIM) Self-Assessment Toolkit (GAO/Version 1.0, October 28, 1994, exposure draft). To evaluate IRS’ software development capability, we validated IRS’ August 1993 assessment of its software development maturity based on the Capability Maturity Model (CMM) developed by Carnegie Mellon University’s Software Engineering Institute, a nationally recognized authority in the area. This model establishes standards in key software development process areas (i.e., requirements management, project planning, project tracking and oversight, configuration management, quality assurance, and subcontractor management) and provides a framework to evaluate a software organization’s capability to consistently and predictably produce high-quality products. When we briefed the IRS Commissioner in April 1995 and issued our report documenting its weaknesses in July 1995, IRS agreed with our recommendations to make corrections expeditiously.
At that time, we considered IRS’ response to be a commitment to correct its management and technical weaknesses. In September 1995, IRS submitted an action plan to Congress explaining how it planned to address our recommendations. However, this plan, follow-up meetings with senior IRS officials, and other draft and “preliminary draft” documents received through early March 1996 have provided little tangible evidence that actions being taken will correct the pervasive management and technical weaknesses that continue to place TSM, and the huge investment it represents, at risk. Our ongoing assessment has found that IRS has initiated a number of activities and made some progress in addressing our recommendations to improve management of information systems; enhance its software development capability; and better define, perform, and manage TSM’s technical activities. However, none of these steps, either individually or in the aggregate, has fully satisfied any of our recommendations. Consequently, IRS today is not in an appreciably better position than it was a year ago to assure Congress that it will spend its 1996 and future TSM appropriations judiciously and effectively. We reported that IRS was drowning in paper—a serious problem IRS can mitigate only through electronic tax filings. We noted that IRS would not achieve the full benefits that electronic filing can provide because it did not have a comprehensive business strategy to reach or exceed its electronic filing goal, which was 80 million electronic filings by 2001. IRS’ estimates and projections for individual and business returns suggested that, by 2001, as few as 39 million returns may be submitted electronically, less than half of IRS’ goal.
We reported that IRS’ business strategy would not maximize electronic filings because it primarily targeted taxpayers who use a third party to prepare and/or transmit simple returns, are willing to pay a fee to file their returns electronically, and are expecting refunds. Focusing on this limited taxpaying population overlooked most taxpayers, including those who prepare their own tax returns using personal computers, have more complicated returns, owe tax balances, and/or are not willing to pay a fee to a third party to file a return electronically. We recommended that IRS refocus its electronic filing business strategy to target, through aggressive marketing and education, those sectors of the taxpaying population that can file electronically most cost-beneficially. IRS agreed with this recommendation and said that it had convened a working group to develop a detailed, comprehensive strategy to broaden public access to electronic filing, while also providing more incentives for practitioners and the public to file electronically. It said that the strategy would include approaches for taxpayers who are unwilling to pay for tax preparer and transmitter services, who owe IRS for balances due, and/or who file complex tax returns. IRS said further that the strategy would address that segment of the taxpaying population that would prefer to file from home, using personal computers. IRS has since initiated several efforts toward such a strategy, with a goal to reduce paper tax return filings to 20 percent or less of the total volume by 2000. These initiatives could result in future progress toward increasing electronic filings. However, these initiatives have yet to culminate in a comprehensive strategy that identifies how IRS will reach its electronic filings goal, including how it plans to target those sectors of the taxpaying population that can file electronically most cost-beneficially, and what efforts it will make to develop requisite supporting systems.
We also recommended that IRS take immediate action to implement a complete process for selecting, prioritizing, controlling, and evaluating the progress and performance of all major information systems investments, both new and ongoing, including explicit decision criteria, and, using these criteria, to review all planned and ongoing systems investments by June 30, 1995. In agreeing with these recommendations, IRS said it would take a number of actions to provide the underpinning it needs for strategic information management. IRS said, for example, that it was developing and implementing a process to select, prioritize, control, and evaluate information technology investments to achieve reengineered program missions. Since then, IRS has taken steps toward putting into place a process for managing its extensive investments in information systems. For example, IRS has

• created the executive-level Investment Review Board for selecting, controlling, and evaluating all information technology investments;

• developed initial and revised sets of decision criteria, which it used last summer to rank and prioritize TSM projects and used in November 1995 to recommend additional changes to information systems resource allocations, respectively;

• developed its Investment Evaluation Handbook and Business Case Handbook to strengthen management decision-making on systems investments; and

• begun using the Investment Evaluation Handbook to review operational TSM projects.

Although these steps represent some progress in responding to our concerns, none of them to date—individually or collectively—has fully satisfied our recommendations. IRS has not demonstrated that it is following a well-defined, consistent, and repeatable information technology investment decision-making process for selecting, controlling, and evaluating its information technology initiatives and projects.
In particular, working procedures, required decision documents, decision criteria, and reliable cost, benefit, and return data needed for an investment process are not complete. IRS has not provided evidence to demonstrate how analyses are being conducted on all systems investments using such data as expected improvement in mission performance, costs to date, technical soundness, or pilot performance. Instead, IRS operates on the assumption that it will receive a specified funding ceiling for systems development and technology, and then determines how much funding can be eliminated from projects in order to lower overall modernization costs to that level. Over the last few months, we have communicated several concerns to IRS about weaknesses in its current investment process that continue to raise risks and erode confidence in the quality of decisions being made about TSM investments. These include:

• the absence of initial screening criteria to determine if IRS has developed sufficient data about an information technology project—such as benefit-cost analyses, proposed return-on-investment calculations, and an accepted return-on-investment threshold level used as a decisional cut-off point—in order for the Investment Review Board to reach an informed funding decision;

• the lack of analysis and trade-offs being made among all proposed information technology investments as a single portfolio—such as spending on legacy, infrastructure, and proposed modernization projects—in order to fully justify a ranking and prioritization of modernization efforts; and

• the lack of mechanisms to ensure that the results of IRS’ investment evaluation reviews, such as that recently completed on the Service Center Recognition/Image Processing System, are being used to modify selection and control decision-making processes or to change funding decisions for projects.
We reported that, unless IRS improves its software development capability, it is unlikely to build TSM in a timely or economical manner, and systems are unlikely to perform as intended. To assess its software capability, in September 1993, IRS rated itself using the Software Engineering Institute’s CMM. IRS found that, even though TSM is a world-class undertaking, its software development capability was immature. IRS placed its software development capability at the lowest level, described as ad hoc and sometimes chaotic, indicating significant weaknesses. Our review also found that IRS’ software development capability was immature and weak in key process areas. For instance,

• a disciplined process to manage system requirements was not being applied to TSM systems,

• a software tool for planning and tracking development projects was not being used,

• software quality assurance functions were not well defined or consistently implemented,

• systems and acceptance testing were neither well defined nor required, and

• software configuration management was incomplete.

We recommended that IRS

• immediately require that all future contractors who develop software for the agency have a software development capability rating of at least CMM Level 2;

• before December 31, 1995, define, implement, and enforce a consistent set of requirements management procedures for all TSM projects that goes beyond IRS’ current request for information services process, along with procedures for software quality assurance, software configuration management, and project planning and tracking; and

• define and implement a set of software development metrics to measure software attributes related to business goals.
IRS agreed with these recommendations and said that it was committed to developing consistent procedures addressing requirements management, software quality assurance, software configuration management, and project planning and tracking. Regarding metrics, IRS said that it was developing a comprehensive measurement plan to link process outputs to external requirements, corporate goals, and recognized industry standards. Specifically regarding the first recommendation, IRS has (1) developed standard wording for use in new and existing contracts that have a significant software development component, requiring that all software development be done by an organization that is at CMM Level 2; (2) developed a plan for achieving CMM Level 2 capability on all of its contracts; and (3) initiated plans for acquiring expertise for conducting CMM-based software capability evaluations of contractors and designated personnel to perform these evaluations. We found, however, no evidence that all contractors developing software for the agency are being required to develop it at CMM Level 2. For example, our review of an IRS electronic filing system being developed by a contractor found that the system was being developed at CMM Level 1. With respect to the second recommendation, IRS is updating three software development life-cycle methodologies and has developed a draft quality audit procedures handbook, updated its requirements management request for information services document, and developed and implemented a requirements management course. IRS also evaluated its current contractor management processes, compared these processes with the CMM goals, and is considering improvement activities. However, to progress toward CMM Level 2, IRS must define and implement the detailed procedures to be used for completing the goals of CMM’s key process areas.
Based on our assessment, we have found some activities to address our recommendations, but IRS still has not allocated the resources needed to define and implement these areas. It appears that IRS software development projects will continue to be built using ad hoc and chaotic processes that offer no assurance of successful delivery. According to IRS, although phase one of its metrics effort has been completed, no metrics have been defined, and implementation is currently planned for sometime between June 1996 and January 1997. In this regard, although IRS has begun to act on our recommendations, systems are still being developed without the data and discipline needed to give management assurance that they will perform as intended. We also recommended that IRS, before December 31, 1995,

• complete an integrated systems architecture, including security, telecommunications, network management, and data management;

• institutionalize formal configuration management for all newly approved projects and upgrades and develop a plan to bring ongoing projects under formal configuration management;

• develop security concept of operations, disaster recovery, and contingency plans for the modernization vision and ensure that these requirements are addressed when developing information system projects;

• develop a testing and evaluation master plan for the modernization;

• establish an integration testing and control facility; and

• complete the modernization integration plan and ensure that projects are monitored for compliance with modernization architectures.

IRS agreed with these recommendations and said that it was identifying the necessary actions to define and enforce systems development standards and architectures agencywide. IRS’ current efforts in this area follow: IRS is developing a “descriptive overview” of an integrated systems architecture, which, for example, includes a security architecture chapter. A draft of the descriptive overview is due in April 1996, and an executive summary is due in mid-March.
IRS has developed and distributed a Configuration Management Plan template, which identifies the elements needed when constructing a configuration management plan, and established a charter for its Configuration Management branch. IRS has prepared a security concept of operations and a disaster recovery and contingency plan. IRS has developed a test and evaluation master plan for TSM. IRS is in the process of establishing an interim integration testing and control facility but has not determined an initial operating date. It is also planning a permanent integration testing and control facility, scheduled to be completed by the end of 1996. IRS has completed an informal draft of its TSM Release Definition Document and a draft of its Modernization Integration Plan. These activities start to address our recommendations. However, they do not fully satisfy any of our recommendations for the following reasons. First, IRS has not completed an integrated systems architecture (the “blueprints” of TSM), and no evidence has been provided to suggest that it will have one in the foreseeable future. The draft architecture documents received are high-level descriptions that fall far short of the level of detail needed to provide effective guidance in designing and building systems. For example, IRS’ concept of a three-tier, distributed architecture does not provide sufficient detail to understand the security requirements and implications. It does not, for instance, specify what security mechanisms are to be implemented between and among the three tiers to ensure that only properly authorized users are allowed to access tax processing application software and taxpayer data. Second, IRS has not brought its development, acceptance, and production environments under configuration management control. For example, there is no disciplined process for moving software from the test to the production environment. 
Third, IRS’ documented development processes are inconsistent with what is currently being implemented on systems now being developed, and IRS has not indicated how, when, or if these inconsistencies will be resolved. Fourth, IRS’ disaster recovery and contingency plan is a high-level document for planning that presents basic tenets for information technology disaster recovery but not the detail needed to provide guidance. For example, it does not explain the steps that computing centers need to take to absorb the workload of a center that suffers a disaster. Fifth, the test and evaluation master plan provides the guidance needed to ensure sufficient developmental and operational testing of TSM. However, it does not describe what security testing should be performed, or how these tests should be conducted. Further, it does not specify the responsibilities and processes for documenting, monitoring, and correcting testing and integration errors. Sixth, the plans for IRS’ integration testing and control facility are inadequate. The purpose of an off-line test site is to provide a safe, controlled environment for testing that realistically simulates the production environment. This permits new hardware and software to be thoroughly tested without putting IRS operations and service to taxpayers at risk. However, current plans for the facility do not provide for the testing of all IRS software prior to nationwide delivery. It is unclear why this position has been taken or how difficult and expensive it will be to make the modifications needed to enable the facility to effectively replicate its operational environment. Finally, IRS’ draft TSM Release Definition Document and Modernization Integration Plan have not been finalized. 
In addition, they (1) do not reflect TSM rescoping and the information systems reorganization under the Associate Commissioner; (2) do not provide clear and concise links to other key documents (e.g., its integrated systems architecture, business master plan, concept of operations, and budget); and (3) assume that IRS has critical processes in place that are not implemented (e.g., effective quality assurance and disciplined configuration management). Previously, responsibility for systems development was divided among the Modernization Executive, the Chief Information Officer, and the research and development division. To help address this concern, in May 1995, the Modernization Executive was named Associate Commissioner. The Associate Commissioner was to manage and control systems development efforts previously conducted by the Modernization Executive and the Chief Information Officer. In September 1995, the Associate Commissioner for Modernization assumed responsibility for the formulation, allocation, and management of all information systems resources for both TSM and non-TSM expenditures. In February 1996, IRS issued a Memorandum of Understanding providing guidance for initiating and conducting technology research and for transitioning technology research initiatives into system development projects. We previously recommended that IRS give the Associate Commissioner management and control responsibility for all systems development activities, including those of IRS’ research and development division. We are concerned that IRS still has not established an organizationwide focus to consistently manage and control information systems. Specifically, we have seen no evidence that systems development, upgrades, and replacements at IRS field locations are being controlled by the Associate Commissioner. 
Although the Associate Commissioner was given authority for the formulation, allocation, and management of all information systems resources for TSM and non-TSM systems, the research and development division still retains approval authority for initiating technology research projects and for conducting proof-of-concept systems prototypes. It is unclear whether the building processes and budget used for these systems development areas are controlled by the Associate Commissioner. Again, despite some improvements in consolidating management control over systems development, IRS still does not have a single entity with the responsibility and authority to control all of its information systems projects. The growth in IRS’ inventory of tax debt, coupled with its inability to collect a significant portion of tax delinquencies, prompted us and OMB to designate IRS’ accounts receivable as a high-risk area several years ago. Since that initial designation, IRS has made little progress in resolving the problems at the root of its poor collections performance. As shown in figure 1, its inventory of tax debt grew almost 80 percent, while collections declined about 8 percent from 1990 to 1994. While collections of delinquent taxes increased in fiscal years 1994 and 1995 to $23.5 billion and $25.1 billion, respectively, IRS projects a 13-percent decrease in collections in fiscal year 1996 to $21.9 billion because of its decision to reduce collections staffing due to cuts in its fiscal year 1996 budget. This amount would represent the lowest level of delinquent collections since fiscal year 1986. We realize that it is not an easy task for IRS to fix the underlying causes of its accounts receivable problems. IRS has undertaken many efforts in attempting to do so. However, some of these efforts have been curtailed, and others have produced limited improvements. 
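The projected decline in collections follows directly from the dollar figures above; a quick arithmetic sketch (amounts in billions, taken from the testimony):

```python
# Delinquent-tax collections reported in the testimony, in billions of dollars.
collections = {1994: 23.5, 1995: 25.1}
projected_1996 = 21.9  # reflects IRS' decision to reduce collections staffing

# Projected fiscal year 1996 decline as a percentage of 1995 collections.
decline_pct = (collections[1995] - projected_1996) / collections[1995] * 100
print(round(decline_pct))  # matches the 13-percent decrease IRS projected
```

The $3.2 billion drop from fiscal year 1995's $25.1 billion works out to roughly 13 percent, consistent with IRS' projection.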
Further, IRS is in the process of rethinking and rescoping many of its modernization and operational initiatives that would affect accounts receivable and collections. But, despite these initiatives, IRS’ efforts do not reflect a comprehensive strategy to address the underlying causes of the problems that cut across the agency and across lines of managerial authority and responsibility. When discussing the problems affecting IRS’ receivables, it is important to understand the nature of the tax debt inventory. In the simplest terms, this inventory represents delinquent taxes recorded in IRS’ records as being owed by taxpayers. Delinquent taxes are to remain in the inventory until they are paid or abated, or until the 10-year collection statute of limitations expires. While much attention has been focused on the size of IRS’ tax debt inventory—which as of September 30, 1994, was $166 billion—this figure is deceiving for several reasons. Primary among these is the fact that this figure includes an IRS-estimated $97 billion in potential taxes that have been assessed but which may not be valid receivables. When such an assessment results in the full or partial abatement of the tax debt, the amount recorded is not a valid receivable for financial reporting purposes. In the past, IRS used a statistical sampling methodology to estimate the compliance and financial portions of the inventory for financial statement purposes. Using this methodology, IRS estimated that, of the $166 billion tax debt inventory, about $69.2 billion represented financial receivables. IRS recently developed a methodology to identify how much of its inventory of tax debts represents these types of assessments. However, we found that the data upon which the analysis was based were flawed. IRS’ inventory of tax debt also includes delinquent debt that may be up to 10 years old. This is because there is a 10-year statutory collection period and IRS generally does not write off uncollectible delinquencies until the 10 years is over. 
As a result, the receivables inventory includes accounts up to 10 years old that may be impossible to collect because the taxpayers are deceased or the corporations are defunct. Of the $166 billion total receivables inventory, IRS data show that $1.7 billion was owed by deceased taxpayers and $19.1 billion was owed by defunct corporations. During a review of accounts receivable cases greater than $10 million as of September 30, 1995, we identified several examples that illustrate problems with IRS’ accounts receivable inventory. For example, out of a total of 460 accounts receivable cases that we reviewed, IRS identified 258 as currently not collectible: 198 of these represented defunct corporations, while the remaining 60 cases represented entities that either could not pay or could not be located. These cases represented $12 billion of the $26 billion included in accounts greater than $10 million. The age of the receivable also does not reflect the additional time it took for IRS to actually assess the taxes in the first place. In many cases, IRS’ processing and use of certain taxpayer-related information to identify delinquent debt is a significant factor in determining the ultimate collectibility of the debt. Enforcement tools, such as its matching programs and tax examinations, may take up to 5 years from the date the tax return is due, thus reducing the likelihood that the outstanding amounts will be collected. Because of these and other factors, IRS considers many of the accounts in the inventory to be uncollectible. IRS estimated that only about $35 billion of the $166 billion inventory of tax debt was collectible. However, for 3 of the 4 years we audited IRS’ financial statements, we could not determine the reliability of IRS’ estimate of accounts receivable and the related estimated collectible amount. We were only able to do so for fiscal year 1992, the first year we audited IRS. That year, we tested the validity of amounts IRS reported using a statistical sample. 
This resulted in an estimate of $28 billion in collectible accounts receivable. For the subsequent 2 years (fiscal years 1993 and 1994), IRS performed its own statistical sample to determine the collectibility of its accounts receivable. As part of our audit, we assessed the reasonableness of these samples and found that we could not validate IRS’ estimates. Our inability to rely on these estimates was based on discrepancies between underlying documentation we audited and IRS’ reported balances. As we reported in our February 1995 high-risk report, IRS’ accounts receivable problems reflect pervasive problems throughout IRS’ processes that culminate in the tax debt inventory and IRS’ difficulties in addressing the underlying causes of these problems. For example, the failure of returns processing to correctly account for a taxpayer’s payment may result in the creation of an invalid account receivable; the failure of taxpayer service to promptly resolve a taxpayer’s inquiry about a delinquent account may perpetuate the receivable; and an IRS compliance effort that overstates a taxpayer’s liability also inflates the inventory, makes additional work for collection personnel, and offers little guarantee of revenue generation. Until accurate and reliable data are available, IRS will continue to waste time and resources pursuing debts that are not real and thus do not generate revenue. Improving data accuracy and reliability is a key objective of TSM, but progress has been slow and the future success of TSM is uncertain. In addition, until IRS can effectively identify who owes the tax receivables and successfully implements a financial management system that ties its collection results to its operations, it is difficult, if not impossible, to gauge the return achieved from its collection efforts or how effective IRS or anyone could be in collecting outstanding tax receivables. 
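The inventory figures discussed above can be reconciled with simple arithmetic; a rough sketch (amounts in billions, from the testimony; small differences reflect rounding in the reported figures):

```python
# Composition of the $166 billion tax debt inventory as of September 30, 1994
# (billions of dollars, from the testimony above).
total_inventory = 166.0
compliance_assessments = 97.0  # assessed amounts that may not be valid receivables
financial_receivables = 69.2   # IRS-estimated valid financial receivables
estimated_collectible = 35.0   # portion IRS estimated was actually collectible

# The compliance and financial portions should roughly sum to the total
# (the small difference reflects rounding in the reported figures).
print(compliance_assessments + financial_receivables)  # ~166

# Only about a fifth of the recorded inventory was considered collectible.
print(round(estimated_collectible / total_inventory * 100))  # 21 percent
```

The check confirms that the $97 billion in questionable assessments and the $69.2 billion in financial receivables roughly account for the full $166 billion inventory, and that IRS considered only about a fifth of the recorded inventory collectible.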
Second, IRS’ collection process was introduced several decades ago and, although some changes have been made, the process generally is rigid, costly, and inefficient. The three-stage collection process—computer-generated notices and bills, telephone calls, and personal visits by collection employees—takes longer and is more costly than collection processes in the private sector. While the private sector emphasizes the use of telephone collection calls, a significant portion of IRS’ collection resources are devoted to personal visits made by revenue officers. IRS has initiated programs and made procedural changes to speed up the collection process, but historically it has been reluctant to reallocate resources from the field to the earlier, more productive collection activities. Due to budget cuts, however, IRS is in the process of temporarily reassigning about 300 field staff to telephone collection sites to replace temporary employees who were terminated. In addition to IRS’ problems with identifying who currently owes taxes and the amount it can expect to collect, it has lacked the capability to accurately track the revenues realized from its various collection efforts. To address this problem, IRS has been developing the Enforcement Revenue Information System (ERIS). ERIS was designed to account for actual collections resulting from IRS’ enforcement efforts and to enable IRS to more accurately measure and predict enforcement costs and revenues. However, its implementation was delayed because of inaccuracies found in the system’s data; we are currently reviewing the system to assess its reliability. IRS is also planning to implement a reengineering project that will involve all IRS activities that enable taxpayers to fulfill their tax obligations. Third, while Congress has given IRS strong tools, such as levies and seizures, to collect delinquent taxes, it has also established a number of statutory safeguards to prevent their unwarranted use. 
An unintended result of these safeguards has been to hamper collections. For example, the 1988 Taxpayer Bill of Rights prohibits IRS from evaluating the performance of its staff on the basis of dollars collected. Without the use of this measure, which is used by most private-sector collectors, IRS staff have less incentive to collect taxes. Their performance evaluations do not distinguish between collection actions that essentially write off a tax debt and actions that result in the collection of taxes owed—both are considered case closings. This practice may be one reason why IRS field collection staff have been declaring more tax debts “currently not collectible” each year than they collect. We understand the importance of balancing the need to protect the rights of taxpayers against the need to collect tax debts. While IRS must be fair and follow appropriate laws and regulations, taxpayers must also accept their lawful tax obligations. Those who evade this obligation cause all other taxpayers to bear a disproportionate share of the overall tax burden. Fourth, IRS’ organizational structure, with its considerable sharing of responsibility for collecting tax debts, provides little accountability for results. IRS is in the process of rethinking and restructuring its organization, including reducing the number of employees and the number of regional and district offices and service centers, but the impact of these changes, if any, on the accounts receivable problems will not be felt for several years. Fifth and finally, staffing imbalances among IRS field offices have resulted in staff being available in some offices to pursue both small and large debts, while in other offices even large debts might go uncollected because of staff shortages. In addition, as mentioned earlier, IRS historically has allocated two-thirds of its collection staff to the field, which comprises the last and least productive stage of the collection process. 
This is in contrast to private-sector collectors, who devote most of their resources to the earlier telephone stage. Several staffing-related projects have been affected by IRS’ actions taken in response to its reduced appropriations for fiscal year 1996. One of these projects was focused on redesigning the operation of collection groups in the field to improve productivity and reduce costs. Although preliminary results appeared to IRS to be positive, IRS decided to stop the project in October 1995 for budgetary reasons. This Subcommittee’s concern of several years’ duration about IRS’ delinquent tax collection efforts led to the provisions contained in IRS’ fiscal year 1996 appropriations bill that earmark $13 million for a pilot program to test the use of private law firms and debt collection agencies to help collect delinquent tax debts. IRS issued a request for proposals from prospective participants in the pilot program on March 5, 1996. If done successfully, this program may open a new avenue for addressing some of IRS’ collection problems. We recognize that IRS has many initiatives under way that could help to resolve the accounts receivable problem. But, we also recognize that IRS has pursued many initiatives over the years without bringing about the desired change. IRS is in the process of rescoping many of its planned modernization and operational initiatives because of changed budget priorities. However, a comprehensive strategy to guide IRS’ efforts to improve collections and accounts receivable has not been developed. This strategy, which is critical to the successful resolution of IRS’ accounts receivable problems, must recognize and address the five underlying causes of the problem—causes that cut across the agency and across lines of managerial authority and responsibility. Almost 100 million American taxpayers currently must file tax returns, even though most have fully paid their taxes through the withholding system. 
Given its potential for reducing taxpayer burden and IRS paper processing, we have been studying return-free filing systems and the potential impact they would have on the federal income tax system. While we are still in the process of finalizing our results, we can provide some preliminary information on (1) the two most common types of return-free filing, (2) the number of American taxpayers that could be affected by return-free filing, and (3) some of the issues that would need to be addressed before such a system could be used. In countries with return-free filing, the most common type of system we identified was that termed “final withholding.” Under this system, the withholder of income taxes determines the taxpayer’s liability and withholds the correct amount from the taxpayer’s income. With the final year-end payment to the taxpayer, the withholder makes a final reconciliation of taxes and adjusts the withholding for that period to equal the year’s taxes. Another type of return-free filing—known as “agency reconciliation”—depends entirely on information reporting and allows the tax agency to determine the taxpayer’s taxes based on these information documents. The tax agency then sends the taxpayer either a refund or a tax bill based on the tax liability and the amount of withholding. We identified 36 countries that use some form of return-free filing—34 with final withholding and 2 with tax agency reconciliation. Given the extent of withholding and information reporting that exists under our current tax system, we estimated that about 18.5 million American taxpayers whose incomes derive from only one employer could be covered under a final withholding system. Alternatively, an estimated 51 million taxpayers could be covered under the agency reconciliation system. 
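Measured against the roughly 100 million filers noted earlier, the two coverage estimates work out as follows (a quick sketch using the figures above):

```python
# Figures from the testimony above (millions of taxpayers); the ~100 million
# total is the approximate number of U.S. filers cited earlier in the statement.
total_filers = 100.0
coverage = {
    "final withholding": 18.5,      # taxpayers with income from one employer
    "agency reconciliation": 51.0,  # taxpayers coverable via information reporting
}

for system, covered in coverage.items():
    share = covered / total_filers * 100
    print(f"{system}: about {share:.1f}% of filers could be relieved of filing")
```

In other words, final withholding would reach roughly a fifth of filers, while agency reconciliation would reach about half.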
We estimate that taxpayers could save 52 million hours in preparation time and millions of dollars in tax preparation costs under the final withholding system, and 170 million hours and millions of dollars in preparation time and costs under the tax agency reconciliation system. IRS would also save an estimated $45 million in processing costs under the final withholding system and about $36 million in processing and compliance costs under the tax agency reconciliation system. Employers would face additional burden and costs under the final withholding system, but we were unable to determine how much. IRS would also have to significantly speed up its processing of information returns under a tax agency reconciliation system so that tax liabilities could be determined before April 15, which is also the tax filing deadline for some states. IRS’ own 1987 study of return-free filing recognized this processing problem and recommended against return-free filing for that reason. However, given the many processing changes envisioned with the modernization of IRS’ computer systems, this problem may be less of an obstacle than it was in 1987. Given the current tax system, a tax agency reconciliation system has the potential to reduce the filing burden on more taxpayers and also put less burden on payors than a final withholding system. In summary, IRS’ TSM and delinquent debt collection efforts remain a serious concern to us. Although IRS is attempting to address some of the problems, their underlying causes remain and continue to hinder the potential for significant improvement. TSM, in particular, is at serious risk, and until the weaknesses are corrected, we believe that IRS’ ability to successfully complete the program will remain highly questionable. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are also accepted. 
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO discussed the Internal Revenue Service's (IRS) Tax Systems Modernization (TSM) Plan, focusing on: (1) IRS efforts to correct TSM management and technical weaknesses within an established time frame and cost figure; (2) IRS plans to collect delinquent taxes and correct accounts receivable discrepancies; and (3) the viability of return-free filing. GAO noted that: (1) IRS has attempted to address its management and technical weaknesses, but its initiatives do not satisfy previous recommendations or provide assurance that the problems will be timely corrected; (2) IRS continues to spend billions of dollars on TSM solutions, but it has little confidence in its ability to deliver an effective system within the established TSM time frame and cost figure; (3) IRS is rethinking its modernization and operational initiatives related to accounts receivable and delinquent taxes, but it projects a 13-percent decrease in collections for fiscal year 1996; and (4) return-free filing is a viable option if taxpayers continue to provide information regarding their tax status and number of dependents, and employers are legally authorized to compute tax liabilities under final withholding.
CMS, an operating division of HHS, administers Medicare, Medicaid, and the State Children’s Health Insurance Program. As administrator of Medicare, which paid about $215 billion in benefits to approximately 39.5 million Medicare beneficiaries in fiscal year 2000, CMS is the nation’s largest health insurer. Although most participating providers comply with Medicare billing rules, inadvertent errors or intentional misrepresentations that result in overpayments to providers do occur. These overpayments represent money owed back to Medicare. According to the HCFA Financial Report for Fiscal Year 2000, about $8.0 billion out of $8.1 billion of the debts reported owed to CMS originated in the Medicare program. CMS Medicare debts consist largely of overpayments to hospitals, skilled nursing facilities, physicians, and other providers of covered services and supplies under Part A (hospital insurance) and Part B (supplemental medical insurance) of the Medicare program. We examined two types of Medicare debts: Medicare secondary payer (MSP) debts. MSP debts arise when Medicare pays for a service that is subsequently determined to be the financial responsibility of another payer. Cases that result in MSP debts include those in which beneficiaries have (1) other health insurance furnished by their employer or their spouse’s employer (or, in certain instances, another family member) that covers the medical services provided, (2) occupational injuries, illnesses, and conditions covered by workers’ compensation, and (3) injuries, illnesses, and conditions related to a liability or no-fault insurance settlement, judgment, or award. Non-MSP debts. Although Medicare is phasing out this payment method, Medicare has paid certain institutional providers interim amounts based on their historical service to beneficiaries. Medicare contractors retrospectively adjust these payments based on their review of provider costs. 
When a provider's cost-reporting year is over, the provider files a report specifying its costs of serving Medicare beneficiaries. Cost report debts arise when the cost report settlement process, which includes audits and reviews by Medicare contractors, determines that the amount an institution was paid based on its cost report exceeds the final settlement amount. Another type of non-MSP debt related to cost reporting is unfiled cost report debt. If an institutional provider fails to submit a timely cost report, CMS establishes an unfiled cost report debt. The amount of the debt equals the full amount disbursed for the year in which the provider failed to submit a timely report. Most providers have an ongoing business relationship with the Medicare program; therefore, contractors are able to collect most non-MSP debts by offsetting subsequent Medicare payments to providers. However, if offsetting subsequent payments does not fully liquidate the debt (e.g., because the provider has left the Medicare program), unpaid balances more than 180 days delinquent are subject to DCIA’s debt-referral requirements. CMS refers its eligible MSP and non-MSP debts to PSC, which provides debt management services for certain HHS operating divisions. Under DCIA, federal agencies are required to refer all eligible debts that are more than 180 days delinquent to Treasury or a Treasury-designated debt collection center. In 1999, Treasury designated PSC a debt collection center for HHS, allowing PSC to service certain debts, including MSP and unfiled cost report debts. PSC is responsible for attempting to collect MSP debts, obtaining cost reports for unfiled cost report debts, reporting MSP and unfiled cost report debts to TOP, and referring other types of Medicare debts to Treasury’s FMS for cross-servicing. In September 2000, we reported that CMS was slow to implement DCIA but could increase Medicare overpayment collections if it fully implemented the referral requirements of the act. 
We recommended, and CMS agreed, that CMS fully implement DCIA by transferring Medicare debts to PSC or Treasury for collection as soon as they became delinquent and were determined to be eligible. We also recommended that CMS refer the backlog of eligible Medicare debts to PSC as quickly as possible. We noted in the report that CMS had two pilot projects under way that were designed to expedite the transfer of delinquent Medicare debts for collection action. One pilot covered certain MSP debts valued at $5,000 or more, and the other covered certain non-MSP debts, primarily related to cost report audits, of $100,000 or more. Contractors participating in the pilots were to (1) verify the amount of a delinquent debt and ensure that it was still uncollected, (2) issue a DCIA intent letter indicating that nonpayment would result in the debt’s referral to PSC, and (3) record the debt in a central CMS database used to transmit the debt to PSC for collection. CMS’s goal is to have referred all eligible Medicare debts for collection action by the end of fiscal year 2002. As shown in table 1, CMS reported that about $6.6 billion of Medicare debts were more than 180 days delinquent or classified as currently not collectible (CNC) as of September 30, 2000. This information was reported in the Medicare Trust Fund Treasury Report on Receivables Due from the Public (TROR), which contained the most recent agency-certified information available during our review. Debts classified as CNC are written off the books for accounting purposes—that is, they are no longer carried as receivables. A write-off does not extinguish the underlying liability for a debt, and collection actions may continue to be taken on debts classified as CNC. Of the $6.6 billion of Medicare debts reported as more than 180 days delinquent or classified as CNC, CMS reported that it had referred approximately $2 billion of debts and had excluded from referral approximately $1.8 billion of debts. 
CMS also reported in the TROR that about $1.6 billion in unfiled cost reports were delinquent more than 180 days. Because CMS does not recognize amounts associated with unfiled cost reports as receivables for financial reporting purposes, the agency reports unfiled cost report debts more than 180 days delinquent as a separate, additional item in the TROR. With these exclusions and additions, CMS reported about $6.4 billion of Medicare debts eligible for referral to PSC for collection action as of September 30, 2000. Of the approximately $6.4 billion of Medicare debts that CMS had reported as eligible for referral by the end of fiscal year 2000, the agency reported that about $4.3 billion of the debts had not been referred to Treasury or a Treasury-designated debt collection center. About $2.6 billion of the unreferred amount was non-MSP debt, and the remainder was MSP debt. CMS’s goal for fiscal year 2001, which the agency met, was to refer an additional $2 billion of unreferred eligible debts. CMS’s goal for fiscal year 2002 is to refer the remainder of eligible Medicare debts. Our objectives were to determine whether (1) CMS was promptly referring eligible Medicare debts for collection action, (2) any obstacles were hampering CMS from referring eligible Medicare debts, and (3) CMS was appropriately using exclusions from referral requirements. Although CMS also administers Medicaid and the State Children’s Health Insurance Program, we limited our review to Medicare debts because the Medicare program is the source of the vast majority of CMS’s reported delinquent debt. To address our objectives, we obtained and analyzed the Medicare Trust Fund TROR for the fourth quarter of fiscal year 2000, which was the most recent agency-certified report available at the completion of our fieldwork, and other financial reports prepared by CMS. 
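The roughly $6.4 billion reported as eligible for referral reconciles with the component amounts above; a quick arithmetic sketch (amounts in billions; the reported $4.3 billion unreferred differs from the computed figure only by rounding in the reported totals):

```python
# Medicare debt amounts as of September 30, 2000, in billions of dollars
# (figures from the report above).
delinquent_or_cnc = 6.6     # debts >180 days delinquent or classified as CNC
excluded = 1.8              # debts CMS excluded from referral
unfiled_cost_reports = 1.6  # reported separately in the TROR, added back as eligible

# Eligible for referral = delinquent/CNC debts, minus exclusions,
# plus the separately reported unfiled cost report debts.
eligible = delinquent_or_cnc - excluded + unfiled_cost_reports
print(round(eligible, 1))  # ~6.4

already_referred = 2.0
print(round(eligible - already_referred, 1))  # ~4.4, close to the reported $4.3 billion
```

The small gap between the computed $4.4 billion and the reported $4.3 billion unreferred amount is consistent with the approximations ("about," "approximately") used throughout the reported figures.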
The most recent year-end TROR should contain the most reliable information available because Treasury requires that agency chief financial officers (or their designees) certify year-end data as accurate. We interviewed CMS and PSC officials to obtain an understanding of the debt-referral process and any obstacles that may be hampering referral of eligible debts. In addition, we reviewed CMS policies and procedures on debt referrals and examined current and planned CMS efforts to refer eligible delinquent debts. We also met with representatives from 4 selected CMS contractors that process and pay Medicare claims, and we discussed how they identified and referred eligible Medicare debts to PSC. At the time of our review, CMS had 55 Medicare contractors that processed claims and collected on overpayments. We used two criteria to select the 4 contractors: (1) the size of their debt portfolio and (2) whether the contractor participated in the CMS pilot projects. Specifically, 1 of the selected contractors had the largest amount of debt overall and the largest amount of Part A debt, 1 other selected contractor had the largest amount of Part B debt, and another of the selected contractors had the largest amount of MSP debt. We selected the fourth contractor to ensure that our review covered at least one-third of all the debt maintained at the CMS contractors. Three of the 4 contractors that we selected participated in the MSP pilot project, and 2 participated in the non-MSP pilot project. As agreed with your office, we did not test selected debts that were excluded from referral because the HHS OIG was performing detailed testing of CMS’s implementation of DCIA and the effectiveness of its debt collection and debt management activities. As part of its work, the OIG tested selected debts at CMS and its Medicare contractors to determine whether the status of debts had been appropriately categorized. 
We also did not independently verify the reliability of certain information that CMS and PSC provided (e.g., debts reported as more than 180 days delinquent). We performed our work from November 2000 to September 2001 in accordance with U.S. generally accepted government auditing standards. We requested written comments on a draft of this report from the administrator of CMS or his designated representative. CMS’s letter is reprinted in appendix I. We also considered, but did not reprint, the technical comments provided with CMS’s letter and have incorporated them throughout this report, where appropriate. Overall, CMS did not promptly refer all of its reported eligible Medicare debts in fiscal year 2001. Although CMS referred approximately $2.1 billion of Medicare debts during the year, almost all were non-MSP debts primarily related to cost report audits. Further, the vast majority of these debt referrals—about $1.9 billion—occurred late in the fiscal year, from June through September. While approximately $1.8 billion of eligible MSP debts were reported as eligible for referral as of September 30, 2000, CMS referred only about $47 million of MSP debts in fiscal year 2001. CMS made progress in referring non-MSP debts to PSC during fiscal year 2001, but most of the progress occurred late in the fiscal year. Problems with the debt-referral system contributed to the late referral of non-MSP debts. Although CMS reached its $2 billion referral goal for fiscal year 2001, both the prospects for collection during the year and the collectibility of the debts were likely diminished by the referral delays. At the end of fiscal year 2000, about $2.6 billion of non-MSP debts remained to be referred. Throughout most of fiscal year 2001, CMS made little progress in referring these debts. It was not until June 2001, approximately two-thirds of the way through the fiscal year, that CMS began making substantial referrals of non-MSP debts to PSC. 
Of the approximately $2.1 billion of non-MSP debts reported as being referred during fiscal year 2001, CMS referred about $1.9 billion of the debts from June through September. CMS officials stated that they were not significantly concerned by the low level of non-MSP debt referrals during the first two-thirds of fiscal year 2001 because they met their goal of referring $2 billion of eligible Medicare debts in fiscal year 2001 and they intend to meet their goal of referring the remaining eligible debts by the end of fiscal year 2002. However, the prompt referral of delinquent debts is critical because, as industry statistics indicate, the likelihood of recovering amounts owed on delinquent debts decreases dramatically as the age of the debt increases. CMS made little progress in referring the approximately $1.8 billion of MSP debts that were reported as eligible for referral as of September 30, 2000. Limited contractor efforts, coupled with inadequate monitoring of contractor performance by CMS, contributed to the slow progress. In addition, many existing MSP debts will never be referred because in February 2001 CMS instructed its Medicare contractors to close out MSP debts delinquent more than 6 years and 3 months, thereby terminating all collection efforts on such debts. Unreferred MSP debts represented about 40 percent of the approximately $4.3 billion of reported eligible Medicare debts that had not been referred for collection as of September 30, 2000. PSC collection reports show that the center has had comparatively more success in collecting MSP debts than it has had in collecting non-MSP debts. By the end of fiscal year 2001, PSC reported collecting almost as much on delinquent MSP debts as on delinquent non-MSP debts, even though the total dollar amount of MSP referrals was a small fraction, about 2 percent, of the total dollar amount of non-MSP referrals. CMS began referring MSP debts to PSC in March 2000. 
PSC records indicate that through September 30, 2001, CMS had referred only about $83 million, or 5 percent, of the approximately $1.8 billion of MSP debts eligible for referral to PSC as of September 30, 2000. Of this amount, about $47 million was referred in fiscal year 2001. These limited referrals were likely the only collection action taken on most of the eligible MSP debts from March 2000 through September 2001. In most cases, CMS instructed its contractors only to send initial demand letters to MSP debtors and follow up on any resulting inquiries. CMS did not establish and implement effective controls to promptly refer eligible Medicare debts to PSC for collection action. CMS failed to promptly refer non-MSP debts because the agency had problems with its debt-referral system. Limited contractor efforts, coupled with inadequate CMS monitoring of contractor performance, were primarily responsible for the slow progress in referring MSP debts. Because of a CMS policy to close out debts delinquent more than 6 years and 3 months, some debts will never be referred for collection action. In addition, CMS has not developed a process to report closed-out debts to IRS, even though discharged debt is considered income and may be taxable. Non-MSP debt referrals were delayed until late in fiscal year 2001 primarily because CMS suspended its debt-referral system in November 2000. According to a CMS official responsible for non-MSP debt referrals, the agency suspended the system in order to identify and correct numerous discrepancies found in the system's data (e.g., duplicate debt entries, inconsistencies between debt amounts in the referral system and debt amounts in the tracking system) and to place additional edits in the system to prevent such errors in the future. CMS did not resume referring non-MSP debts to PSC through the debt-referral system until June 2001.
Not only did CMS's suspension of the debt-referral system limit the debt-referral activities of the 5 contractors participating in the non-MSP pilot, it also delayed CMS's planned October 2000 expansion of the debt-referral program to all contractors. CMS did not issue updated instructions for referring non-MSP debts to each of its 55 contractors until April 2001. The guidance, revised in response to our September 2000 recommendation that all CMS debt be transferred to PSC as soon as it becomes delinquent and is eligible for transfer, expanded the criteria for referring non-MSP debts by including Part B debts, as well as Part A debts, and lowering the referral threshold from $600 to $25. After the debt-referral system began operating again and the referral requirements were expanded and extended to all contractors, CMS increased its referrals of non-MSP debts to PSC by about $1.9 billion from June through September 2001. The low referral of MSP debts in fiscal year 2001 occurred partly because for most of the year, until May 2001, only the 15 contractors participating in the pilot project were authorized to identify eligible Part A debts and refer them to PSC. According to information from CMS, as of September 30, 2000, these 15 contractors held a total of about $542 million of Part A debts that were more than 180 days delinquent, representing about 31 percent of MSP debts eligible for referral as of that date. In response to our September 2000 recommendation, CMS issued a program memorandum in May 2001 extending to all MSP contractors the requirement to identify delinquent MSP debts and refer them to PSC. CMS also expanded the referral criteria to include Part B debts, as well as Part A debts. The dollar threshold for referral is to be reduced in phases, from $5,000 to $25. The phased reduction is intended both to eliminate the backlog of higher-dollar debts and to ensure referral of current debts, thereby avoiding a continuing backlog.
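The expanded referral criteria described above amount to a simple eligibility test: Part A or Part B debt, delinquent more than 180 days, at or above the lowered dollar threshold, and not excluded (for example, by bankruptcy, appeal, or a fraud investigation). The sketch below illustrates that logic; the field names and the helper function are hypothetical, not CMS's actual system.

```python
from datetime import date

# Illustrative sketch of the expanded non-MSP referral criteria:
# Part A and Part B debts, more than 180 days delinquent, at or above
# the lowered $25 threshold, and not excluded from referral.
# Field names here are assumptions for illustration only.

REFERRAL_THRESHOLD = 25.00   # lowered from $600 under the April 2001 guidance
DELINQUENCY_DAYS = 180       # DCIA referral trigger

def eligible_for_referral(debt: dict, as_of: date) -> bool:
    """Return True if a debt meets the expanded referral criteria."""
    days_delinquent = (as_of - debt["delinquent_since"]).days
    return (
        debt["part"] in ("A", "B")
        and days_delinquent > DELINQUENCY_DAYS
        and debt["amount"] >= REFERRAL_THRESHOLD
        and not debt.get("excluded", False)  # e.g., bankruptcy, appeal, fraud
    )

debt = {"part": "B", "amount": 750.00, "delinquent_since": date(2000, 9, 30)}
print(eligible_for_referral(debt, date(2001, 6, 1)))  # True: 244 days delinquent
```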
A CMS official stated that the memorandum was not issued sooner partly because CMS had to respond to contractors’ concerns that they needed additional funding to automate their debt-referral processes to comply with the new referral requirements. The CMS official stated that after much consideration, CMS concluded that referrals could be performed manually and that seeking additional funding for automation would likely cause further delays in referring MSP debts to PSC. Another factor that contributed to the low amount of MSP debt referred to PSC was the failure of certain pilot project contractors to promptly refer eligible debts. Under the MSP pilot project, contractors were required to identify eligible Part A debts, send DCIA intent letters (which state CMS’s intention to refer a debt for collection action if it is not paid within 60 days) to those debtors, and enter the debt information into the debt-referral system. We selected and reviewed the work of 3 large Medicare contractors that participated in the MSP pilot project and found that none of the 3 promptly identified and referred all eligible MSP debts. One of the contractors held $255 million of Part A MSP debt more than 180 days delinquent as of September 30, 2000. As of May 2001, the contractor reported that it had identified and sent out DCIA intent letters for only about $33 million, or about 13 percent, of the debt. The contractor official responsible for MSP debts stated that the contractor was under the impression that the pilot project required it to make only two file queries, in February 2000, to identify eligible debts and that the queries were to cover only debts incurred from March 1997 through August 1998. However, our review of the implementing instructions for the pilot project found that it was to cover all MSP debts that were not more than 6 years old, and CMS officials responsible for MSP debts advised us that they had never instructed the contractor to limit its file queries. 
Another of the 3 contractors whose work we reviewed held about $61 million of Part A MSP debt delinquent more than 180 days as of September 30, 2000. The contractor official responsible for MSP debts stated that the contractor believed that the MSP pilot project had ended in August 2000. As such, from September 2000 through December 2000, the contractor did not review its debt portfolio to identify additional MSP debts eligible for referral. The contractor subsequently began identifying and referring debts again in January 2001. In addition, the contractor’s records indicated that as of April 2001, about $6.2 million, or 48 percent, of the $12.8 million of debt for which it had sent DCIA intent letters prior to September 2000 had not been referred to PSC. These debts remained at the contractor even though they were well beyond the 60-day time frame CMS specified for referring debts to PSC after a DCIA intent letter is sent. The responsible contractor official was unable to explain why the debts had not been referred for collection action. Before our review, CMS had not developed or implemented policies and procedures for monitoring contractors’ referral of MSP debts. As a result, CMS did not monitor the extent to which contractors referred specific MSP debts to PSC and did not identify specific contractors, such as those mentioned above, that failed to identify and refer all eligible debts. Without such monitoring, CMS could not take prompt corrective action. This lack of procedures for monitoring contractors and the resulting lack of monitoring are inconsistent with the comptroller general’s Standards for Internal Control in the Federal Government. The standards state that internal controls should be designed to assure that ongoing monitoring occurs in the course of normal operations and that it should be performed continually and ingrained in agency operations. 
In response to our work, CMS officials stated that in June 2001 they had begun to review selected contractors’ MSP debt referrals. A CMS official said that the 10 CMS regional offices would assume a more active role in ensuring that contractors promptly refer eligible MSP debts to PSC. As of September 2001, CMS had not developed formal written procedures for monitoring contractors, but agency officials stated that they planned to develop such procedures. Many MSP debts will never be referred to PSC because of a CMS decision to close out older MSP debts. In February 2001, CMS issued guidance to its contractors directing them to methodically terminate collection action on or close out MSP debts delinquent more than 6 years and 3 months. CMS officials stated that the agency selected this delinquency criterion because the statute of limitations prevents the Department of Justice from litigating to collect debts more than 6 years after they become delinquent. Also, these debts, because they are closed out, will never be reported to FMS for TOP, which has been FMS’s most effective debt collection tool. For fiscal year 2000, Treasury found that the collection rate for the small amount of MSP debt that had been reported to TOP was about 10.5 percent, which is higher than TOP’s average collection rate. The February 2001 guidance was a continuation of CMS policy set forth in the agency’s instructions to contractors at the start of the MSP pilot project in fiscal year 2000, which authorized contractors to identify and refer only debts up to 6 years old. A CMS official stated that older MSP debts were closed out because it was not cost-effective to collect them. However, CMS could not provide any documentation to support the assertion that it is not cost-effective to attempt to collect older MSP debts, and CMS did not test this assumption in its MSP pilot project. Age alone is not an appropriate criterion for terminating collection action on a debt. 
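The close-out criterion described above reduces to a delinquency-age test: any MSP debt delinquent more than 6 years and 3 months (75 months) has all collection action terminated. The sketch below illustrates that cutoff; the month arithmetic is a simplification for illustration, not CMS's actual implementation.

```python
from datetime import date

# Illustrative check of the close-out criterion: MSP debts delinquent
# more than 6 years and 3 months (75 months) are closed out, terminating
# all collection efforts. The month arithmetic is a simplification.

CLOSEOUT_MONTHS = 6 * 12 + 3  # 75 months

def months_delinquent(delinquent_since: date, as_of: date) -> int:
    return (as_of.year - delinquent_since.year) * 12 + (as_of.month - delinquent_since.month)

def subject_to_closeout(delinquent_since: date, as_of: date) -> bool:
    return months_delinquent(delinquent_since, as_of) > CLOSEOUT_MONTHS

# A debt delinquent since October 1994 (76 months) exceeds the threshold
# as of the February 2001 guidance; one from January 1995 (73 months) does not.
print(subject_to_closeout(date(1994, 10, 1), date(2001, 2, 1)))  # True
print(subject_to_closeout(date(1995, 1, 1), date(2001, 2, 1)))   # False
```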
The agency should pursue all appropriate means of collection on a debt and determine, based on the results of the collection activity, whether the debt is uncollectible. According to discussions with contractor officials, collection activity prior to the termination of the debts likely involved only the issuance of demand letters, as required by CMS’s Budget and Performance Requirements for contractors. The CMS official said she was not aware of any assessment performed to determine the total dollar amount of debts that will be designated as eligible for close-out because of this age threshold. During our review, CMS had already approved close-out of about $86 million of MSP debts at the contractors we visited. About $85 million of these debts were less than 10 years old and therefore could have been referred to PSC for collection action, including reporting to TOP. In a related matter, CMS has not established a process, including providing authorization to PSC, to report closed-out MSP debts to IRS. The Federal Claims Collection Standards and Office of Management and Budget (OMB) Circular No. A-129 require that agencies, in most cases, report closed-out debt amounts to IRS as income to the debtor, since those amounts represent forgiven debt, which is considered income and therefore may be taxable at the debtor’s current tax rate. Thus, reporting the discharge of indebtedness to IRS may benefit the federal government, through increased income tax collections. CMS stated that agency officials and the CMS Office of General Counsel are discussing the reporting of closed-out MSP debts to IRS but did not specify when actions, if any, would be taken to report such debts to IRS. Even with CMS’s non-MSP debt-referral system operating again and its MSP and non-MSP referral requirements extended to all of its contractors, the agency still faces obstacles to effectively managing its Medicare debt referrals. 
As mentioned earlier, in fiscal year 2001 CMS expanded debt-referral requirements from the pilot projects to include all 55 Medicare contractors. CMS lacks complete and accurate debt information, however, and this shortcoming will likely hamper the agency's ability to adequately monitor contractors' debt referrals. In addition, CMS's referral instructions to contractors currently do not cover some types of Medicare debts, including MSP liability debts. Without a comprehensive plan in place that covers all types of Medicare debts, CMS faces significant challenges to be able to achieve its goal of referring all eligible Medicare debts by the end of fiscal year 2002. All Medicare contractors are now responsible for identifying eligible debts from their debt portfolio, sending out DCIA intent letters to debtors, and referring eligible debts to PSC. To help ensure that all eligible Medicare debts are promptly identified and referred for collection, CMS must monitor contractors' debt-referral practices. To monitor effectively, the agency needs comprehensive, reliable debt information from its contractors, but CMS systems currently do not contain complete and accurate information on all CMS Medicare debts. One of CMS's most daunting financial management challenges continues to be the lack of a financial management system that fully integrates CMS's accounting systems with those of its Medicare contractors. Because CMS does not have a fully integrated accounting system, each MSP debt is maintained only in the internal system of the specific contractor that holds the debt. CMS has no centralized database that includes all MSP debts held by contractors. As a result, the agency cannot effectively monitor the extent to which its various contractors are promptly identifying eligible MSP debts and referring them to PSC for collection. CMS is developing a system that is to include a database containing all MSP debts.
However, the agency plans to phase the system in, and it is not scheduled to be fully implemented at all contractors until the end of fiscal year 2006. CMS has two debt-tracking systems for its non-MSP debts, one for Part A debts and one for Part B debts. Medicare contractors are responsible for entering non-MSP debts into the systems and updating the debts' status (with respect to bankruptcy, appeals, etc.) as appropriate. According to CMS officials, the agency intends to use these systems to monitor contractors to ensure that they are promptly identifying and referring eligible debts to PSC. Accurate tracking information is critical for monitoring debt-referral practices. CMS found, however, that its non-MSP debt-tracking systems contain inaccurate information because a significant number of contractors have not been adequately updating information in the systems. CMS performed contractor performance evaluations for fiscal year 2000 on 25 contractors and found that 19 were not adequately updating information in the non-MSP debt-tracking systems. For 5 of the 19 contractors, CMS considered the problems to be significant enough to require the contractors to develop written performance improvement plans. Our work at the 2 selected contractors involved in the non-MSP pilot project corroborated CMS's own findings. CMS periodically sent non-MSP pilot contractors a list of eligible Part A debts from the agency's debt-tracking system for possible referral to PSC. For the 2 non-MSP contractors we reviewed, CMS selected $1.3 billion of debts from the Part A non-MSP debt-tracking system. The contractors determined that $289 million of the debts, or about 23 percent, were actually ineligible for referral because they were in bankruptcy, under appeal, or under investigation for fraud. In addition, we identified $21 million of debts that 1 of the 2 non-MSP pilot contractors had misclassified on the CMS debt-tracking system as bankruptcy debt and ineligible for referral.
These debts had actually been dismissed from the bankruptcy proceedings and therefore should have been reported in the debt-tracking system as eligible for referral. In this case, the contractor had not updated its own internal system for $8 million of the debts and was therefore not pursuing postdismissal collection actions on them. For the remaining $13 million, the contractor had updated its internal system and was pursuing collection but had failed to properly update the CMS debt-tracking system. To effectively monitor contractor performance, CMS must have the ability to determine whether contractors are referring debts promptly. However, CMS's non-MSP debt-tracking systems lack the capacity to indicate whether contractors are promptly entering non-MSP debts into the debt-referral system after they mail DCIA intent letters because the systems do not track the date of status code changes (e.g., the date when the DCIA letter was issued). We found that CMS's non-MSP debt-tracking system for Part A debts did not identify $5.2 million of debts that had been pending referral for at least 9 months at one of the two non-MSP contractors that we reviewed. In response to our work, CMS officials stated that they are in the process of modifying the non-MSP debt-tracking systems to allow the agency to monitor how promptly contractors are referring debts in the future. CMS has not developed a comprehensive plan that covers all types of Medicare debt eligible for referral. The agency lacks information on the total dollar amount of eligible debts not covered by its current referral instructions to the Medicare contractors, and it has not developed a detailed plan or specific time frame for referring these debts. Without a comprehensive plan in place, CMS faces significant challenges to be able to achieve its goal of referring 100 percent of eligible debts in fiscal year 2002.
Types of debt for which CMS has not yet established a referral plan include, but are not limited to, the following:

MSP liability. MSP liability debts arise when Medicare covers expenses related to accidents, malpractice, workers' compensation, or other items not associated with group health plans that are subsequently determined to be the responsibility of another payer.

Part A claims adjustments. Part A claims receivables are created when previously paid claims are adjusted. Reasons for claims adjustments include duplicate processing of charges or claims, payment for items or services not covered by Medicare, and incorrect billing. The CMS debt-tracking system does not track these debts. Debts resulting from claims adjustments are generally offset from subsequent Medicare payments and require no further collection action. Should subsequent Medicare payments be unavailable for offset, however, no requirements exist for Medicare contractors to perform any other collection actions, such as issuing a demand letter.

We found that as of September 30, 2000, the four contractors we reviewed held about $9.6 million of MSP liability debts and about $10.7 million of debts related to Part A claims adjustments. CMS officials stated that the agency intends to refer both types of debt to PSC in the future. The amounts of eligible debt CMS reported in the September 30, 2000, Medicare Trust Fund TROR were not reliable. CMS did not properly report the delinquency aging for certain debts, including debts previously transferred to regional offices for collection. CMS also did not properly report its exclusions from referral requirements. For example, the agency inappropriately reported as excluded $149 million of non-MSP debts that had been referred to CMS regional offices for collection. In addition, CMS did not report any exclusion amounts for MSP debts, even though we noted that certain MSP debts were involved in litigation, or for non-MSP debts under investigation for fraud.
Finally, because of a data-entry error, CMS inadvertently overstated debt referrals by $67 million. It is imperative that CMS provide Treasury with reliable information on eligible Medicare debt. Treasury uses the information to monitor agencies’ implementation of DCIA. In addition, the TROR is Treasury’s only comprehensive means of periodically collecting data on the status and condition of the federal government’s nontax debt portfolio, as required by the Debt Collection Act of 1982 and DCIA. CMS’s delinquent Medicare debts represent a significant portion of delinquent debts governmentwide. Therefore, they must be reported accurately if governmentwide debt information is to be useful to the president, the Congress, and OMB in determining the direction of federal debt management and credit policy. According to CMS officials, the agency is revising its method for determining eligible debt amounts. For example, CMS officials stated that the agency no longer reports debts referred to regional offices as exclusions and is in the process of identifying and reporting exclusion amounts for MSP debts. Although CMS made progress in referring eligible Medicare debts to PSC in fiscal year 2001 and met its referral goal for the year, a substantial portion of Medicare debts—particularly MSP debts—are still not being promptly referred for collection action. Inadequate contractor monitoring, resulting partly from CMS’s debt system limitations, has contributed to the slow pace of MSP debt referrals. In addition, CMS has not begun referring certain types of eligible Medicare debts, such as MSP liability debts, and those debts will continue to age until CMS completes and implements a comprehensive referral plan. Since recovery rates decrease dramatically as debts age, CMS cannot accomplish DCIA’s purpose of maximizing collection of federal nontax debt unless it refers eligible debts promptly. 
CMS's policy of closing out eligible MSP debts solely on the basis of their age, without performing a quantitative study to determine whether collection action would be cost-effective, has also reduced referrals and eliminated opportunities for potential collections on those debts. In addition, by not reporting closed-out debts to IRS, the federal government may be missing an opportunity to increase government receipts. Medicare debts are a significant share of delinquent debt governmentwide, and CMS's inaccurate reporting to Treasury on exclusion amounts, debt aging, and referrals may distort governmentwide debt information used to determine the direction of federal debt management and credit policy. CMS's inaccurate reporting of eligible debt amounts also impedes Treasury's ability to monitor the agency's compliance with DCIA. To help ensure that CMS promptly refers all eligible delinquent Medicare debts to PSC, as we recommended in September 2000, and that all benefits from closed-out debts are realized, we recommend that the administrator of CMS

establish and implement policies and procedures to monitor contractors' implementation of CMS's May 2001 instructions to ensure the prompt referral of eligible MSP debts;

implement changes to CMS's non-MSP debt-tracking systems so that CMS personnel will be better able to monitor contractors' referral of eligible non-MSP debts as required by CMS's April 2001 instructions to contractors;

develop and implement a comprehensive referral plan for all eligible delinquent Medicare debts that includes time frames for promptly referring all types of debts, including MSP liability and Part A claims adjustments debts;

perform an assessment of MSP debts being closed out because they are more than 6 years and 3 months delinquent to determine whether to pursue collection action on the debts, and document the results of the assessment;

establish and implement policies and procedures for reporting closed-out Medicare debts, when appropriate, to IRS; and

validate the accuracy of debt-eligible amounts reported in the Medicare Trust Fund TROR by establishing a process that ensures, among other things, (1) accurate reporting of the aging of certain delinquent debts, (2) accurate and complete reporting of debts excluded from referral requirements, and (3) verification of data entry for referral amounts.

In written comments on a draft of this report, CMS agreed with five of our six recommendations and summarized actions taken or planned to address those five. CMS expressed confidence that it would attain its goal of referring all eligible debt to Treasury by year-end as part of its overall financial plan. Regarding our recommendation to assess closed-out MSP debts that were more than 6 years and 3 months delinquent to determine whether to pursue collection action on them, CMS stated that further collection efforts would not be cost-effective. According to CMS, medical services at issue in these MSP debts are typically from the early 1990s and often involve Medicare services from the mid- to late 1980s. CMS indicated that the costs of validating the debts and the costs and fees associated with DCIA cross-servicing and TOP were too great to justify additional collection efforts. However, as we stated in the report, CMS could not provide any documentation to support its position that it is not cost-effective to attempt to collect older MSP debts, and CMS did not test this assumption in its MSP pilot project. CMS's efforts to collect this debt prior to close-out were not adequate. The Federal Claims Collection Standards require that before terminating collection activity, agencies are to pursue all appropriate means of collection and determine, based on the results of the collection activity, that the debt is uncollectible.
According to discussions with Medicare contractor officials, the collection activity for many of these MSP debts was limited to issuance of demand letters, which does not satisfy the requirement that all appropriate means of collection action be pursued on debts. In addition, most of the closed-out MSP debts at the Medicare contractors we visited were less than 10 years delinquent and therefore could have been referred to PSC for collection action, including reporting to TOP. As such, we continue to believe that CMS should assess MSP debt to determine whether additional collection activity is appropriate in light of the minimal prior collection activity. As agreed with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies to the chairmen and ranking minority members of the Senate Committee on Governmental Affairs and the House Committee on Government Reform and to the ranking minority member of your subcommittee. We will also provide copies to the secretary of health and human services, the inspector general of health and human services, the administrator of the Centers for Medicare & Medicaid Services, and the secretary of the treasury. We will then make copies available to others upon request. If you have any questions about this report, please contact me at (202) 512-3406 or Kenneth Rupar, assistant director, at (214) 777-5600. Additional key contributors to this assignment were Matthew Valenta and Tanisha Stewart.
GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO E-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to daily e-mail alert for newly released products” under the GAO Reports heading. Web site: www.gao.gov/fraudnet/fraudnet.htm, E-mail: [email protected], or 1-800-424-5454 or (202) 512-7470 (automated answering system).
The Debt Collection Improvement Act (DCIA) of 1996 requires that agencies refer eligible debts delinquent more than 180 days that they have been unable to collect to the Department of the Treasury for payment and offset and to Treasury or a Treasury-designated debt collection center for cross-servicing. The Centers for Medicare and Medicaid Services (CMS) made progress in referring eligible delinquent debts for collection during fiscal year 2001. Much of the referral volume was late in the year, however, and substantial unreferred balances remained at the end of the fiscal year. Inadequate procedures and controls hampered prompt identification and referral of both eligible non-Medicare Secondary Payer (MSP) and MSP debts. The delayed referral of non-MSP debts resulted from problems with the CMS debt-referral system and insufficient CMS monitoring of contractor referrals. The low level of MSP debt referrals resulted primarily from limited contractor efforts and insufficient CMS monitoring of contractor performance. Although GAO did not test whether selected CMS debts had been reasonably excluded from referral and reached no overall conclusion about the appropriateness of CMS exclusions, GAO found that CMS did not report reliable Medicare debt information to the Treasury Department as of September 30, 2000.
The Commission made 14 recommendations in the general area of aviation safety. Foremost among these is establishing a national goal to reduce the fatal accident rate by 80 percent within 10 years. This is a very challenging goal, particularly in light of the projected increases in air traffic in the coming decade. We applaud the Commission’s adoption of such a goal for accident reduction and endorse many of its recommendations for improving safety. These recommendations include, for example, expanding FAA’s inspection program to cover not only aging aircraft’s structural integrity but also such areas as electrical wiring, fuel lines, and pumps. A number of these recommendations resonate with safety and efficiency improvements that we and others, including FAA, have suggested over the years. However, we believe that, as FAA tries to fundamentally reinvent itself as the Commission contemplates through some of its recommendations, FAA and the aviation industry will be challenged in three areas: (1) FAA’s organizational culture and resource management, (2) FAA’s partnerships with the airline industry, and (3) the costs of and sources of funding to implement the recommendations. A number of recent studies and FAA itself have pointed to the importance of culture in the agency’s operations. Last year, our review of FAA’s organizational culture found that it had been an underlying cause of the agency’s persistent acquisition problems, including substantial cost overruns, lengthy schedule delays, and shortfalls in the performance of its air traffic control modernization program. Furthermore, the lack of continuity in FAA’s top management, including the Administrator and some senior executive positions, has fostered an organizational culture that has tended to avoid accountability, focus on the short term, and resist fundamental improvements in the acquisitions process. 
Similarly, a 1996 report issued by the Aviation Foundation and the Institute of Public Policy stated that the recent actions taken to reorganize FAA have done nothing to change the long-term structural problems that plague the organization. The study concluded that FAA does not have the characteristics to learn and that its culture does not recognize or serve any client other than itself. As FAA’s own 1996 report entitled Challenge 2000 points out, it will take several years to overcome the many cultural barriers at FAA, determine the skill mix of the workforce of the 21st century, and recruit the necessary talent in a resource-constrained environment. In light of these studies’ results, we would caution that the organizational and cultural changes envisioned by the Commission may require years of concerted effort by all parties concerned. In connection with resource management, FAA’s fiscal year 1998 budget request reveals some difficult choices that may have to be made among safety-related programs. For example, FAA proposes increasing its safety inspection workforce by 273 persons while decreasing some programs for airport surface safety, including a program designed to reduce runway incursions. The National Transportation Safety Board has repeatedly included runway incursions on its annual lists of its “most wanted” critical safety recommendations. FAA’s budget request includes a reduction in the Runway Incursion program from $6 million in fiscal year 1997 to less than $3 million in fiscal year 1998. Although FAA set a goal in 1993 to improve surface safety by reducing runway incursions by 80 percent by the year 2000 from the 1990 high of 281, the results have been uneven; there were 186 runway incursions in 1993 and 246 in 1995. As was shown by the November 1994 runway collision in St. Louis, Missouri, between a commercial carrier and a private plane, such incidents can have fatal consequences—2 people lost their lives. 
It is unclear what progress will be made in this area, given the proposed budget cuts. Similarly, we have reported since 1987 that the availability of complete, accurate, and reliable FAA data is critical to expanding the margin of safety. However, funding for FAA’s National Aviation Safety Data Analysis Center, a facility designed to enhance aviation safety through rigorous analysis of integrated data from many aviation-related databases, is slated to be reduced from $3.7 million in fiscal year 1997 to $2 million in fiscal year 1998. The Commission’s report stresses that safety improvements cannot depend solely on FAA’s hands-on inspections but must also rely on partnerships with the aviation industry in such areas as self-monitoring and certification. Several programs for the airlines’ self-disclosure of safety problems have already contributed to identifying and resolving some of these types of problems. For example, one airline’s program for reporting pilot events or observations—a joint effort by the airline, the pilot union, and FAA—has identified safety-related problems, the vast majority of which would not have been detected by relying solely on FAA surveillance. The discovery of these problems has resulted in safety improvements to aircraft, to the procedures followed by flight crews, and to air traffic patterns. As the Commission has recognized, however, such information will not be provided if its disclosure threatens jobs or results in punitive actions. However, FAA’s role in some broader partnerships with industry has also raised some questions. For example, FAA’s cooperative process working with Boeing on the 777 aircraft helped enable the manufacturer to meet the planned certification date, but FAA was also criticized by some of its own engineers and inspectors for inadequate testing of the aircraft’s design. 
In the case of self-disclosure programs, decisions will have to be made on which aviation entities are best suited to such partnership programs, how to monitor these programs and make effective use of the data they offer, how to balance the pressure for public disclosure against the need to protect such information, and how to standardize and share such information across the aviation industry. With broader cooperation between FAA and the aviation industry, the Congress and FAA need to be on guard that the movement toward partnerships does not compromise the agency’s principal role as the industry’s regulator. Finally, it is important to point out that the costs associated with achieving the accident reduction goal and who should pay for these costs have not yet been determined. In accordance with the Commission’s call for more government-industry partnerships, government, the industry, and the traveling public would likely share in these costs. For example, FAA’s partnership programs involve significant costs for both the agency and the industry. In the case of equipping the cargo holds of passenger aircraft with smoke detectors, the cost would fall initially on the industry, while the costs associated with the recommendation that children under the age of 2 be required to have their own seats on airplanes would fall more directly on the traveling public. Regardless of who bears the cost of the proposed improvements, the Commission has correctly recognized that additional safety improvements may sometimes be difficult to justify under the benefit-cost criteria applied to regulatory activities. The Commission recommended that cost not always be the determining factor or basis for deciding whether to put new aviation safety and security rules into effect. 
Specifically, the Commission notes that the potential reduction in the fatal accident rate merits a careful weighing of the options for improving safety in terms of the benefits that go beyond those traditionally considered in benefit-cost analyses. However, we also believe that it is important to recognize that the recommendation (1) represents a significant departure from traditional processes, (2) could result in significant cost increases for relatively modest increases in the safety margin, and (3) could rest on a limited empirical justification. In effect, this recommendation may increase the number of instances in which the primary factor determining whether or not to go forward with a safety or security improvement is what might be referred to as a public policy imperative rather than the result of a benefit-cost analysis. One instance of such a decision is the Commission’s recommendation to eliminate the exemption in the Federal Aviation Regulations that allows children under 2 to travel without the benefit of an FAA-approved restraint. The Commission also reviewed the modernization of the air traffic control (ATC) system. FAA is in the midst of a $34 billion, mission-critical capital investment program to modernize aging ATC equipment. This program includes over 100 projects involving new radars, automated data processing, and navigation, surveillance, and communications equipment. We believe this modernization is also important for attaining the next level of safety by replacing aging equipment and providing controllers and pilots with enhanced communication and better information. Recognizing that new technology, such as satellite-based navigation and new computers in ATC facilities and in aircraft cockpits, offers tremendous advances in safety, efficiency, and cost-effectiveness for users of the ATC system and for FAA, the Commission recommended accelerating the deployment of this new technology. 
According to FAA’s current plan, many of these elements would not be in place until the year 2012 and beyond. However, the Commission has recommended that these technologies be in place and operational by the year 2005—7 years ahead of FAA’s planned schedule. The Commission’s goal is commendable, but given FAA’s past problems in developing new ATC technology and the technical challenges that lie ahead, there is little evidence that this goal can be achieved. We have chronicled FAA’s efforts to modernize the air traffic control system for the past decade. Because of the modernization effort’s size, complexity, cost, and past problems, we designated it as a high-risk information technology initiative in 1995 and again in 1997. Many of FAA’s modernization projects have been plagued by cost overruns, schedule delays, and shortfalls in performance that have delayed important safety and efficiency benefits. We reported last year that the agency’s culture was an underlying cause of FAA’s acquisition problems. FAA’s acquisitions were impaired because employees acted in ways that did not reflect a strong commitment to, among other things, focus on and accountability for the modernization mission. More recently, we have identified other important factors that have contributed to FAA’s difficulty in modernizing the ATC system. For example, FAA’s lack of effective cost-estimating and -accounting practices forces it to make billion-dollar investment decisions without reliable information. Also, the absence of a complete systems architecture, or overall blueprint, to guide the development and evolution of the many interrelated ATC systems forces FAA to spend time and money to overcome system incompatibilities. We agree with the Commission’s recommendations to integrate the airports’ capacity needs into the ATC modernization effort and to enhance the accuracy, availability, and reliability of the Global Positioning System. 
However, we have two concerns about accelerating the entire modernization effort that focus on the complexities of the technology and the integrity of FAA’s acquisition process. First, the complexity of developing and acquiring new ATC technology—both hardware and software—must be recognized. The Commission contends that new ATC technology to meet FAA’s requirements is available “off-the-shelf.” However, FAA has found that significant additional development efforts have been needed to meet the agency’s requirements for virtually all major acquisitions over the past decade. More recently, two new major contracts for systems—the Standard Terminal Automation Replacement System and the Wide Area Augmentation System—called for considerable development efforts. Second, requiring FAA to spend at an accelerated rate could prove to be inconsistent with the principles of the agency’s new Acquisition Management System, established on April 1, 1996, in response to the legislation freeing it from most federal procurement laws and regulations. FAA’s acquisition management system calls for FAA to go through a disciplined process of (1) defining its mission needs, (2) analyzing alternative technological and operational approaches to meeting those needs, and (3) selecting only the most cost-effective solutions. Until FAA goes through this analytical and decision-making process, it is premature to predict what new technology FAA should acquire. For example, FAA itself points out that while satellite communications that link the communication and navigation functions offer tremendous potential benefits, the technology is not yet mature enough for civil aviation—significant development is needed to determine the requirements and operational concepts of the technology. In this particular case, accelerating the ATC modernization too much could increase the risk that FAA will make poor investment decisions. 
Overall, our message in this area is one of caution—accelerating the entire modernization effort will have to overcome a long history of problems that FAA’s new acquisition management system was designed to address and a number of obstacles. Aviation security is another component of ensuring the safety of passengers. It rests on a careful mix of intelligence information, procedures, technology, and security personnel. The Commission strongly presented aviation security as a national security priority and recommended that the federal government commit greater resources to improving it. Many of the Commission’s 31 recommendations on security are similar to those that we have made in previous reports. For example, the Commission urged FAA to deploy commercially available systems for detecting explosives in checked baggage at U.S. airports while also continuing to develop, evaluate, and certify such equipment. Similarly, the Commission echoed our recommendation that the government and the industry focus their safety and security research on the human factors associated with using new devices, especially on how operators will work with new technology. The Commission’s recommendations address a number of long-standing vulnerabilities in the nation’s air transportation system, such as (1) the screening of checked and carry-on baggage, mail, and cargo and (2) unauthorized individuals gaining access to an airport’s critical areas. Many of the 20 initial security recommendations that the Commission made on September 9, 1996, are already being implemented by the airlines or by government agencies. We found, however, that in the past FAA has had difficulty in meeting some of the time frames for implementing the safety improvements recommended by GAO and the Department of Transportation (DOT) Inspector General. Similarly, in the security area, FAA has also had problems meeting the implementation time frames. 
For example, FAA is just beginning to purchase explosives-detection systems to deploy at U.S. airports, although the Aviation Security Improvement Act of 1990 set an ambitious goal for FAA to have such equipment in place by November 1993. This delay was due primarily to the technical problems slowing the development and approval of the explosives-detection devices. But we also found that FAA did not develop an implementation strategy to set milestones and realistic expectations or to identify the resources to guide the implementation efforts. It is important that FAA sustain the momentum generated by the Commission’s report and move forward systematically to implement its recommendations. Finally, although the Commission concluded that many of its proposals will require additional funding, it did not specifically recommend funding levels for new security initiatives over the long term. Instead, the Commission recommended that the federal government devote at least $100 million annually to meet security capital requirements—leaving the decision on how to fund the remaining security costs to the National Civil Aviation Review Commission. The National Civil Aviation Review Commission is charged with looking at FAA funding issues, and we do not want to preempt its report and recommendations. But, for example, the $144.2 million appropriated by the Congress in 1997 for new security technology represents a fraction of the estimated billions of dollars required to enhance the security of air travel. To improve aviation security, the Congress, the administration, and the aviation industry need to agree on what to do and who will pay for it—and then to take action. In closing, Mr. Chairman, we face a turning point. The public’s concern about aviation safety and security has been heightened. The Congress and the administration have a renewed commitment to addressing this urgent national concern. 
The Commission’s work is a good start toward an evolutionary process of reaching agreement on the goals and steps to improve aviation safety and security. To guide the implementation of the Commission’s recommendations, DOT and FAA will need a comprehensive strategy that includes (1) clear goals and objectives, (2) measurable performance criteria to assess how the goals and objectives are being met, and (3) a monitoring, evaluation, and reporting system to periodically evaluate the implementation. This strategy could serve as a mechanism to track progress and establish the basis for determining funding trade-offs and priorities. In addition, successful implementation will require strong, stable leadership at DOT and at FAA. Although several complex questions remain unanswered, we hope that the Commission’s work can serve as a catalyst for change and a strengthened commitment to resolving these challenges to improving safety. This concludes my prepared statement. We would be glad to respond to any questions that you and Members of the Committee might have.
GAO discussed recommendations contained in the recently released report of the White House Commission on Aviation Safety and Security, focusing on the implementation issues relating to three areas addressed by the Commission: (1) aviation safety; (2) air traffic control (ATC) modernization; and (3) aviation security. GAO noted that: (1) foremost among the Commission's 14 recommendations for aviation safety is establishing a national goal to reduce the fatal accident rate by 80 percent within 10 years; (2) however, GAO believes that, as the Federal Aviation Administration (FAA) tries to fundamentally reinvent itself as the Commission contemplates through some of its recommendations, FAA and the aviation industry will be challenged by: (a) FAA's organizational culture and resource management; (b) FAA's partnerships with the airline industry; and (c) the costs of and sources of funding to implement the recommendations; (3) recognizing that new technology offers tremendous advances in safety, efficiency, and cost-effectiveness for users of the ATC system and for FAA, the Commission recommended accelerating FAA's deployment of new technology, but given FAA's past problems in developing new ATC technology and the technical challenges that lie ahead, there is little evidence that this goal can be achieved; (4) GAO agrees with the Commission's recommendations to integrate the airports' capacity needs into the ATC modernization effort and to enhance the accuracy, availability, and reliability of the Global Positioning System; however, GAO has two concerns about accelerating the entire modernization effort that focus on the complexities of the technology and the integrity of FAA's acquisition process; (5) the Commission strongly presented aviation security as a national security priority and recommended that the federal government commit greater resources to improving it; (6) in the past, FAA has had difficulty in meeting some of the time frames for implementing safety and 
security improvement recommendations; and (7) to improve aviation security, the Congress, the administration, and the aviation industry need to agree on what to do and who will pay for it, and then take action.
Marketplace lending connects consumers and small businesses seeking online and timelier access to credit with individuals and institutions seeking investment opportunities. Marketplace lenders use traditional, and may use less traditional, types of data and credit algorithms to assess creditworthiness and underwrite consumer loans, small business loans, lines of credit, and other loan products. The marketplace lending subsector originated as person-to-person lending where individual investors financed loans to consumers. The investor base for online marketplace lenders has expanded to include institutional investors such as hedge funds and financial institutions. Additionally, a market has emerged for securitizations of marketplace lending loans—both consumer and small business loan-backed offerings. Marketplace lending firms have evolved to offer a wide variety of loan products and services to consumers and small businesses and have recently begun to offer mortgages, life insurance, and auto loans. Although a number of marketplace lending models exist, publications we reviewed highlighted two common models: direct lenders and platform lenders. Direct lenders, also known as balance sheet lenders, use capital obtained from outside sources to fund loans and often hold loans on their balance sheet. Examples of direct lenders include CAN Capital, Kabbage, and SoFi. Platform lenders partner with depository institutions to originate loans that are then purchased by the lender or by an investor through the platform. Examples of platform lenders include LendingClub Corporation, Prosper, and Upstart. However, there are various permutations based on these two common models. For example, direct lenders like OnDeck have developed hybrid models, selling some whole loans to institutional investors while retaining servicing responsibilities. 
The marketplace lending process for the two models typically begins with a prospective borrower filling out an online application on the marketplace lending platform’s website. To assess creditworthiness and underwrite loans, marketplace lenders use credit algorithms and traditional credit data (e.g., credit scores, income, and debt repayment history) but, according to publications we reviewed, may also use less traditional data such as monthly cash flow and expenses, educational history, payment and sales history, and online customer reviews. After assessing the creditworthiness and needs of the applicant, the marketplace lender will approve or deny the borrower’s loan request. Generally, the loan will include a principal amount and an interest amount, and the marketplace lender may charge a servicing fee for collecting and transmitting payments and handling collections in case of a default. Funding a borrower’s request depends on the business model of the marketplace lender. Direct lenders typically originate the loan, hold most or all of the loans on their own balance sheets, earn interest on the loans, and carry credit risk for the entire loan (that is, the risk that the borrower does not repay); see figure 1. These lenders can raise funds to make loans by issuing equity to institutional investors (in addition to other means). Platform lenders match investors (institutional or individual) to loans that a depository institution, such as a bank, originates (see fig. 2). If the loan is made and transferred to investors, the platform lender services the account. Investors have the option of either partially or fully funding a loan.

Consumers: can use term loans from marketplace lenders to cover personal expenses (such as home or medical expenses); consolidate debt; or refinance student loans, among other reasons. 
According to Treasury, three marketplace lenders offer consumer loans ranging from $1,000 to $40,000. Treasury also indicated that marketplace lending firms generally provide consumer loans to prime and near-prime borrowers although some marketplace lending firms target subprime borrowers or applicants without credit scores or with a limited credit history.

Small Businesses: can use short- and fixed-term loans, lines of credit, and merchant cash advances from marketplace lenders, among other products and services, to finance business expenses and expansions, among other reasons. According to a Federal Reserve Bank of Cleveland publication, limited data are available about the types of small businesses that use online lenders, why they have chosen to apply, how successful they are in obtaining funds, and how satisfied they are with their experiences as borrowers.

Lower costs: Marketplace lenders’ online structure may reduce overhead costs because not all firms have brick-and-mortar locations. In addition, the algorithms used by marketplace lenders to underwrite credit decisions may result in lower underwriting costs when compared to banks’ underwriting costs.

Expanded access to credit: Marketplace lending may expand credit access to underserved populations that may not meet traditional lending requirements or that seek smaller loans than those that banks traditionally offer.

Faster service: According to Treasury, marketplace lenders can provide funding decisions within 48 to 72 hours from when applications are submitted. According to an SBA Office of Advocacy publication, LendingClub Corporation advertises that potential applicants can receive a quote within minutes and that its approval and funding process typically takes 7 days, Kabbage Inc. can provide same-day approval for small business loans, and OnDeck can provide funding within 24 hours. 
According to representatives from one industry organization we spoke with, faster service is beneficial to small businesses that may need quick access to credit in an emergency, such as a restaurant that needs its oven or refrigerator repaired to continue operations.

Payment term transparency: Marketplace lending firms offer various loan types and terms, particularly for small business loans. It can be difficult for small businesses to understand and compare loan terms such as the total cost of capital or the annual percentage rate. According to a Federal Reserve 2015 survey, one reason for small business borrowers’ dissatisfaction with online lenders was a lack of transparency.

Small business borrower protections: Current federal laws and regulations applicable to marketplace lending generally apply to consumer loans and not small business loans or other commercial loans. For example, the Truth in Lending Act, which among other things, requires the lender to show the cost and terms to the borrower, applies to consumer loans but generally not small business loans. According to Treasury, small business loans under $100,000 share common characteristics with consumer loans, yet do not receive the same protections. However, the report also notes that small business loans may receive protection under the enforcement of fair lending laws under the Equal Credit Opportunity Act.

Use of less traditional data in credit decisions: Unlike traditional lending companies that look at a person’s credit reports (which include reported installment credit and revolving credit), publications we reviewed indicate that some marketplace lenders also take into account or have considered using less traditional data (e.g., utilities, rent, telephone bills, educational history) during the underwriting process. However, according to Treasury, data-driven algorithms used by marketplace lenders carry the risk for potential fair lending violations. 
According to staff from FTC, marketplace lenders must ensure that their practices meet fair lending and credit reporting laws. The use of less traditional data also introduces the risk that the data used are inaccurate and concerns that consumers may not have sufficient recourse if the information being used is incorrect.

Uncertainty about performance in full credit cycle: According to publications we reviewed, the marketplace lending subsector experienced considerable growth following the 2007-2009 economic downturn in an environment with tightened lending standards and low interest rates. In addition, little is known about how the industry will perform in other economic conditions such as a recession, which could lead to delinquency and defaults of marketplace loans. According to the Congressional Research Service (CRS), it is also possible that loan servicing could be disrupted in the event the marketplace lender goes out of business.

Partnerships: According to Treasury, some marketplace lenders have sought partnerships with traditional banks and community development financial institutions (CDFI) in various models. According to a CRS report, in a white label partnership, a traditional bank sets underwriting standards, originates the loan, and holds the loan once issued. The bank can integrate a marketplace lending firm’s technology services to originate the loan. For example, JPMorgan Chase & Co. partnered with OnDeck to offer small business loans to JPMorgan Chase & Co. customers. In referral partnerships, banks refer customers who do not meet a bank’s underwriting standards, or who are seeking products the bank does not offer, to a marketplace lender. In turn, the bank may collect a fee from the marketplace lender. Referrals may also allow CDFIs to reach customers that may otherwise not be served. 
For example, in 2015, Regions Bank, Fundation Group LLC (an online small business marketplace lender), and TruFund (a CDFI) partnered to provide small loans to underserved small businesses. Self-regulatory efforts: A number of self-regulatory marketplace lending efforts were established with the intent of developing responsible innovation and mitigating and reporting risks to potential borrowers seeking marketplace lending products. However, limited information is available on the impact of these efforts. Four examples are discussed below. The Marketplace Lending Association (MLA) was established in April 2016 to represent the marketplace lending industry. MLA states that one of its goals is to support responsible growth in the marketplace lending sector. The Online Lenders Alliance represents firms offering loans online. The Alliance provides resources including a consumer hotline, a portal to report fraud, and consumer tips. In 2016, three small business lending platforms formed the Innovative Lending Platform Association. The Association developed the Straightforward Metrics Around Rate and Total cost (SMART) Box tool, which presents clear and consistent pricing metrics, metric calculations, and metric explanations to help small businesses understand and assess the costs of their small business finance options. For example, metrics described in the SMART Box tool include total cost of capital, annual percentage rate calculations, and average monthly payment amounts. In 2015, the Responsible Business Lending Coalition launched the Small Business Borrowers Bill of Rights to foster greater transparency and accountability across the small business lending sector. The regulation of marketplace lenders is largely determined by the lenders' business model and the borrower or loan type.
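To illustrate the kinds of pricing metrics the SMART Box tool presents, the following sketch computes total cost of capital and an approximate annual percentage rate for a fixed-payment loan. This is our simplified illustration, not the Association's actual methodology: the bisection solve and the example loan figures are assumptions, and a Regulation Z-compliant APR calculation involves additional rules not shown here.

```python
def total_cost_of_capital(principal, payment, n_payments, fees=0.0):
    """Total dollars paid beyond the amount borrowed (illustrative)."""
    return payment * n_payments + fees - principal

def approx_apr(principal, payment, n_payments, periods_per_year=12):
    """Solve by bisection for the periodic rate at which the present value
    of the payment stream equals the principal, then annualize.
    (A sketch, not a Regulation Z-compliant APR calculation.)"""
    def present_value(rate):
        return sum(payment / (1 + rate) ** t for t in range(1, n_payments + 1))
    lo, hi = 0.0, 1.0  # bounds on the periodic rate
    for _ in range(100):
        mid = (lo + hi) / 2
        if present_value(mid) > principal:  # PV too high means the rate is too low
            lo = mid
        else:
            hi = mid
    return lo * periods_per_year

# Illustrative loan: borrow $50,000, repay $4,500 per month for 12 months
print(total_cost_of_capital(50_000, 4_500, 12))  # 4000.0
print(approx_apr(50_000, 4_500, 12))             # annualized rate, roughly 14-15%
```

As the example suggests, a loan with a modest total cost of capital can still carry a substantial APR when the term is short, which is one reason the tool presents both metrics side by side.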
For example, marketplace lenders that provide services through an arrangement with a federally regulated depository institution may be subject to examination as a third-party service provider by the federal prudential regulator. The federal prudential regulators have provided third-party guidance or vendor risk management guidance to depository institutions that describes the risk assessment, due diligence and risk monitoring, and oversight that depository institutions should engage in when they deal with third parties, including marketplace lenders. Depending on the facts and circumstances, including the type of activities being performed, marketplace lenders may be subject to federal consumer protection laws enforced by CFPB and FTC. Also, CFPB and FTC maintain databases of consumer complaints. In March 2016, CFPB announced it would begin accepting consumer complaints about marketplace lenders. However, according to CFPB staff, CFPB’s complaint system does not specifically categorize complaints for marketplace lending because consumers may not know whether to categorize those services as such. FTC encourages consumers to file a complaint if they believe they have been the victim of fraud, identity theft, or other unfair or deceptive business practices. According to FTC staff, fintech is not a category within FTC’s consumer complaint database and marketplace lending complaints are generally categorized as consumer loan complaints. As previously discussed, certain regulations generally apply to consumer loans but may not apply to small business loans or other commercial loans. However, FTC has authority under Section 5 of the Federal Trade Commission Act to protect, among others, small businesses that are consumers of marketplace lending products or services from unfair or deceptive business acts or practices. 
At the federal level, we previously noted that SEC regulates the offer and sale of securities to investors through disclosure requirements and antifraud provisions that can be used to hold companies liable for providing false or misleading information to investors. The Securities Act of 1933 generally requires issuers that make a public offering of securities to register the offer and sale of their securities with SEC and provide investors with disclosures that include information about the company issuing securities such as risk factors and financial information. According to staff from SEC, certain transactions by marketplace lenders may be exempt from the registration requirements of the Securities Act of 1933 depending on the particular facts of their securities offerings. At the state level, state securities regulators are generally responsible for registering certain securities products and, along with SEC, investigating securities fraud. Table 1 provides examples of federal laws and regulations relevant to marketplace lending. Marketplace lenders are subject to state-level laws in each state in which they are licensed to conduct business. Specifically, some marketplace lenders that originate loans directly to consumers or businesses (e.g., a direct marketplace lender) are generally required to obtain licenses and register in each state in which they provide lending services. According to officials from CSBS, state regulators then have the ability to supervise these lenders, ensuring that the lender is complying with state and federal lending laws. CSBS officials noted that the states leverage the Nationwide Multistate Licensing System (NMLS) to facilitate compliance with state-by-state licensing mechanisms. NMLS is intended to enable firms to complete one record to apply for state licensing that fulfills the requirements of each state, for states that participate in the system.
Some agencies have taken a number of steps to understand and monitor the fintech industry, including the marketplace lending subsector. For example, in May 2016, Treasury issued a whitepaper on marketplace lending. In November 2016, SEC hosted a fintech forum where industry representatives and regulators discussed capital formation (including marketplace lending and crowdfunding) and related investor protections. On December 2, 2016, the Comptroller of the Currency announced its intent to make special-purpose national bank charters available to fintech companies, such as marketplace lenders. OCC published a paper discussing issues related to chartering special-purpose national banks and solicited public comment to help inform its path moving forward. OCC plans to evaluate prospective applicants' reasonable chance of success, appropriate risk management, effective consumer protection, fair treatment and access, and capital and liquidity position. Mobile payments allow consumers to use their smartphones or other mobile devices to make purchases and transfer money. Consumers and businesses use these devices to make and receive payments instead of relying on the physical use of cash, checks, or credit and debit cards. According to publications we reviewed, there are different ways to make mobile payments, including the use of a mobile wallet. Mobile wallets are electronic versions of consumers' wallets that offer consumers the convenience of faster transactions without having to enter credit or debit card information for each transaction. Using a mobile wallet, consumers can store payment card information and other information often needed to complete a payment on their mobile devices for later use. Generally, mobile wallets replace sensitive information with randomly generated numbers—a process called tokenization—that provides greater security when making a payment, and then transmit this information using existing credit and debit card networks.
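The tokenization process described above can be sketched in a few lines. This is a simplified illustration, not any provider's actual scheme; the vault structure and function names are our assumptions. The key idea is that the merchant sees only a random surrogate value, while only the token service provider can map it back to the card number.

```python
import secrets

_vault = {}  # token -> real card number; held only by the token service provider

def tokenize(card_number: str) -> str:
    """Replace a card number with a random surrogate that carries no card data."""
    token = secrets.token_hex(8)
    _vault[token] = card_number
    return token

def detokenize(token: str) -> str:
    """Only the vault holder can recover the original card number."""
    return _vault[token]

token = tokenize("4111111111111111")
assert token != "4111111111111111"               # the merchant sees only the token
assert detokenize(token) == "4111111111111111"   # the vault maps it back
```

Because the token is random, intercepting it in transit or stealing it from a merchant's systems does not expose the underlying card number.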
A variety of companies provide mobile wallets, including Apple, Google, and Samsung; merchants such as Starbucks, Walmart, and CVS; and financial institutions such as JPMorgan Chase & Co. and Citibank. Consumers may use mobile wallets to make payments to other consumers, referred to as person-to-person (P2P) payments, or to businesses, referred to as person-to-business (P2B) payments, either in mobile applications, through mobile browsers, or in person at a store's point-of-sale terminal. In addition, other providers, such as PayPal or Venmo, allow individuals to create accounts to receive and make payments. P2P payments: Consumers can transfer value from a bank account (checking or savings), stored funds in a mobile wallet, credit/debit card, or prepaid card to another consumer's account. P2P methods use the Internet, mobile applications, or text messages and generally move funds through the automated clearing house (ACH) network or debit and credit card networks. A variety of fintech firms provide P2P services. For example, current P2P providers include PayPal, Venmo, and Google; social networks such as Facebook and Snapchat; and financial institutions such as Bank of America Corporation and JPMorgan Chase & Co. P2B payments: Consumers can also use their mobile devices to make payments to businesses in stores or on their mobile device. In stores, consumers can use mobile wallets to pay a business for goods or services at compatible point-of-sale terminals. These transactions rely on various technologies to transfer payment data between the consumer's mobile device and the business, including quick response (QR) codes and wireless communication technologies that enable the payment information to be transferred by allowing compatible devices to exchange data when placed in very close proximity to each other (see fig. 3).
The Federal Reserve’s 2016 report on Consumers and Mobile Financial Services found that of those with a mobile phone in 2015, 30 percent of individuals ages 18 to 29 and 32 percent of individuals ages 30 to 44 made mobile payments. By comparison, 13 percent of those ages 60 or over made a mobile payment (see fig. 4). From 2011 to 2014, the same general trend was true: younger adults were more likely to make a mobile payment than older age groups. However, the survey results are not comparable because the definition of mobile payments was revised for the 2015 survey. According to a survey by the Pew Charitable Trusts of over 2,000 consumers, 46 percent of the U.S. population reported having made a mobile payment. Specifically, 39 percent of mobile payments users were millennials and 33 percent were between the ages of 35 and 50 compared to 29 percent of users over the age of 50. Underbanked: FDIC and the Federal Reserve have found that underbanked consumers use mobile financial services. According to a 2015 survey by FDIC, 20 percent of households in the United States were underbanked, meaning that the household had an account at an insured institution but also obtained financial services and products outside of the banking system. According to qualitative research conducted by FDIC in 2016, underbanked consumers stated that they used P2P payments and a variety of financial products to manage their day-to-day finances. The Federal Reserve’s 2015 survey indicated that a higher percentage of underbanked consumers used mobile payments than fully banked respondents (34 percent versus 20 percent). Convenience and efficiency: According to publications we reviewed, mobile wallets offer consumers the convenience of instant transactions without having to enter credit card information, PIN numbers, and shipping addresses each time they make a purchase. Mobile wallets can also streamline the checkout time. 
For example, consumers can wave their smartphone in front of an in-store terminal to make a purchase, which can be faster than swiping a credit or debit card. Data security: Mobile payments can be protected by various security mechanisms, such as codes that must be entered to access a mobile device. According to publications we reviewed, mobile wallets may also improve data security by replacing a consumer's payment card information with a randomly generated number, or token. Mobile payments can use this token to transact with a merchant, which better protects consumer account credentials. Many of the potential risks associated with mobile payments are the same as those that exist with traditional payment products. Some examples of those risks are discussed below. Data security: Data security risks include the possibility of payment and personal data being lost or vulnerable to theft because of consumers' reliance on the use of smartphones or other mobile communication devices. According to the Federal Reserve's 2015 survey, respondents identified concerns about the security of the technology as one of the main reasons they do not use mobile payments. Security concerns include a smartphone being hacked, lost, or stolen, or a company failing to sufficiently protect mobile transactions, among other concerns. Human error and confusion: According to publications we reviewed, mobile payment methods can create operational risk from human error. For example, consumers can deposit or send money to the wrong person when using P2P payments (e.g., if they type in the wrong phone number). Mobile payment methods can also increase consumer confusion regarding protections based on the underlying funding source. According to FDIC, consumers may not understand which regulators supervise the parties providing mobile payments and may be unsure which consumer protections apply.
Mobile Payment Activities: According to the Federal Reserve’s 2015 survey, the three most common mobile payment activities among mobile payments users with smartphones were paying bills through a mobile phone web browser or app (65 percent), purchasing a physical item or digital content remotely using a mobile phone (42 percent), and paying for something in-store using a mobile phone (33 percent). Partnerships: Some industry stakeholders we spoke with said that the relationship between banks and mobile payment firms has changed to more partnerships because banks and mobile payment firms recognize mutual benefits. For example, mobile payment firms can benefit from banks’ experience with regulatory compliance and banks can remain competitive by meeting the needs of their customers. The regulatory and oversight framework for mobile payments consists of a variety of federal and state regulation and oversight. Determining which laws apply to mobile payments depends on several factors, including agency jurisdiction, mobile payment providers’ relationship to depository institutions, and the type of account used by a consumer to make a mobile payment. Three of the federal prudential regulators—Federal Reserve, FDIC, and OCC—are authorized to examine and regulate the provision of certain services provided by mobile payment providers for federally insured banks and thrifts. For example, these regulators can examine mobile payment providers that are considered third-party service providers of a regulated depository institution if the payment provider offers services to customers on behalf of a depository institution. The federal prudential regulators can also take enforcement actions against mobile payment providers if the provider is an institution-affiliated party of the bank. CFPB has consumer protection authority over certain nonbank institutions and enforcement jurisdiction over entities that offer or provide consumer financial products or services. 
In October 2016, CFPB issued a final rule to add prepaid cards and some of the payment services that fintech providers are offering, such as PayPal, to the definition of accounts covered under regulations applicable to electronic fund transfer systems such as automated teller machine transfers, telephone bill-payment services, point-of-sale terminal transfers in stores, and preauthorized transfers from or to a consumer’s account (such as direct deposit and Social Security payments). According to CFPB staff, the rule is aimed at providing wide-ranging protections to consumers holding prepaid accounts. Although this rule largely focuses on prepaid cards, the protections also extend to P2P payments and certain mobile wallets that can store funds. Nonbank providers of financial products and services, including mobile payment providers and prepaid card providers, may be subject to FTC consumer protection enforcement actions. According to FTC staff, FTC has brought and settled enforcement actions alleging unfair or deceptive conduct by wireless providers providing mobile payment services. Finally, at the federal level, the Federal Communications Commission (FCC) has jurisdiction over wireless providers, which provide the devices used for mobile payments or sometimes collect such payments through their customers’ billing statements. According to FDIC, to date, no federal laws and regulations specifically govern mobile payments. However, to the extent a mobile payment uses an existing payment method, the laws and regulations that apply to that method also apply to the mobile payment. Table 2 provides examples of federal laws and regulations relevant to mobile payment transactions. State regulators also have authority to regulate mobile payment providers. For example, most states have licensing and regulatory authority over money service businesses that provide money transfer services or payment instruments, which can include mobile payment providers. 
For example, fintech firms such as PayPal and Google Wallet are subject to state money transmitter laws. State regulators have made efforts to make the state licensing process less burdensome by conducting multistate exams and using NMLS to facilitate these processes. In interviews, some agencies told us they formed working groups to monitor and understand mobile payments. Examples are listed below. In January 2010, the Federal Reserve started the Mobile Payments Industry Working Group to facilitate discussions as to how a successful mobile payments (as opposed to mobile banking) system could evolve in the United States. The working group meets several times annually to share information and ideas. In addition, the Federal Reserve established a multidisciplinary working group focused on analyzing potential innovation in fintech, including payments. FDIC established a formal FinTech Steering Committee and two working groups; one of the working groups focuses in part on mobile payments. CFPB met with payment innovators through its Project Catalyst. CSBS formed an Emerging Payments and Innovation Task Force in 2013 to study changes in payment systems to determine the potential impact on consumer protection, state law, and banks and nonbank entities chartered or licensed by the states.
However, according to staff from SEC, because digital wealth management firms register as investment advisers and are not all separately counted or categorized, the total number of these entities is not known. Digital wealth management firms incorporate technologies into their portfolio management platforms primarily through the use of algorithms designed to optimize wealth management services. Fully automated platforms have features that let investors manage their portfolios without direct human interaction. Examples of current digital wealth management firms include Betterment, Wealthfront, Personal Capital, BlackRock's Future Advisor, and Acorns. Publications we reviewed indicate that digital wealth management platforms typically collect information on customers and their financial history using online questionnaires. These questionnaires may cover topics such as the customer's age, income, investment horizon, risk tolerance, and expected returns, among other information. Digital wealth management platforms allow customers to connect multiple accounts, often across multiple providers, to create a holistic picture of their wealth and more easily manage their finances across multiple asset classes and firms. Digital wealth management platforms use the information entered by the customer to help the customer select a risk profile. The firms then use algorithms to generate a suggested investment strategy for the customer based on that risk profile. Platforms can automatically rebalance customers' portfolios in response to the performance of the underlying investments and the customers' goals (see fig. 5). Adviser-assisted digital wealth management platforms combine a digital client portal and investment automation with a virtual financial adviser typically conducting simple financial planning and periodic reviews over the phone. Examples of current platforms in this category include Personal Capital, Future Advisor, and LearnVest.
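The automatic rebalancing step described above can be sketched as a simple calculation. The target weights and dollar figures below are illustrative assumptions, not any platform's actual algorithm: the platform compares each holding's current dollar value with the client's risk-profile target and generates buy and sell amounts that restore the target allocation.

```python
def rebalance_orders(holdings, target_weights):
    """Given current dollar holdings and target weights from the client's
    risk profile, return the dollar amount to buy (+) or sell (-) per asset."""
    total = sum(holdings.values())
    return {asset: round(target_weights[asset] * total - holdings[asset], 2)
            for asset in holdings}

# Illustrative moderate-risk profile: 60% stocks, 40% bonds.
# Market gains have drifted the portfolio to 70% stocks.
holdings = {"stocks": 7_000.0, "bonds": 3_000.0}
targets = {"stocks": 0.60, "bonds": 0.40}
print(rebalance_orders(holdings, targets))  # {'stocks': -1000.0, 'bonds': 1000.0}
```

In practice a platform would also account for trading costs, tax consequences, and drift thresholds before placing orders, but the core logic is this comparison of current weights against the risk-profile targets.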
To further differentiate themselves, they may offer value-added services like asset aggregation capabilities that enable the provision of more holistic advice than fully automated digital wealth managers, based on a comprehensive view of client assets and liabilities, as well as expense-tracking and advice on budgeting and financial-goal planning. Increased access to wealth management services: Publications we reviewed indicated that digital wealth management platforms may expand access to underserved segments such as customers with smaller asset amounts than those of traditional consumers of wealth management services. For example, some platforms may not require customers to maintain minimum balance amounts. Traditional firms may require minimum investment amounts of $250,000, whereas some digital platforms require a minimum of approximately $500 or no minimum at all. Convenience: Regardless of location or the time of day, investors with a smart phone, tablet, or computer can make changes to their data and preference inputs, send instructions, access their portfolios, and receive updated digital advice. Lower fees: According to publications we reviewed, digital wealth management platforms may charge lower fees for services such as investment trade fees than traditional wealth management firms. Some of the potential risks associated with digital wealth management platforms may be similar to those that exist with traditional wealth management services. Examples of those risks are discussed below. Insufficient or incomplete information from customers: According to publications we reviewed, some digital wealth management platforms generate investment outputs based on information provided by the client from questionnaire responses. A traditional wealth manager is able to ask and clarify questions and request follow-up information to capture a customer’s full finances and goals. 
However, automated responses may not allow the platform to capture a full picture of the customer’s circumstances or short-term goals, for example, whether the customer may need investment money to buy a new home. If the customer does not understand a question, or does not answer it completely, the platform may not assess customers’ full financial circumstances; for example, if a customer provides conflicting information on his or her finances, the digital wealth management platform may not have a full picture of the client’s financial condition or a customer may end up with an undesired portfolio. Inaccurate or inappropriate assumptions: Staff of SEC’s Office of Investor Education and Advocacy (OIEA) and FINRA issued an investor alert on May 8, 2015, which cautioned that assumptions that underlie the algorithms used by digital wealth management firms could be incorrect. For example, the alert states that the platform may be programmed to use economic assumptions that will not react to shifts in the market. Specifically, if the platform assumes that interest rates will remain low but interest rates rise instead, the platform’s output will be flawed, which could adversely affect investors. Consumer Data Protection: To use digital wealth management platforms customers must enter personal information. According to an investor alert issued by SEC and FINRA staff, digital wealth management platforms may be collecting and sharing personal information for purposes unrelated to the platform. The alert cautions customers to safeguard personal information. According to publications we reviewed, fintech firms, including at least one digital wealth management platform, are using or have considered using innovative technologies such as machine learning and artificial intelligence. For example, one platform is intended to track consumers’ financial account activity and apply user behavior to the advice it delivers. 
Hybrid services have evolved that combine traditional wealth management and digital wealth management. For example, in 2015 Vanguard implemented a service that offers investors an option of consulting with a human advisory representative in addition to its automated investment platform. Traditional wealth management firms also offer digital wealth management services. For example, in 2015, Charles Schwab developed Intelligent Portfolios, available to customers with $5,000 in savings, and Deutsche Bank launched a robo-advisor within its online investment platform. SEC regulates investment advisers, which generally includes firms that provide digital wealth management platforms. Other federal and state agencies have a role with respect to oversight of digital wealth management firms, depending upon the services a digital wealth management platform provides. SEC and state securities regulators share responsibility for the oversight of investment advisers in accordance with the Investment Advisers Act of 1940 (Advisers Act). SEC subjects digital wealth management firms to the same regulations as traditional investment advisers and requires digital wealth management firms that manage over $110 million in assets to register as investment advisers. The Advisers Act generally requires anyone in the business of receiving compensation for providing investment advice to others regarding securities to register with SEC or one or more states. SEC’s supervision of investment advisers includes evaluating their compliance with federal securities laws by conducting examinations, including reviewing disclosures made to customers. It also investigates and imposes sanctions for violations of securities laws. SEC held a forum in November 2016 that discussed fintech innovation in the financial services industry, including the impact of recent innovation in investment advisory services, which includes digital wealth management. 
In January 2017, SEC's Office of Compliance Inspections and Examinations announced that electronic investment advice is a 2017 examination priority. In February 2017, SEC's Division of Investment Management issued guidance for robo-advisers that provide services directly to clients over the Internet. SEC's Office of Investor Education and Advocacy issued an Investor Bulletin that provided information to help investors using robo-advisers to make informed decisions in meeting their investment goals. State securities regulators generally have registration and oversight responsibilities for investment adviser firms that manage less than $100 million in client assets, if they are not registered with SEC. According to staff from SEC, state securities regulators can bring enforcement actions against firms with assets of any amount for violations of state fraud laws. For example, the state of Massachusetts' Securities Division issued a policy in April 2016 stating that fully automated robo-advisers may be inherently unable to carry out the fiduciary obligations of a Massachusetts state-registered investment adviser. The policy states that until regulators have determined the proper regulatory framework for automated investment advice, robo-advisers seeking state registration will be evaluated on a case-by-case basis. FINRA, a self-regulatory organization, is also responsible for regulating broker-dealers doing business with the public in the United States. Broker-dealers can use digital investment advice tools to provide investment services to clients. According to FINRA staff, FINRA may test the use of digital wealth management technologies by broker-dealers as part of its examinations. According to FINRA staff, FINRA has taken one enforcement action against a broker-dealer offering clients robo-adviser-like functionality.
In March 2016, FINRA issued a report to share effective practices related to digital investment advice tools and remind FINRA-registered broker-dealers of their obligations under FINRA rules, including that broker-dealers are required to supervise the types of businesses in which they engage. CFTC has oversight authority with respect to commodity trading advisers under the Commodity Exchange Act. According to CFTC officials, digital wealth management firms that meet the statutory definition of a commodity trading adviser would be subject to the same oversight and compliance obligations as other traditional commodity trading advisers. The act generally requires that commodity trading advisers register with CFTC. Digital wealth management firms are subject to consumer protection laws that are enforced by FTC. FTC is charged with protecting consumers against unfair or deceptive acts or practices in commerce. According to FTC staff, FTC enforces applicable consumer protection laws in regard to fintech services, such as digital wealth management, just as it applies those laws to other products and services. According to staff from CFPB, certain aspects of digital wealth management such as data aggregation, credit, or linked deposit accounts may also be subject to consumer oversight authority by CFPB. In April 2016, the Department of Labor (DOL) adopted a regulation that would expand the circumstances in which those who provide retirement investment advice, including digital wealth management firms, would have to abide by a “fiduciary” standard, acting prudently and in the best interest of their clients. The rule was scheduled to be applicable in April 2017. However, the President issued a memorandum on February 3, 2017, that directed the Secretary of DOL to examine the fiduciary duty rule to determine whether it may adversely affect the ability of Americans to gain access to retirement information and financial advice. 
In April 2017, DOL extended the applicability date by an extra 60 days. Distributed ledger technology (DLT) was introduced in 2009 as a technology intended to facilitate the recording and transferring of bitcoin, a virtual currency, specifically using blockchain. DLT has the potential to be a secure way of conducting transfers of digital assets on a near real-time basis, potentially without the need for an intermediary. DLT is a generic technology for a distributed database, while blockchain is one type of DLT. According to one study we reviewed, DLT involves a distributed database maintained over a network of computers connected on a peer-to-peer basis, such that network participants can share and retain identical, cryptographically secured records in a decentralized manner. A network can consist of individuals, businesses, or financial entities. One type of DLT is blockchain, which is a shared ledger that records transactions in a peer-to-peer network. Blockchain is a series of digital blocks of information (transactions) that are chained together. The party initiating a transaction sends a message represented as a block to a network of participants that can include financial institutions, financial market participants, and regulators. For a transaction to be included, network participants must validate the transaction. Once a transaction has been confirmed, details of the transaction are recorded on the blockchain and can be visible to network participants (see fig. 6). DLT solutions can have different types of access control. For example, there may be "permissionless" (public) ledgers that are open to everyone to contribute data to the ledger and cannot be owned; or "permissioned" (private) ledgers that may have one or many owners and only they can add records and verify the contents of the ledger. According to one study, permissioned DLT is not fully decentralized.
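The "chained together" structure of blockchain described above can be illustrated with a minimal sketch (the field names and verification routine are our illustrative assumptions): each block records the hash of the previous block, so altering any earlier transaction changes every subsequent hash and is detectable by network participants.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    """Append a block that records the hash of the block before it."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    """Every block must reference the hash of the preceding block."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, ["Alice pays Bob 5"])
add_block(chain, ["Bob pays Carol 2"])
assert verify(chain)
chain[0]["transactions"][0] = "Alice pays Bob 500"  # tamper with history
assert not verify(chain)                            # tampering is detectable
```

Because every participant can recompute the hashes, no single party needs to be trusted to certify that the recorded history is unaltered.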
According to publications we reviewed, an important feature of blockchain is that transactions added to a ledger are validated by network participants. This validation process is referred to as a consensus mechanism. Consensus mechanisms can help prevent the problem of double spending. Publications we reviewed indicate there are different kinds of consensus mechanisms, including proof-of-work and proof-of-stake. Proof-of-work may be used in permissionless DLT, and proof-of-stake may be used in permissioned DLT. Consensus mechanisms also incorporate security features such as cryptography and digital signatures, described below. Cryptography is used to encrypt data to ensure transactions are valid and to provide identity verification. For example, during asset transfers, a form of cryptography known as public key cryptography usually forms the foundation of the transaction validation process. Digital signatures are based on cryptography and are used in DLT to certify the authenticity of transactions (i.e., to show that a person is the true owner of an indicated digital identity). When a person creates and sends a DLT transaction, the transaction must also bear that person's digital signature. According to publications we reviewed, agencies, financial institutions, and industry stakeholders have identified potential uses for DLT in the financial services industry, such as the clearing and settlement of financial transactions. Examples of these transactions include private trades in the equity market and insurance claims processing and management. DLT can also incorporate smart contracts. Smart contracts can automate different kinds of processes and operations. For example, smart contracts can facilitate the automation of complex, multiparty transactions, such as the payment of bonds and insurance coupons. According to one study, there are several versions of smart contracts composed using computer code.
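The proof-of-work consensus mechanism mentioned above can also be illustrated with a small Python sketch. The difficulty level and transaction string are invented for the example; production systems such as Bitcoin hash full block headers and adjust difficulty over time:

```python
import hashlib

def proof_of_work(data: bytes, difficulty: int) -> int:
    """Search for a nonce whose hash meets the difficulty target.

    The 'work' is the brute-force search: difficulty is the number of
    leading zero hex digits required of the SHA-256 digest.
    """
    nonce = 0
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(data: bytes, nonce: int, difficulty: int) -> bool:
    """Verification is cheap: one hash, regardless of search cost."""
    digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = proof_of_work(b"A pays B 5", difficulty=4)
assert verify(b"A pays B 5", nonce, 4)
```

The asymmetry is the point: finding a valid nonce takes many hash attempts, but any network participant can verify the result with a single hash.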
Transparency: According to publications we reviewed, DLT has the potential to facilitate transparency between financial institutions, regulators, and other financial market participants. DLT can increase transparency between participants by creating a shared record of activity to which participants have access in real time. Changes by any participant with the necessary permission to modify the ledger are immediately reflected in all copies of the ledger. Because distributed ledgers can be designed to be broadly accessible and verifiable, the technology could enhance financial market transparency. Efficiencies: According to publications we reviewed, DLT can enhance efficiencies in securities and payment clearing and settlement times. Specifically, DLT has the potential to reduce settlement times for securities transactions by facilitating the exchange of digital assets during the same period of time as the execution of a trade. According to staff from SEC, while the financial services industry is moving toward shortening settlement cycles, DLT may offer efficiencies should it be deployed in securities clearance and settlement functions. In 2015, SEC requested comments on how blockchain technology could facilitate the role of a transfer agent and, separately, in 2016 requested comments on the utility of DLT in shortening the settlement cycle for most broker-dealer securities transactions. In addition, conducting international money transfers through DLT can provide real-time settlement. Like most new technologies, DLT can pose certain risks and uncertainties, which market participants and financial regulators and agencies will need to monitor. Operational risk, including security risk: According to a publication by the Board of Governors of the Federal Reserve System, operational failures include errors or delays in processing, system outages, insufficient capacity, fraud, and data loss and leakage.
According to a FINRA report, given that DLT involves the sharing of information over a network, it poses security-related risks. The Financial Stability Oversight Council noted that market participants have limited experience working with distributed ledger systems, and it is possible that operational vulnerabilities associated with such systems may not become apparent until they are deployed at scale. According to officials from CSBS, permissionless DLT presents security risks (e.g., anti-money-laundering and Bank Secrecy Act compliance risks) that can be mitigated. Publications we reviewed suggest some financial institutions have taken several approaches to adopting DLT. For example, some financial institutions have initiated blockchain projects, joined a multiparty consortium, or announced partnerships to examine DLT's potential. In addition, the largest securities depository and a large stock exchange have used DLT. According to the World Economic Forum, 80 percent of banks are expected to initiate blockchain projects by 2017. The R3 industry consortium, made up of over 50 financial institutions, designed Corda, a DLT platform for recording and managing financial agreements. The Depository Trust and Clearing Corporation proposed to build a derivatives distributed ledger solution for post-trade processing. Through this initiative, the Depository Trust and Clearing Corporation seeks to reduce costs and increase efficiencies in the post-trade process. In December 2015, the stock exchange Nasdaq enabled its first trade on a blockchain using its Linq ledger through a private blockchain developer. Nasdaq Linq is a digital ledger technology that leverages a blockchain to issue and record transfers of shares of privately held companies. Continued development of DLT is needed to understand how DLT and its components will be regulated by the existing legal and regulatory system.
Additionally, it is unclear whether new regulation will need to be created, because DLT can present new and unique challenges. According to the Financial Stability Oversight Council, financial regulators should monitor how a DLT network can affect regulated entities and their operations. Representatives of financial regulators have noted the importance of implementing DLT in a manner that is transparent and satisfies regulatory requirements. With respect to virtual currencies, federal and state regulators have taken varied approaches to regulation and oversight. For example, in 2015, CFTC stated it considers bitcoin and other virtual currencies to be included in the definition of "commodity" under the Commodity Exchange Act. SEC's Office of Investor Education and Advocacy has stated that the rise of bitcoin and other virtual and digital currencies creates new concerns for investors. Two bureaus within the Department of the Treasury treat bitcoin in different ways: the Financial Crimes Enforcement Network (FinCEN) determined that certain virtual currency businesses would be money transmitters under the Bank Secrecy Act, subject to regulation as money services businesses, while the Internal Revenue Service treats bitcoin as property for U.S. federal tax purposes. FTC can apply the Federal Trade Commission Act to combat unfair or deceptive acts or practices in or affecting commerce, which includes virtual currencies. In addition, approximately 44 states have issued licenses to companies that use virtual currency in their business model. The existing regulatory complexity for virtual currencies indicates that regulatory approaches for future applications of DLT will also be complex. According to interviews we conducted, some agencies and one industry association formed working groups to monitor and understand DLT and virtual currencies. These examples are listed below.
In 2015, CFTC formed a working group on blockchain, distributed ledger technology, and virtual currencies to study their application to the derivatives market and promote understanding and communication across the agency. In 2017, the group broadened its focus to cover other aspects of fintech and changed its name to the FinTech Working Group. In 2016, the Federal Reserve established a working group that is looking at financial innovation across a broad range of responsibilities, including in payments and market infrastructures, supervision, and financial stability. In November 2013, SEC formed an internal Digital Currency Working Group to build expertise; identify emerging risk areas for potential regulatory, examination, and enforcement action; and coordinate efforts within SEC in the digital and virtual currency space. In November 2016, the group changed its name to reflect that its efforts had expanded beyond digital and virtual currencies into related distributed ledger technologies and their applications. According to SEC staff, the Distributed Ledger Technology Working Group plans to evaluate when and how distributed ledger technology will be used within the securities market. In 2016, FDIC established the FinTech wholesale working group of intra-agency experts to monitor work in the areas of DLT, blockchain, and smart contracts. In 2015, the Chamber of Digital Commerce formed an alliance to provide technical assistance and periodic informational sessions on Bitcoin, other digital currencies, and broader uses of blockchain. We provided a draft of this report for review and comment to CFPB, CFTC, CSBS, FDIC, the Federal Reserve, FINRA, FTC, NCUA, OCC, SBA, SEC, and Treasury. We incorporated technical comments we received from these agencies, as appropriate. In addition, we received written comments from NCUA and CSBS, which are summarized below and reprinted in appendixes II and III.
In its written comments, NCUA acknowledged that regulators face challenges understanding the risk of the rapidly evolving financial technology industry and the challenge of balancing regulations and guidance to address those risks against stifling innovation. NCUA noted that it continues to evaluate risks and monitor the evolving market impact driven by fintech companies and to indirectly supervise activities through credit unions to the extent possible. In its written comments, CSBS noted that it had formed a task force to study fintech developments and determine the potential impact on consumer protection, state law, and banks and nonbank entities chartered or licensed by the states. CSBS also provided additional information about the state regulatory system for marketplace lending, mobile payments, and distributed ledger consumer products while noting that the states actively license and supervise companies engaged in these services. CSBS also noted that the states have work under way to improve the Nationwide Multistate Licensing System with a technological overhaul to improve compliance with state licensing requirements. We are sending copies of this report to the congressional requesters, agencies, and other interested parties. In addition, this report will be available at no charge on our website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Regulation of financial technology (fintech) firms depends on the extent to which the firms provide a regulated service and the format in which the services are provided. 
Table 3 explains the basic functions of federal and state regulators and agencies with oversight responsibilities related to the following subsectors: marketplace lending, mobile payments, digital wealth management, and distributed ledger technology. GAO staff who made major contributions to this report include Harry Medina (Assistant Director), Lauren Comeau (Analyst in Charge), Namita Bhatia-Sabharwal, Chloe Brown, Pamela Davidson, Janet Eackloff, Cody Goebel, Davis Judson, Silvia Porres, Tovah Rom, Jessica Sandler, and Jena Sinkfield. Accenture. The Rise of Robo-Advice: Changing the Concept of Wealth Management, 2015. Becker, Krista. Mobile Payments: The New Way to Pay? Federal Reserve Bank of Boston Emerging Payments Industry Briefing, February 2007. BlackRock. Digital Investment Advice: Robo Advisors Come of Age, September 2016. Board of Governors of the Federal Reserve System. Consumers and Mobile Financial Services 2016. March 2016. Board of Governors of the Federal Reserve System. Consumer Compliance Outlook, Fintech Special Edition, 3rd ed. Philadelphia, Pa.: 2016. Chamber of Digital Commerce, Smart Contracts Alliance. Smart Contracts: 12 Use Cases for Business & Beyond. Washington, D.C.: December 2016. Congressional Research Service. Marketplace Lending: Fintech in Consumer and Small-Business Lending. September 6, 2016. Consumer Financial Protection Bureau. Project Catalyst report: Promoting consumer-friendly innovation. Washington, D.C.: October 2016. Crowe, Marianne, Susan Pandy, David Lott, and Steve Mott. Is Payment Tokenization Ready for Primetime? Perspectives from Industry Stakeholders on the Tokenization Landscape. Federal Reserve Bank of Atlanta and Federal Reserve Bank of Boston, June 11, 2015. Department of the Treasury. Opportunities and Challenges in Online Marketplace Lending. May 10, 2016. Deloitte.
Digital Disruption in Wealth Management - Why Established Firms Should Pay Attention to Emerging Digital Business Models for Retail Investors, 2014. EY. Advice Goes Viral: How New Digital Investment Services Are Changing the Wealth Management Landscape, 2015. Federal Deposit Insurance Corporation. Supervisory Insights, Marketplace Lending. Winter 2015. Federal Deposit Insurance Corporation. Supervisory Insights, Mobile Payments: An Evolving Landscape. Winter 2012. Federal Deposit Insurance Corporation. 2015 FDIC National Survey of Unbanked and Underbanked Households. October 20, 2016. Federal Deposit Insurance Corporation. Opportunities for Mobile Financial Services to Engage Underserved Consumers Qualitative Research Findings. May 25, 2016. Federal Reserve Bank of Cleveland. Click, Submit: New Insights on Online Lender Applications from the Small Business Credit Survey. Cleveland, Ohio: October 12, 2016. Federal Trade Commission Staff Report. Paper, Plastic…or Mobile? An FTC Workshop on Mobile Payments. March 2013. Financial Industry Regulatory Authority. Report on Digital Investment Advice. March 2016. Financial Industry Regulatory Authority. Distributed Ledger Technology: Implications of Blockchain for the Securities Industry. January 2017. Financial Stability Oversight Council. 2016 Annual Report. Washington, D.C.: June 21, 2016. GAO. Person-to-Person Lending: New Regulatory Challenges Could Emerge as the Industry Grows, GAO-11-613. Washington, D.C.: July 7, 2011. GAO. Virtual Currencies: Emerging Regulatory, Law Enforcement, and Consumer Protection Challenges. GAO-14-496. Washington, D.C.: May 29, 2014. GAO. Financial Regulation: Complex and Fragmented Structure Could be Streamlined to Improve Effectiveness, GAO-16-175. Washington, D.C.: February 25, 2016. GAO. Data and Analytics Innovation: Emerging Opportunities and Challenges, Highlights of a Forum, GAO-16-659SP. Washington, D.C.: September 2016. International Organization of Securities Commissions.
IOSCO Research Report on Financial Technologies (Fintech). February 2017. McQuinn, Alan, Weining Guo, and Daniel Castro. Policy Principles for Fintech, Information Technology & Innovation Foundation, October 2016. Mills, David, Kathy Wang, Brendan Malone, Anjana Ravi, Jeff Marquardt, Clinton Chen, Anton Badev, Timothy Brezinski, Linda Fahy, Kimberley Liao, Vanessa Kargenian, Max Ellithorpe, Wendy Ng, and Maria Baird (2016). "Distributed ledger technology in payments, clearing, and settlement," Finance and Economics Discussion Series 2016-095. Washington: Board of Governors of the Federal Reserve System. Mills, Karen Gordon, and Brayden McCarthy. "The State of Small Business Lending: Innovation and Technology and the Implications for Regulation." Harvard Business School working paper 17-042 (2016). Office of the Comptroller of the Currency. Exploring Special Purpose National Bank Charters for Fintech Companies. Washington, D.C.: December 2016. Office of the Comptroller of the Currency. Comptroller's Licensing Manual Draft Supplement, Evaluating Charter Applications from Financial Technology Companies. Washington, D.C.: March 2017. Office of the Comptroller of the Currency. OCC Summary of Comments and Explanatory Statement: Special Purpose National Bank Charters for Financial Technology Companies. Washington, D.C.: March 2017. Pew Charitable Trusts. Who Uses Mobile Payments? Survey findings on consumer opinions, experiences. May 2016. Professor Mark E. Budnitz. Pew Charitable Trusts, The Legal Framework of Mobile Payments: Gaps, Ambiguities, and Overlap. February 10, 2016. Qplum. What is Robo-Advising. Jersey City, NJ: May 5, 2016. Segal, Miriam. Small Business Administration Office of Advocacy. Peer-to-Peer Lending: A Financing Alternative for Small Businesses, Issue Brief Number 10. Washington, D.C.: September 9, 2015. S&P Global Market Intelligence. An Introduction to Fintech: Key Sectors and Trends. October 2016. S&P Global Market Intelligence. 2016 U.S.
Digital Lending Landscape. Charlottesville, Virginia: December 2016. The Clearing House. Ensuring the Safety & Security of Payments, Faster Payments Symposium. August 4, 2015. The Conference of State Bank Supervisors and Money Transmitter Regulators Association. The State of State Money Services Businesses and Regulation and Supervision. May 2016. United Kingdom Government Office for Science. Distributed Ledger Technology: beyond block chain. December 2015. United States Postal Service, Office of the Inspector General, Blockchain Technology: Possibilities for the U.S. Postal Service, Report No. RARC-WP-16-011. May 23, 2016. World Economic Forum. The Future of Financial Infrastructure: An ambitious look at how blockchain can reshape financial services. August 2016.
Advances in technology and the widespread use of the Internet and mobile communication devices have helped fuel the growth in fintech products and services, such as small business financing, student loan refinancing, mobile wallets, virtual currencies, and platforms to connect investors and start-ups. Some fintech products and services offer the potential to expand access to financial services to individuals previously underserved by traditional financial institutions. GAO was asked to review a number of issues related to the fintech industry, including how fintech products and services are regulated. This report, the first in a series of planned reports on fintech, describes four commonly referenced subsectors of fintech and their regulatory oversight. GAO conducted background research and a literature search of publications from agencies and other knowledgeable parties. GAO also reviewed guidance, final rulemakings, initiatives, and enforcement actions from agencies. GAO interviewed representatives of federal agencies, including the federal prudential regulators, state supervision agencies, trade associations, and other knowledgeable parties. GAO is making no recommendations in this report. The financial technology (fintech) industry is generally described in terms of subsectors that have or are likely to have the greatest impact on financial services, such as credit and payments. Commonly referenced subsectors associated with fintech include marketplace lending, mobile payments, digital wealth management, and distributed ledger technology. Marketplace lenders connect consumers and small businesses seeking online and timelier access to credit with individuals and institutions seeking profitable lending opportunities. Marketplace lenders use traditional data, and may use less traditional data, together with credit algorithms to underwrite consumer loans, small business loans, lines of credit, and other loan products.
Mobile payments allow consumers to use their smartphones or other mobile devices to make purchases and transfer money instead of relying on the physical use of cash, checks, or credit and debit cards. There are different ways to make mobile payments, including the use of a mobile wallet. Digital wealth management firms use algorithms based on consumers' data and risk preferences to provide digital services, including investment and financial advice, directly to consumers. Digital wealth management platforms provide services including portfolio selection, asset allocation, account aggregation, and online risk assessments. Distributed ledger technology was introduced to facilitate the recording and transferring of virtual currencies, specifically using a type of distributed ledger technology, known as blockchain. Distributed ledger technology has the potential to be a secure way of conducting transfers of digital assets on a near real-time basis, potentially without the need for an intermediary. Regulation of these subsectors depends on the extent to which the firms provide a regulated service and the format in which the services are provided. For example, a marketplace lender may be subject to: federal regulation and examination by the Board of Governors of the Federal Reserve System, the Federal Deposit Insurance Corporation, and the Office of the Comptroller of the Currency in connection with certain services provided to depository institutions by the lender; state licensing and regulation in the states in which the lender conducts business; securities offering registration requirements administered by the Securities and Exchange Commission if the lender publicly offers securities; and/or enforcement actions by the Bureau of Consumer Financial Protection and the Federal Trade Commission for violations of certain consumer protection laws. To learn about the fintech industry, some agencies hosted forums, formed working groups, and published whitepapers and regulatory guidance.
The Congress enacted the Endangered Species Act in 1973 to conserve threatened or endangered plant and animal species. The act requires the Service to base its determination of whether a species is endangered or threatened solely on the basis of the best available scientific and commercial data. Available data includes biological or trade data obtained from scientific or commercial publications, administrative reports, maps or other graphic materials, or experts on the subject. Using the best available data, the act requires the Service to determine whether a species should be listed as threatened or endangered by analyzing its status based on the following five factors: present or threatened destruction, modification, or curtailment of a species' habitat or range; overuse for commercial, recreational, scientific, or educational purposes; disease or predation; inadequacy of existing regulatory mechanisms; or other natural or manmade factors affecting a species' continued existence. As of June 2003, the Service had listed 1,263 species in the United States as threatened or endangered. This total included 517 animal species and 746 plant species. The number of species listed per year has varied considerably, as shown in figure 1. There are also 558 foreign species listed as threatened or endangered. As of June 2003, the Service was in the process of listing 36 more species and had identified 251 species as candidates for listing. The act also requires the Service to designate critical habitat for listed species. Critical habitat is a specific geographic area that is essential for the conservation of a threatened or endangered species and that may require special management and protection. As of June 2003, 417 domestic species had critical habitat designated. The number of critical habitat designations per year has varied considerably, as shown in figure 2. The Endangered Species Act has provisions to protect and recover species once they are listed.
The act prohibits the “taking” of listed animal species by any party—federal or nonfederal. “Taking” or “take” means to harass, harm, pursue, hunt, shoot, wound, kill, trap, capture, or collect a listed species. Also, federal agencies must ensure that their activities, or any activities they fund, permit, or license, do not jeopardize the continued existence of a listed species or result in the destruction or adverse modification of its critical habitat. The act establishes a process for federal agencies to consult with the Service about their activities that may affect listed species. In addition, the act requires that the Service develop a recovery plan to reverse the decline of each listed species and ensure its long-term survival. A recovery plan may include a variety of methods and procedures to recover listed species, such as protective measures to prevent extinction or further decline, habitat acquisition and restoration, and other on-the-ground activities for managing and monitoring endangered and threatened species. To date, seven domestic species have been delisted due to recovery. (App. II provides additional information on the process used by the Service to protect listed species.) The Endangered Species Act requires the Service to use the best available scientific data when deciding to list species or designate critical habitat. The “best available” standard does not obligate the Service to conduct studies to obtain missing data, but it prohibits the Service from ignoring available data. The Service goes through an extensive series of procedural steps that involves public participation and review by outside experts to help ensure that it collects relevant data and uses it appropriately. Although the process alone is not sufficient to ensure the accuracy of the Service’s listing and critical habitat decisions, it generally ensures that the Service is using and considering the “best available” data. 
The Service follows a rigorous process in listing a species as endangered or threatened, designating critical habitat, or removing a species from the endangered and threatened list. The Service’s process includes following a rulemaking procedure, established by the Endangered Species Act, supported by additional procedures under Service regulations and guidance. The complete text of the proposed and final rules and related information (including a summary of data on which the proposal is based and a summary of comments received on the proposal) are published in the Federal Register, the government’s official publication for making public the regulations and legal notices issued by federal agencies. The act and regulations require the Service to provide an opportunity for public participation in the rulemaking process, notify affected states and local jurisdictions and invite comments from them and other interested parties, notify newspapers and professional journals, and hold at least one public hearing, if requested, within 45 days of publishing the proposal. Additionally, Service procedures provide for listing and critical habitat decisions to be reviewed internally to help ensure that the professional judgment that the Service’s scientists exercise when weighing and interpreting the collected data is sound and conforms to contemporary scientific theories and principles. The process to list a species begins either through a petition from an individual, group, or state agency or through the initiative of the Service (see fig. 3). When a petition is filed to list a species, the Service provides a copy of the petition to, and requests information from, appropriate state agencies and affected tribal governments. 
The Service uses the information that it receives from these parties (or that which is contained in the petition or otherwise readily available) to make its initial determination as to whether a species may be threatened or endangered, and if so, to proceed with data gathering and analysis. The act requires the Service to make this determination generally within 90 days of receiving the petition. If the Service determines that it should proceed, it conducts a “status review”—a review of all the available information on a species—to determine whether the species warrants protection under the act. To conduct the status review, the Service solicits comments and requests information from the general public (by publishing a notice in the Federal Register) and contacts affected local, state, tribal and federal agencies; interested conservation or industry groups; and scientific organizations or professionals interested in and/or knowledgeable about the species. The Service may also fund field surveys, museum research, and literature searches in order to compile available information. Service scientists who conduct status reviews told us that they often work closely with experts from other government agencies, academia, and elsewhere to help gather and interpret information. In some instances, the Service initiates a review of a species without a petition, for which it conducts a candidate assessment—similar to a status review—to identify available information. Within 12 months of receiving a petition for which the Service proceeded with a status review, the Service must determine whether the species’ listing is warranted. If a Service field office makes an initial determination that the listing is warranted, it prepares a proposed rule for publication in the Federal Register. Before the proposed rule is published, a draft receives considerable internal review by officials in the Service’s field, regional, and headquarters offices. 
The review by officials in the field and regional offices helps ensure the exercise of sound professional judgment. The field office that is responsible for the listing provides the appropriate regional office with the draft of the proposed rule and all supporting scientific information. Officials in the regional office review the proposed rule to ensure that scientific information supports the proposed rule. Regions are responsible for ensuring that the proposed rule is scientifically accurate and biologically and legally sound. Regional officials told us that the review is an opportunity for the region to identify information gaps and issues concerning how the information supports the conclusions. At the Service’s headquarters, the draft proposed rule is reviewed to ensure that it is consistent with other listing rules and complies with national policies. Either the Director of the Fish and Wildlife Service or the Assistant Secretary for Fish and Wildlife and Parks approves all proposed rules before publication. Upon publication, the public has at least 60 days to provide comments on a proposed rule. The Service may extend the public comment period and/or reopen it at a later date. Service officials told us that the public comment period is an opportunity to reach biologists, scientists, academicians, and advocacy groups that the Service may not have contacted previously. The Service also holds public hearings, if requested. At the end of the public comment period, the Service reevaluates all the data, including the comments received since the proposal was published, to determine whether the listing is still warranted. If not, the proposal will be withdrawn. The Service must publish its final decision within 12 months of its proposal. 
In cases when experts disagree on the accuracy or sufficiency of the available data concerning the proposed listing, or the release of additional information that may affect the outcome of the petition is expected, the proposal may be extended 6 months beyond the normal 12-month time frame. In the event that the listing is warranted, the Service prepares a final rule, incorporating appropriate changes based on the information received during the comment period. Final rules are subject to the same internal review process as proposed rules and are approved by either the Director of the Fish and Wildlife Service or Assistant Secretary for Fish and Wildlife and Parks before being published. The procedures for designating critical habitat are similar to those for listing a species. However, in designating critical habitat, the Service must also take into consideration the economic and other impacts of specifying any particular area as critical habitat. The Assistant Secretary of Fish and Wildlife and Parks approves critical habitat designations. Officials at all levels of the agency demonstrated familiarity with the requirements of the review process and stated that they believe it provides the general guidelines necessary to ensure the best available data are identified and properly interpreted. Field office officials noted that proposed and final rules are challenged internally to ensure they can withstand public scrutiny and that while rulemakings are initiated at the field level, extensive review ensures that the entire agency is on board before anything is finalized. Scientists and other agency personnel told us that they use the process to test the validity of their listing and critical habitat decisions. Some officials emphasized the crucial role that the experience and expertise of the Service's scientists play in ensuring that listing and critical habitat decisions are based on the best available science.
Peer review is considered to be the most reliable tool to ensure that quality science will prevail over social, economic, and political considerations in the development of a particular product or decision. Peer review—a routine component of science—can substantially enhance the quality and credibility of the scientific or technical basis for a decision. For regulatory decisions, peer review can provide for independent and expert analysis to complement the adversarial and political nature of rulemaking. While many federal agencies were already using peer review, the Office of Management and Budget (OMB) issued guidance in 2002 recommending that federal agencies utilize formal, independent external peer review (peer review by individuals outside of the agency) to ensure the quality of data and analytic results disseminated to the public. It also recommended that peer reviewers be selected primarily on the basis of their technical expertise, that they disclose any source of bias (either prior technical or policy positions or sources of personal and institutional funding from which they may benefit), and that peer review be conducted in an open and rigorous manner. Federal agencies have adopted a variety of peer-review practices, depending on the nature of the product or decision under review. As we reported in 1999, peer-review practices at federal agencies vary according to their intended use and form. According to OMB’s 2002 guidance, agencies should tailor the rigor and intensity of peer review in accordance with the significance of risk or management implications of the information involved. The form of peer review can range from informal consultations with agency colleagues not involved in the earlier stages of the project to formal external advisory panels, which can span several years and cost thousands of dollars. 
In addition, for each different form of peer review, there are multiple variations—the amount of time allocated for the review, the number of reviewers, and whether the review occurs internally or externally—all of which affect the overall time and cost required to conduct a review. In addition to its internal decision-making processes, the Service uses external peer review of listing and critical habitat decisions to ensure that the best biological and commercial information is being considered. The Service’s peer-review policy requires officials to solicit the opinions of three appropriate and independent experts regarding scientific data and assumptions supporting listing and critical habitat decisions. Peer reviewers are selected at the discretion of the field office scientists responsible for developing listing and critical habitat decisions. The reviewers, who may come from the academic and scientific community, tribal and other Native American groups, federal and state agencies, and/or the private sector, are selected on the basis of their independence and expertise on the species being considered, similar species, the species’ habitat, or other relevant subject matter. The Service’s scientists may ask peer reviewers to critique specific aspects of the proposed rule, such as the Service’s interpretation of a particular study, or they may ask reviewers to comment on the rule in its entirety. The Service’s peer-review policy generally appears to be appropriate for the circumstances in which it is used. Although other agencies may use more rigorous forms of peer review, such as convening a peer-review panel or a science advisory board, the Service’s peer-review process allows the Service to make listing and critical habitat decisions under relatively short time frames (the Service usually asks peer reviewers to perform their review during the public comment period—normally 60 days—while a peer-review panel may span several months or years). 
However, to help ensure the identification of complete and current information on a species and its habitat, the Service may contact experts during the status review. In addition, any decisions that are issued as "final" rules can later be reconsidered as circumstances warrant or new information becomes available. In fact, a species can be delisted if new information surfaces indicating that the original decision to list was not warranted. One limitation that the Service faces in getting an independent review is the scarcity of experts on a particular species. For example, in some instances, the most qualified experts to peer review a decision may have authored some of the studies that the Service used to support its decision, forcing the Service to balance expertise with independence. However, according to a National Academy of Sciences report that reviewed the Environmental Protection Agency's use of peer review for similar actions, choosing a reviewer who is both an expert and independent might be impossible and might not promote the best possible review. In such cases, an appropriate balance of views may be sought to ensure that different interpretations of the scientific and technical merit of a decision are taken into consideration. Such cases should, however, be fully disclosed. Other organizations have developed procedures for assessing the independence of peer reviewers, ranging from simply requiring peer reviewers to disclose any potential bias, to using third parties to identify peer reviewers based, in part, on their independence. Service officials told us that they have not adopted a formal procedure to assess peer reviewers' independence, and the Service does not publicly disclose in the Federal Register potential conflicts or prior involvement by its peer reviewers when the Service publishes the final rule.
The Service generally complied with its peer-review policy of soliciting peer review from at least three reviewers during fiscal years 1999 through 2002. During this time, the Service solicited three or more peer reviewers in 94 out of the 100 listing and critical habitat decisions it made. In three instances the Service solicited fewer than three peer reviewers, and in three other instances documentation was unavailable to indicate how many reviewers were asked. (See app. III for a complete list of the decisions with the number of peer reviewers solicited, the number that responded, and how they responded.) While the Service generally complied with its policy to seek peer reviewers, reviewers often did not respond. As shown in figure 4, the Service received responses from three or more peer reviewers in 38 decisions for which it solicited at least three peer reviewers. It received either one or two responses in 41 decisions, and no responses in 15 decisions. Field office scientists, as well as an expert on peer review, reported a variety of reasons for the limited number of responses, including (1) the potential peer reviewers had busy schedules and felt constrained by the short time frames allotted to conduct the review, and (2) the potential reviewers were unwilling to conduct peer review either because they did not want to become involved in a controversial decision or because they did not want to work without compensation. In addition, the field office scientists reported that potential peer reviewers may not be inclined to conduct peer review because they found nothing to criticize or had already provided comments at an earlier stage of the decision, such as during the status review. Recognizing the importance of peer review, some regional and field offices have taken steps to increase the number of respondents. 
For example, some field offices contact potential peer reviewers in advance, rather than initiating contact just before the decision is open for peer review; others maintain communication with the peer reviewers throughout the process. For example, the Pacific Islands field office in Honolulu, Hawaii, has assigned an administrative staff person to initiate phone calls and E-mails to help remind and encourage peer reviewers to respond. This staff person also monitors the implementation of the peer-review policy and tracks results. In order to increase the likelihood that at least three peer reviewers respond to a request, some field offices request peer reviews from more than three individuals. Field office scientists suggested other ways to increase the response rate, such as providing monetary compensation, using a third party to select and coordinate peer review, narrowing the scope of the review, and providing more time for review. External reviews of listing and critical habitat decisions indicate that most decisions are generally scientifically supported, but concerns about the adequacy of critical habitat determinations remain. Listing decisions are often characterized as straightforward, requiring the Service to answer only a “yes or no” question as to whether a species warrants inclusion on the threatened or endangered list. Critical habitat designations, on the other hand, are more complex and often require further information on the species’ habitat requirements and other management considerations. Peer reviewers often expressed concerns about the specific areas designated as critical habitat, while other experts expressed concerns about the adequacy of the information available to make the designation. Experts and others have found most of the Service’s listing decisions to be scientifically supported. 
Experts knowledgeable about the Endangered Species Act and recent studies assessing the Service’s use of science in making listing decisions concur that the Service’s listing decisions are generally supported. Similarly, experts not affiliated with the Service have peer-reviewed proposals to list species and overwhelmingly supported the Service’s decisions. The courts have overturned few listing decisions on the basis of inadequate science, and the Service has delisted few species on the basis of new information that suggested that protection under the act was not originally warranted. Experts, Service officials, and others knowledgeable about the Endangered Species Act largely agree that most listing decisions have been relatively straightforward and scientifically supported. Experts and others we spoke to generally agreed that most listed species probably deserved being listed under the current standard for best available scientific information. For example, several attorneys, who represent the regulated community in challenges to the Service’s decisions, stated that, given the Service’s short time frames and limited resources, the science used to support most listing decisions did not present a significant problem. However, these attorneys and others contend that the “best available data” standard does not provide enough certainty that a species is threatened or endangered and suggest that a more stringent standard should be developed. On the other hand, interested parties representing a diverse set of interests raised concerns that Service officials at the Headquarters level are succumbing to political pressures to not list species despite support from regional and field scientists who believe evidence shows that listing is warranted. Service scientists told us they believe many listed species have low populations and/or face clearly identified threats, indicating that the species are at risk. 
They said that many listing decisions have been made to protect species native to a specific area, with a narrow range, or for which substantial scientific information was already available or easy to collect. On the other hand, the scientists noted that collecting information becomes more difficult and costly when a wide-ranging species may be at risk. Additionally, several scientific disagreements regarding listing decisions have surfaced in recent years, mostly concerning whether the amount of information available at the time a decision is made suffices as a basis for a decision. (See app. IV for information on the nature of scientific controversy surrounding the Service’s decisions to list species.) Finally, many of the experts we spoke with had concerns about the science used to support other aspects of the act, such as recovery actions or consultations with federal entities on proposed actions that could potentially harm a listed species. Several studies have supported the Service’s use of science in making listing decisions. The Ecological Society of America—a professional society of ecologists representing ecological researchers in more than 60 countries—released a study on the use of science in achieving the goals of the act that concluded that the major problem with the listing process has been its slowness rather than the quality of the listing decisions. The National Research Council (NRC) reached similar conclusions in a 1995 report, finding that many of the conflicts and disagreements over the Endangered Species Act do not appear to be based on scientific issues. More recently, in 2002, NRC reviewed the genetic evidence used to support one particular listing decision, the listing of the Gulf of Maine Atlantic salmon distinct population segment. It concluded that Maine salmon are genetically distinct from other salmon, supporting the Service’s decision to list the species. 
The Service received 143 peer-review responses for 54 of the 63 listing decisions finalized between fiscal years 1999 and 2002 and no responses for the remaining 9 decisions (see app. III). In 48 of these decisions, reviewers providing comments unanimously agreed with the Service's scientific conclusions or otherwise indicated support for the decision to list the species. In two decisions, the Service reported that one peer reviewer's opinion was "neutral," and the rest of the opinions were supportive. In two other decisions, we were unable to determine the nature of one peer reviewer's response. Peer reviewers disagreed with the Service in the following two decisions: Alabama sturgeon. One of the five reviewers to provide comments on the proposal to list the Alabama sturgeon, a freshwater fish historically found throughout the Mobile River basin of Alabama and Mississippi, disagreed with the Service's proposed listing determination. While the reviewer did not directly respond to the Service's request for peer review, he did provide comments at one of the public hearings regarding the proposed rule. The reviewer argued that the Alabama sturgeon was not a valid species given the fish's morphological (i.e., physical appearance such as color pattern, shape, and scale patterns) and genetic evidence. The other four reviewers responding to the proposed rule supported the validity of the Alabama sturgeon as a species. Desert yellowhead. One of two reviewers who provided comments on the proposed rule to list the desert yellowhead (a flowering plant that occurs in Wyoming) agreed that the species was rare and in need of protection, but did not agree that listing the species under the act was the appropriate mechanism. The other reviewer supported listing the plant. The Service's actions and inactions under the act are frequently challenged in the courts.
In hearing such challenges, courts must defer to agencies in judging actions, such as listing decisions, and must not substitute their judgment for an agency’s, especially on technical matters. As a result, courts will uphold an agency decision when it is evident that the agency considered the relevant facts and articulated a rational connection between those facts and its decision. Partly because of the deference granted to the Service in making listing determinations, most litigation has not directly challenged the Service’s use of science. Instead, according to an official from the Department of the Interior’s Office of the Solicitor, most litigation revolves around definitional or procedural issues, such as the Service’s failure to meet statutory time frames. The official said that litigants often challenge decisions on nonscientific aspects of the act because they feel this provides them with a stronger case. Thus, the fact that the courts have rarely ruled against the Service on the basis of inadequate science is not necessarily an affirmation that the Service used the best available science. Based on a review of federal court cases decided during fiscal years 1999 through 2002, we identified 17 cases in which a court issued an opinion related to the Service’s listing decisions. The Service lost 11 of these cases, mostly because it failed to take certain actions regarding decisions to list or not to list a species within the time allotted by the act. However, the courts overturned listing decisions on the basis of issues related to the use of scientific data in the following two cases: Sacramento splittail. In 2000, a federal court ruled that the decision to list the Sacramento splittail was not supported by the best scientific data available. The splittail is a large fish with a distinctive tail and is native to California’s Central Valley. 
Regional water authorities challenged the listing of the splittail on scientific grounds, asserting, among other things, that the Service ignored an important study indicating resiliency and an increasing abundance of the splittail. The court rejected the Service’s arguments that these data were not submitted in time to be considered and were irrelevant, and found there to be no indication that the Service considered substantial evidence that suggested that the splittail should not be listed. The court thus concluded that the Service had failed to consider all available data. The Service is in the process of reevaluating this listing rule. Westslope cutthroat trout. In 2002, a federal court ruled that the Service’s decision not to list the Westslope cutthroat trout was not supported by the best scientific data available. The Westslope trout is one of 14 subspecies of cutthroat trout native to streams in the western United States. In its decision not to list the trout, the Service identified hybridization (the breeding with other species of trout) as one of the threats to the species, but included these hybrid fish in the population considered for listing. The court noted that if hybridization were a “threat” to the species, it would seem logical that hybrid fish should not be included in the population under consideration. After explaining that the identification of the existing population of the trout was vital to the ultimate listing determination, the court found that the record failed to offer a rationale for including hybrid stocks in the population that it considered for listing, and concluded that the Service had ignored existing scientific data for assessing the degree of hybridization that may be appropriate to include in the population. The court remanded the case to the Service for reconsideration. 
The Service lost the following two cases because it failed to assess whether the species was imperiled throughout “a significant portion of its range.” Flat-tailed horned lizard. In 2001, an environmental group successfully challenged the Service’s decision not to list the flat-tailed horned lizard, a small lizard found in desert lands in the southwestern United States. In reaching its decision, the Service concluded that regardless of the threats to the lizard on private lands, large populations of the lizard and areas of its habitat were already protected under a conservation agreement on public lands and that the species was sufficiently protected from further threats. The court found that the Service should have performed an analysis to determine whether the private lands constituted “a significant portion of range” and, if so, whether the lizard was or would become extinct in that area. The court remanded the case to the Service for those determinations. Queen Charlotte goshawk. In 2002, an environmental group successfully challenged the Service’s decision not to list the Queen Charlotte goshawk, a forest-dwelling bird of prey found throughout North America. In reaching its decision, the Service considered the goshawk’s presence in southeast Alaska, the Queen Charlotte Islands, and Vancouver Island in Canada. The Service found that the goshawk was not threatened or endangered in southeast Alaska or the Queen Charlotte Islands, but the Service did not make a determination regarding the goshawk’s status on Vancouver Island. The Service contended that the goshawk’s status on Vancouver Island did not matter because that area did not represent a significant portion of the goshawk’s range. The decision in this case upheld the Service’s determination regarding southeast Alaska and the Queen Charlotte Islands, finding that the Service had properly used the best available science. 
However, the decision went on to conclude that Vancouver Island represented a significant portion of the goshawk's range and that the case should be remanded to the Service to determine whether the goshawk was threatened or endangered on Vancouver Island. In addition to removing recovered or extinct species from the list of threatened or endangered species, the Service can also delist a species if new information becomes available to show that protection under the act is not warranted. Typically, listing a species generates widespread attention to the species, additional funding for its study, and further research relating to the species or its habitat. As additional information is gathered, the Service or interested parties can initiate a delisting action if they believe the species no longer qualifies for listing. The Service follows similar rulemaking procedures to delist a species as for listing. Since the inception of the Endangered Species Act, the Service has delisted few species. As of March 2003, the Service had delisted 25 threatened and endangered domestic species of the more than 1,200 listed. Of the 25 delistings, 10 resulted from new information—4 because new information showed the species to be more widespread or abundant than believed at the time the species was listed, and 6 for taxonomic revisions, meaning that the species was found not to be unique, but was a hybrid or simply a population of another common species, making it ineligible for listing (see table 1). The remaining 15 delistings resulted from recovery efforts (7), extinction (7), or an amendment to the act that made the species no longer qualify for listing protection (1). While external reviews indicate that the Service bases most critical habitat decisions on the best available science, concerns remain over the adequacy of the information available to support the decisions.
Experts and others we spoke to explained that the amount of scientific information available on a species' habitat needs often may be limited, affecting the Service's ability to adequately define the habitat area required. Experts who peer reviewed proposed critical habitat designations generally supported the Service's decisions, though many provided additional clarifications or suggestions. While the courts have overturned few critical habitat decisions on the basis of inadequate science, scientific disagreements over these decisions continue. Experts and others knowledgeable about the Endangered Species Act have expressed concerns about the Service's ability to designate critical habitat for some listed species given the amount of information available on the species' habitat needs. Unlike listing decisions, which are more straightforward—requiring the Service to answer only a "yes or no" question as to whether a species warrants listing—critical habitat decisions often require more detailed knowledge about a species' life history and habitat needs and call for the Service to factor in the species' special management needs in addition to the economic impacts of the designation. Service officials, experts, and others we spoke to agreed that the amount of scientific information available is limited and often affects the Service's ability to adequately define the habitat essential to the species' conservation. While some interested parties stated that the Service designated areas too broadly and included lands unsuitable for several species, others said that the Service did not designate enough habitat for some listed species. According to Service officials, the resource and time constraints under which the Service's scientists work often preclude them from collecting new information and, as a result, the information available may limit their ability to produce adequate critical habitat designations for some species.
We found that most scientific disagreements surrounding recent critical habitat designations concerned whether the area chosen as critical habitat is sufficiently defined or whether the overall information used to support the designation is adequate. (See app. IV for information on the nature of scientific controversy surrounding the Service’s decisions to designate critical habitat for listed species.) In order to increase the amount of information available on which to base critical habitat designations, the Service and others, including the National Research Council, have recommended delaying designations until recovery plans are developed. The Service received 69 peer-review responses for 27 of the 37 critical habitat decisions finalized during fiscal years 1999 through 2002; it received no responses for 10 decisions (see app. III). Reviewers providing comments in 17 of these decisions unanimously agreed with the Service’s scientific conclusions or otherwise indicated support for the critical habitat designation. In six decisions, while not stating explicit agreement with the Service’s use of science, the reviewer did not identify any major inadequacies or reasons for substantially modifying the proposed habitat. In another decision, the Service reported that five peer reviewers supported the decision and one was “neutral.” One or more peer reviewers disagreed with the Service’s proposed critical habitat designations for the remaining three decisions: Zapata bladderpod. The one reviewer responding to the proposed critical habitat designation of the Zapata bladderpod, a flowering plant that grows in Texas, stated that the areas selected on state and private lands were too small to support viable populations or the area was not always suitable habitat for the species. The reviewer also said it was premature to select those sites given the lack of information about the species. Cactus ferruginous pygmy-owl. 
The one reviewer responding to the proposed critical habitat designation for the cactus ferruginous pygmy-owl, a small bird found in the southwestern United States, disagreed with the designation on the grounds that there were too many unknowns about the species' habitat requirements to support a determination about its critical habitat. Newcomb's snail. Two of the six reviewers responding to the Service's proposed critical habitat determination for the Newcomb's snail (found only on the island of Kauai, Hawaii) disagreed with the proposed rule— the other four supported it. One of the reviewers who disagreed stated that there was inadequate information to make a determination because habitat requirements for the snail were limited to generalized observations in the field and thus were speculative. The reviewer said the designation did not identify the habitat features essential to the conservation of the species and was premature until additional biological information was obtained. Similarly, the other reviewer objecting to the determination did so largely because of his understanding that the process was based on few published scientific studies, and much was still unknown about the species' habitat requirements. Even though peer reviewers may have concurred with the Service's critical habitat designation, many provided clarifications or suggested modifications. We analyzed the peer reviewers' responses for 16 of the 27 critical habitat decisions the Service made. There were 35 peer-review responses to these 16 decisions. Nearly all of the reviewers provided specific clarifications on information contained in the rule or suggestions for altering the habitat area selected. For instance, in many of the responses, the reviewer agreed with the proposal in general, but stated that additional lands should be included in the critical habitat designation and cited scientific reasons for increasing habitat areas.
In one decision, a reviewer generally supporting the proposed critical habitat of the arroyo toad (an endangered toad found in coastal and desert drainages in California) identified specific areas where he believed the toad ranged more widely and would therefore warrant additional critical habitat. Another reviewer, generally supporting the proposed critical habitat for the Great Lakes population of the piping plover (a small shorebird that occurs across North America), identified sites she believed should be added to the designation and areas she believed to be unsuitable for the species and therefore should be excluded from the designation. As with listing decisions, and due in part to the deference the courts grant to the Service, most litigation has not directly challenged the Service’s use of science in making critical habitat determinations. Based on a review of federal court cases decided during fiscal years 1999 through 2002, we identified 11 cases in which a court issued an opinion regarding the Service’s critical habitat decisions. Most of these 11 cases dealt with nonscience issues, such as the Service’s failure to designate critical habitat for a listed species. However, the courts overturned critical habitat decisions on the basis of issues related to the use of scientific data in the following two cases: Rio Grande silvery minnow. In 2000, a federal court invalidated the critical habitat of the Rio Grande silvery minnow based in part on scientific grounds. Multiple groups, including the state of New Mexico, challenged the designation of critical habitat for the silvery minnow, a fish found exclusively in the Rio Grande River in the Southwest. The critical habitat designation for this fish consisted of a 163-mile stretch of the main stem of the Rio Grande River. 
The court ruled in favor of the plaintiffs because it found that the Service’s final rule had failed to (1) define with sufficient specificity what biological and physical features were essential to the species’ survival and recovery and (2) indicate where in each reach of the river such features existed. For example, the court said that the Service’s statement in the rule regarding the minnow’s need for “sufficient flowing water” provided vague generalities that stated little more than what is required for any fish species. As a result of this court ruling, the Service is in the process of redesignating critical habitat for this species. Cactus ferruginous pygmy-owl. In 2001, a court struck down the critical habitat designation for the cactus ferruginous pygmy-owl because, among other reasons, the designation was not supported by the best available scientific data. The final critical habitat for the pygmy-owl, a small bird found in the southwestern United States, consisted of over 700,000 acres of riparian and upland habitat in Arizona. The court noted that the determination of critical habitat is to be made on the basis of the “best scientific data available” and that this involves identifying geographic areas “essential to the conservation of the species.” The court then pointed out that systematic owl surveys had not yet been completed over the entire potential habitat in Arizona, and that the Service determined critical habitat by designating areas where the pygmy-owls had been sighted, areas that it thought would be consistent with the species’ known habitat, and all the land in between. The court also pointed out that, in addition to the areas actually occupied by the pygmy-owls, the Service had included areas where it thought they could live. The court appeared to conclude that, in order to include areas that were not presently occupied, the Service should have determined that such areas were in fact essential to the conservation of the species. 
Although the Service had already agreed to reconsider the economic analysis used in the critical habitat designation, the court concluded that a “broader reconsideration” of the critical habitat designation was necessary. The Service is in the process of redesignating critical habitat for the pygmy-owl. The Service’s critical habitat program currently faces a serious crisis that extends well beyond the use of science in making decisions. Litigation now dominates the program, leading the Assistant Secretary for Fish and Wildlife and Parks in the Department of the Interior to recently declare that the system for designating critical habitat is “broken” because it provides little conservation benefit while consuming significant resources. A key court case in 1997 invalidated the Service’s position on when critical habitat should be designated. The Endangered Species Act generally requires the Service to designate critical habitat for listed species unless the Service determines it is “not prudent,” and the Service’s regulations spell out that it is not prudent to designate critical habitat if doing so would not be “beneficial to the species.” As a result, prior to 1997, the Service had designated critical habitat for only 113 of the 1,023 domestic species that it had listed. The Service reasoned that designating critical habitat did not benefit the species because the benefits that critical habitat provided duplicated those benefits provided by listing the species. The 1997 court case invalidated the Service’s reasoning, ruling that the Service’s determination that it was not prudent to designate critical habitat for the coastal California gnatcatcher, a songbird unique to coastal southern California, was not justified. 
One of the reasons that the Service concluded that it was not prudent to designate critical habitat was that it believed such a designation would not appreciably benefit the species because most populations of gnatcatchers were found on private lands to which the act’s critical habitat protections would not apply. The court found that this reasoning improperly expanded what Congress had intended to be a narrow exception to designating critical habitat. The court concluded that the Service had disregarded “the clear congressional intent that the imprudence exception be a rare exception.” Since then, court orders and settlement agreements have compelled the Service to designate critical habitat for species for which it had previously determined that it was not prudent to do so. Subsequently, a 2001 court case led the Service to reconsider some of its critical habitat designations. The case involved the requirement of the act that the Service consider the economic impact of designating a particular area as critical habitat. According to the act, the Service may exclude areas from critical habitat if it determines that the benefits of excluding the area outweigh the benefits of including the area as critical habitat unless excluding it would result in the extinction of the species. For example, in 1997, the Service designated critical habitat for the southwestern willow flycatcher, a small bird that nests in riparian areas in the southwestern United States. Because the Service believed that designating critical habitat would not result in additional restrictions on activities beyond those resulting from listing the species, it reasoned that there would be no significant economic impact associated with designating critical habitat for the flycatcher. However, the court disagreed. 
It found that since the act clearly barred the Service from considering economic impacts in listing decisions, but required they be considered in critical habitat decisions, the Service was not free to ignore the economic impacts of listing a species when designating critical habitat for that species. The court held that the Service had to consider all of the economic impacts of a critical habitat determination, regardless of whether those impacts were also attributable to listing or other causes. Since this decision was issued, court orders and settlement agreements have prompted the Service to re-issue some critical habitat decisions to comply with this standard. Since these two court rulings, the Service’s critical habitat program has become dominated by litigation. Each critical habitat designation made since 1997 has resulted from a court order or a settlement agreement, and the Service expects that it will have to dedicate significant resources through fiscal year 2008 to comply with existing court orders and settlement agreements. The department believes that this flood of litigation over critical habitat designation is preventing the Service from undertaking what it deems to be higher priority activities, such as addressing the approximately 250 “candidate” species waiting to go through the listing process (listing and critical habitat activities are funded under the same line item in the department’s budget). Service officials noted that there are other court decisions that may cause additional problems for the program in the future. The Service has been aware of problems with its critical habitat program for a number of years. 
The Service noted significant problems with its critical habitat program in 1997, and in 1999 it issued a Federal Register notice announcing that its system for designating critical habitat was not working and soliciting comments on its intention to develop policy or guidance and/or to revise regulations or seek legislative corrections to clarify the role of critical habitat in conserving endangered species. In particular, the Service stated its intention to consider when critical habitat designation would provide additional protection beyond that provided by listing. The Service also announced its intention to streamline the process for designating critical habitat to be more cost-effective and in line with the amount of conservation benefit provided to the species. In particular, the Service declared that it needs to develop a much less labor-intensive process for describing the areas proposed for designation as critical habitat. The Service also stated that it can streamline and make more cost-effective the process to conduct the economic analyses required to designate critical habitat and that it can more efficiently conduct the analyses required under the National Environmental Policy Act. The Service also noted that critical habitat litigation and related court orders were consuming much of the resources devoted to listing and critical habitat, and delaying other activities that it considered higher priority, such as addressing petitions submitted by citizens, working with landowners on conservation projects, and completing final actions to list species. However, no additional guidance or revisions were issued, and the Service continues to follow the same unworkable system. The Department of the Interior recently echoed concerns with the Service’s critical habitat program and the limited conservation benefit it provides to species. 
In April 2003, the Assistant Secretary for Fish and Wildlife and Parks testified before Congress on the critical habitat program, stating that it is “broken” and in “chaos.” He noted that litigation support is consuming valuable resources and that complying with court orders and settlement agreements has sharply reduced the Service’s ability to prioritize its listing and critical habitat actions. Service scientists working in field offices expressed similar concerns to us about the critical habitat program, raising questions about the purpose of critical habitat and the designation process. An attorney in the Solicitor’s office told us that guidance would improve the Service’s critical habitat decisions and make the decisions more defensible in court in the future. Despite the long-standing concerns over the role and implementation of the critical habitat program, the Service has done little to resolve them. In a report issued in June 2002, we recognized the impact that litigation was having on the critical habitat program and recommended that the Service expedite its efforts to develop guidance on designating critical habitat for listed species to help reduce the influence of future litigation. Better guidance would help reduce the number of legal challenges to the Service’s critical habitat designations and allow the Service to better withstand legal challenges when they arise. While the Service agreed with our recommendation, it responded that work on critical habitat guidance had been delayed pending Service efforts to complete higher priority tasks, including court orders to complete listing and critical habitat decisions and did not commit to a schedule for issuing the guidance. 
An official with Interior’s Solicitor’s office told us that one factor limiting the agency’s ability to complete these tasks is the Service’s inability to devote significant listing and critical habitat resources to policy initiatives without risking contempt of court because such action would force the agency to divert resources away from activities required to comply with court orders. The Service’s critical habitat program faces a serious crisis because of extensive litigation that is consuming significant program resources. The Service has recognized this crisis for many years but has done little to address it. Accordingly, in June 2002, we recommended that the Service expedite its efforts to develop guidance on designating critical habitat to reduce the influence of future litigation. While the Service agreed with our recommendation, it has done little to develop this guidance. Service officials complain that they are locked in a vicious cycle, precluded from developing the guidance for fear of being held in contempt of court for diverting resources away from activities already required by existing court orders. While the Service clearly faces a dilemma, it is imperative that it clarify the role of critical habitat and develop guidance for how and when it should be designated, and seek regulatory and/or legislative changes that may be necessary to provide threatened and endangered species with the greatest conservation benefit in the most cost-effective manner. 
Because the Service’s critical habitat program faces serious challenges, we recommend that the Secretary of the Interior require the Service to provide clear strategic direction for the critical habitat program, within a specified time frame, by clarifying the role of critical habitat and how and when it should be designated, and recommending policy/guidance, regulatory, and/or legislative changes necessary to provide the greatest conservation benefit to threatened and endangered species in the most cost-effective manner. We provided the Department of the Interior with a draft of this report. The department did not provide comments on the draft. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of the Interior and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please call me at (202) 512-3841. Key contributors to this report are listed in appendix V. This report assesses the U.S. Fish and Wildlife Service’s consideration and use of science in its decisions to list species as threatened or endangered and to designate critical habitat. Specifically, we were asked to review the extent to which (1) the Service’s policies and practices ensure that listing and critical habitat decisions are based on the best available science and (2) outside reviewers have supported the scientific data and conclusions that the Service uses to make listing and critical habitat decisions. In no instance did we attempt to evaluate scientific data and render an opinion. For this evaluation, we define “science” as the collection and interpretation of biological information, such as the identification of a species and its habitat needs. 
This definition does not include the legal policies and definitions found in the law or used to implement or interpret the Endangered Species Act. However, we acknowledge that there is not always a clear distinction between the interpretation of biological information and the policies and definitions used to interpret the act. In meeting our first objective, we examined the Service’s decision-making process to determine the extent to which it would likely lead to decisions based on the best available science. We reviewed the Service’s policies and procedures related to how it makes these decisions and discussed the process and procedures with key officials at the Service’s headquarters and with staff in the Service’s regional and field offices to determine their knowledge of the process and how they implemented it. We also spoke with peer-review experts and examined the literature on the processes that organizations use to peer review their decisions and products to assess the reasonableness of the Service’s policy to peer review proposed listing and critical habitat decisions. In meeting both objectives, we obtained from the Service a list of the decisions to list species and designate critical habitat that the Service finalized during fiscal years 1999 through 2002. To verify the completeness of the provided list of decisions, we compared it with a second independent database maintained by the Service. We identified one decision that was not on the original list provided to us by the Service. We included that decision in our analysis. Based on this information, we identified 101 final decisions to list or designate critical habitat that were published in the Federal Register during fiscal years 1999 through 2002. 
We examined the Federal Register notices for the 101 decisions to determine (1) the extent to which the Service complied with its peer-review policy to request at least three peer reviewers to review each decision, (2) the number that reviewed each decision, and (3) whether or not the reviewer(s) supported the decision. In 61 of the 101 decisions, we extracted this information from the Federal Register. For the remaining 40 decisions, we contacted the 18 field offices responsible for the decisions and requested that they provide the peer-review documentation, including copies of the peer reviewers’ responses. The Service provided us with the missing information in all but seven instances; in five of those instances partial information was available. To assess the accuracy of the information reported in the Federal Register notices, we requested that the Service provide documentation for the peer-review information, including peer reviewers’ responses, for 8 of the 61 decisions for which complete information was available in the Federal Register notice. We selected these 8 decisions in the following way. In order to minimize the burden on the Service’s field staff, we limited our universe to the decisions that were the responsibility of the 18 field offices that we already intended to contact. These offices were responsible for 48 of the 61 decisions for which there was complete information in the Federal Register notice. We then randomly chose 1 decision from each of the three offices with the most decisions. Collectively these offices were responsible for 25 of the 48 decisions. We also randomly chose 5 of the remaining 23 decisions. We compared the documentation provided to us with the information in the corresponding Federal Register notices. We found no discrepancies. However, we did find minor discrepancies between other Federal Register notices and the documentation the Service provided to us. We reconciled these discrepancies. 
Additionally, based on a limited review, we found the Service’s procedures reasonable for ensuring that its database contains accurate information. For example, the Service regularly samples data recently added to the database for accuracy. We did not determine the extent to which any of the Service’s final decisions reflected the comments and opinions of the peer reviewers. In addition to determining whether peer reviewers supported the decision they reviewed, we performed a content analysis on the peer-review responses for 16 critical habitat decisions to more fully characterize the opinions of the peer reviewers. We chose to perform a content analysis on the responses to critical habitat decisions because these decisions are open-ended, requiring the Service to determine how much critical habitat to designate and where that habitat should be located. There were 35 peer-review responses for these 16 decisions. To determine how well the Service’s listing and critical habitat decisions are withstanding legal challenges to the science supporting those decisions, we used common legal research methods to identify federal court cases related to the Service’s listing and critical habitat decisions. We identified and reviewed 25 cases that were decided during fiscal years 1999 through 2002 that involved a challenge to a Service listing decision and/or critical habitat decision, and in which the court rendered a decision on the listing or critical habitat issue. To determine the extent to which the Service has delisted species because new scientific information surfaced indicating that listing was not originally warranted, we used information from the Service’s publicly accessible database. We included in our analysis all decisions to delist species from the inception of the act through March 2003. We compared this information with information published in the Federal Register. We found no discrepancies. 
Finally, to get a fuller understanding of the degree of scientific controversy regarding listing and critical habitat decisions, we solicited the opinions of experts and others and reviewed published studies. To illustrate the nature of scientific controversy regarding listing and critical habitat decisions, we developed a list of decisions for which there was some degree of scientific controversy. We developed this list by asking experts in the private, academic, government, and nonprofit sectors spanning the political spectrum to identify recent decisions that were particularly controversial due to scientific disagreements and briefly explain the nature of the controversy. We limited our analysis to decisions finalized during fiscal years 1993 through 2002. In addition, we asked each expert for the names of other experts who could help us develop our list. We stopped contacting experts when we began to get repetitive responses. We then identified common issues related to the controversies to characterize the types of disagreements involved with each of the decisions. We based this on the information provided by the experts and information published in the Federal Register. Finally, we presented the list of decisions and related information to officials at the Service and at the National Academy of Sciences for their opinions on the list of decisions and how we characterized them. The officials generally agreed with the information we presented. Additionally, in the course of our work, it became apparent that litigation was dominating the Service’s critical habitat program, and we discuss these circumstances in our report. Specifically, we describe how several key court cases are affecting the program. We performed our work from September 2002 through June 2003 in accordance with generally accepted government auditing standards. 
The Endangered Species Act was passed by Congress to provide a means to conserve the ecosystems upon which endangered and threatened species depend and to conserve and recover imperiled species. The act was passed in 1973 and replaced earlier laws, which provided for a list of endangered species but gave them little meaningful protection. While significant amendments were enacted in 1978, 1982, and 1988, the overall framework of the act has remained essentially unchanged. The Department of the Interior delegated its responsibility for the act to the U.S. Fish and Wildlife Service (Service), which established an endangered species program to implement the requirements of the act. The Service is responsible for all land-dwelling species, freshwater species, some marine mammals, and migratory birds. The Department of Commerce, which has delegated its responsibility to the National Marine Fisheries Service, is responsible for implementing the act for marine species including anadromous (both freshwater and ocean dwelling) fish. The act provides numerous provisions to protect and recover species at risk of extinction. However, before a plant or animal species is eligible to benefit from most of these provisions, it must first be added to the Federal List of Endangered and Threatened Wildlife and Plants. Once on the list, key provisions of the act, including critical habitat, recovery plans, consultations with federal agencies, and habitat conservation plans, are designed to assist in recovering the species so that it can then be removed from the list. Under the act, species may be listed as either endangered or threatened. An endangered species is any species of animal or plant that is in danger of extinction throughout all or a significant portion of its range. A threatened species is any species of animal or plant that is likely to become endangered within the foreseeable future throughout all or a significant portion of its range. 
All species of plants and animals (except pest insects) are eligible for listing as endangered or threatened. As of June 2003, there were a total of 1,821 listed species: 1,504 endangered species, 987 of which occur in the United States, and 317 threatened species, 276 of which occur in the United States. The decision to list a species must be based solely on the best available scientific and commercial data. Using these data, the Service must determine whether a species should be listed by analyzing its status based on the following factors: (1) current or threatened destruction, modification, or curtailment of a species’ habitat or range; (2) overutilization of the species for commercial, recreational, scientific, or educational purposes; (3) disease or predation; (4) inadequacy of existing regulatory mechanisms; and (5) other natural or manmade factors affecting the species’ continued existence. The Service follows a rigorous process to determine whether to list a species. A final decision to list a species is published in the Federal Register. The Service may issue emergency regulations to list a species without complying with the normal regulatory process if it finds that an emergency poses a significant risk to the well-being of any species. Emergency regulations take effect immediately upon publication in the Federal Register and are effective for 240 days. The Service also maintains a list of candidate species. Candidate species are species for which substantial information is available to support a listing proposal, but that have not yet been proposed for listing. 
The Service maintains this list for a variety of reasons, including (1) to provide advance knowledge of potential listings that could affect decisions of environmental planners and developers, (2) to solicit input from interested parties to identify those candidate species that may not require protection under the act or additional species that may require the act’s protections, and (3) to solicit information needed to prioritize the order in which species will be proposed for listing. The Service is required to publish a notice of review annually in the Federal Register to solicit new information on the status of candidate species. The Service works with parties, such as states and private partners, to carry out conservation actions—often called Candidate Conservation Agreements—for candidate species to prevent their further decline and possibly eliminate the need to list them as endangered or threatened. As of June 2003, there were 251 candidate species, many of which have held that status for more than a decade. The Service is generally required to designate critical habitat at the time a species is listed as endangered or threatened. Critical habitat is the specific geographic area essential for the conservation of a threatened or endangered species and that may require special management considerations and protection. Critical habitat contains physical and biological habitat features such as: (1) space for individual and population growth and for normal behavior; (2) cover or shelter, food, water, air, light, minerals, or other nutritional or physiological requirements; (3) sites for breeding and rearing offspring; and (4) habitats that are protected from disturbances or are representative of the historic geographical and ecological distributions of a species. Critical habitat may also include areas not occupied by the species at the time of listing but that are essential to the conservation and recovery of the species. 
Unlike the decision to list a species as endangered or threatened, a final designation of critical habitat is to be made on the basis of not only the best scientific data available but also taking into consideration the economic and other effects of making the decision. If the benefits of excluding an area outweigh the benefits of including it, the Service may exclude an area from critical habitat, unless the exclusion would result in the extinction of the species. The Service may take up to an additional year after listing a species to designate critical habitat if it finds that critical habitat is “not determinable.” Critical habitat is not determinable when information sufficient to perform the required analyses of the impacts of the designation of critical habitat is lacking or the biological needs of the species are not sufficiently known to permit identification of an area as critical habitat. The Service does not designate critical habitat if it determines that doing so would be “not prudent.” It would not be prudent to designate critical habitat if (1) identifying the habitat is expected to increase the threat to the species or (2) designating an area as critical habitat is not expected to benefit the species. Once a species is listed, the act requires the Service to develop a recovery plan for the species. Recovery plans identify, justify, and schedule the research and management actions necessary to reverse the decline of a species and ensure its long-term survival. Recovery plans must be developed for all listed species, unless such a plan would not benefit the species. Although the act does not specify time frames for developing or implementing the recovery plan or for recovering the species, the Service has a goal of developing recovery plans within 1 year and having approved plans within 2½ years of a species’ listing. 
The Service solicits comments from state and federal agencies, experts, and the public on draft recovery plans during a formal public comment period announced in the Federal Register. The Service periodically reviews approved recovery plans to determine if updates or revisions are needed. As of June 2003, 1,000 species had approved recovery plans. Federal agencies are required to consult with the Service if their actions may affect listed species. The goal of the consultation process is to identify and resolve conflicts between the protection and enhancement of listed species and proposed federal actions. The act requires that all federal agencies consult with the Service to ensure that any activities agencies permit, fund, or conduct are not likely to jeopardize the continued existence of a listed species or adversely modify its critical habitat. Federal agencies may informally consult with the Service to determine whether their actions may affect listed species and must proceed to formal consultations once they determine that their actions may adversely affect a listed species or its habitat. The act requires a formal consultation to be completed in 90 days, unless the Service and the federal agency mutually agree to an extension, with the applicant’s consent. The Service is to issue a “biological opinion” within 45 days of the conclusion of formal consultation that reviews the potential effects of the proposed action on listed species and/or critical habitat. The Service must base the biological opinion on the best available biological information. If the Service finds that the action would appreciably reduce the likelihood of the species’ survival and recovery, it issues a jeopardy biological opinion. Jeopardy opinions include reasonable and prudent alternatives that define modifications to the agency’s proposed action that enable it to continue and still be consistent with the act’s requirements for protecting species. 
Following the issuance of the biological opinion, the federal agency determines whether it will comply with the opinion or seek an exemption from the act’s requirements. Proposed federal agency actions that have been determined to cause jeopardy to any listed species may receive an exemption from the act by the Federal Endangered Species Committee (also referred to as the “God Squad”). The Endangered Species Committee is composed of seven members: the Secretary of Agriculture, the Secretary of the Army, the Chairman of the Council of Economic Advisers, the Administrator of the Environmental Protection Agency, the Secretary of the Interior, the Administrator of the National Oceanic and Atmospheric Administration, and one individual from the affected state. An exemption is granted if at least five members of the Endangered Species Committee determine that, among other things, the action is of regional or national significance, that the benefits of the action clearly outweigh the benefits of conserving the species, and that there are no reasonable and prudent alternatives to the action. The Endangered Species Committee has been convened only three times since its creation in 1978—the Tellico Dam for the snail darter fish in Tennessee, the Grayrocks Dam in Wyoming for the whooping crane, and Bureau of Land Management (BLM) timber sales for the spotted owl in Oregon. Only two exemptions were granted. One was in regard to the Grayrocks Dam and the other was to approve 13 timber sales sought by BLM (which was withdrawn before the completion of appeals). The Tellico Dam application was denied but the project was later allowed by Congress to proceed. In addition, three other applications were received but were subsequently dismissed or withdrawn before deliberations took place. The act generally prohibits any person from “taking” an animal species listed as endangered. 
“Taking” or “take” means to harass, harm, pursue, hunt, shoot, wound, kill, trap, capture or collect a listed species, and under Service guidelines, includes the destruction of the species’ habitat. However, in 1982, Congress amended the act to include a process whereby the Service may issue permits that allow private individuals to incidentally take listed species. Incidental take is the take of any federally listed species that is incidental to, but not the purpose of, otherwise lawful activities. Permit applicants are required to submit a habitat conservation plan, which includes measures the applicant will take to minimize and mitigate the impacts that may result from the taking. The Service is required to publish a notice in the Federal Register soliciting comments from interested parties on each application for a permit and its accompanying habitat conservation plan. As of April 2003, 416 habitat conservation plans have been approved. The act prohibits the Service from issuing a permit if doing so would appreciably reduce the likelihood of the survival and recovery of the species in the wild. The incidental taking of a listed species resulting from federal agency actions may also be allowed under the act and would be addressed through the consultation process. Based on discussions with Service officials, experts, and others knowledgeable about the Endangered Species Act, we found that several scientific disagreements over Service listing decisions have surfaced in recent years—mostly concerning whether the amount of information available at the time a decision is made suffices as a basis for a decision. 
Regarding critical habitat decisions, we found there has been scientific controversy over whether the areas chosen as critical habitat are sufficiently defined or whether the overall information used to support the designation is adequate. Although we found that scientific disagreements surrounding listing decisions are not widespread, some of the controversy in recent years can be categorized as "science-related." Experts and others working with the Endangered Species Act whom we spoke with identified 11 species for which there was significant scientific controversy surrounding the decisions to list them. Our discussions with these individuals and a review of related Federal Register notices revealed that the most common scientific disagreements hinge on whether enough information was available to determine (1) whether the plants or animals under consideration qualified as a "species" as defined by the act, (2) the status of the species, or (3) the degree of threat that the species faces. Critics of some listing decisions argued that the Service lacked information to determine whether the entity in question met the definition of a "species." The act defines a species as including "any subspecies of fish or wildlife or plants, and any distinct population segment of any species of vertebrate fish or wildlife which interbreeds when mature." There is general agreement within the scientific community as to what constitutes a species, and this has not been a major source of controversy in most listing decisions. Disagreements typically arise over whether entities that are genetically, morphologically, or behaviorally distinct, but not distinct enough to merit the rank of species, qualify for protection as a distinct population segment (DPS). Under Service policy, to be identified as a DPS, a population segment must be both discrete and significant.
In order to be discrete, the population must be markedly separate from other populations as a consequence of physical, physiological, ecological, or behavioral factors. If a population segment is considered discrete, its biological and ecological significance is then evaluated. This evaluation considers such factors as evidence that the loss of the population would result in a significant gap in the range of the species. For example, disagreement surrounded the decision to list the Sonoma County population of the California tiger salamander, a large terrestrial salamander that is native to California. According to critics of the listing decision, the results of genetic testing did not show the salamander to be distinct, or discrete, from other populations of the California tiger salamander, and therefore the population did not qualify as a DPS. The Service disagreed with the critics' interpretation of the data, stating that it believed the data referred to by the critics showed the salamander to be distinct from other populations. The Service said that additional sampling and genetic work provided further substantial evidence of the genetic discreteness of the population. Additionally, the Service relied on the salamander's geographic isolation in determining that the population qualified for protection as a DPS. Service policy also allows international governmental boundaries that delineate differences in the management of the species or its habitat to be used to determine whether a species meets the discreteness criterion. Some critics have argued against using international boundaries as a criterion to define a DPS. For example, critics of the decision to list the Arizona population segment of the cactus ferruginous pygmy-owl stated that the Service had no biological or regulatory authority to rely on international boundaries to draw a distinct population segment.
The pygmy-owl is a small bird that occurs in the southwestern United States and ranges south into Mexico. The Service recognizes that using international boundaries as a measure of discreteness may introduce a nonbiological element to the recognition of a distinct population segment. However, in its policy, the Service determined that it is reasonable to recognize units delimited by international boundaries when these units coincide with differences in the management, status, or exploitation of a species. In the case of the pygmy-owl, the Service reported that the status of the owl in the United States differs from that in Mexico and that Arizona is the only area within which the government of the United States can effect protection and recovery for the species, so it was appropriate to protect the pygmy-owl as a DPS. In its review of science and the Endangered Species Act, the National Research Council found that although it may be appropriate to delineate population segments based on political boundaries, there are no scientific reasons to do so because these boundaries do not always coincide with major natural geographic boundaries. To provide more scientific objectivity in identifying distinct population segments, the Council recommended that the Service define a distinct population segment based solely on scientific grounds and limit the definition to segments of biological diversity containing the potential for a unique evolutionary future. Such segments would be determined by looking at such factors as a population's morphology (or physical appearance), behavior, genetics, and geographical separation or isolation from other populations. Service officials agree that the inclusion of international boundaries in determining whether a population segment is discrete is sometimes undertaken as a matter of policy rather than science. However, the Service believes that using international borders is appropriate and necessary to comply with congressional intent.
When there are international boundaries that coincide with differences in the management, status, or exploitation of a species, as described above, the Service stated that it is appropriate to recognize these borders when making a listing determination. Scientific disagreement also surrounds the status of a species and the degree to which identified threats imperil it. When making a listing determination, the Service must evaluate a species' status, such as where it occurs or its population numbers, and the degree of threat it faces. The Service can determine that a species is threatened or endangered because of any of several factors, such as the destruction of habitat, disease or predation, or other natural or manmade factors affecting the species' survival. Several of the scientific disputes that we encountered centered on how widespread the species in question is or how intense or significant the threats to the species are. For example, state agencies commenting on the proposal to list the Canada lynx said that the rule failed to demonstrate that there had been significant reductions in the species' population. Critics of the rule said that the scientific information, which was largely in the form of one comprehensive report, failed to assess lynx population size, status, and trends. The Service agreed that the available information concerning lynx population status, trends, and historic range is limited. However, after reviewing historic and current records for both Canada and the United States, sightings and track records, personal communications with lynx, hare, and forest ecology experts, and all available literature, the Service said it was able to draw several conclusions about the status of the lynx and found that it warranted listing as threatened. Additionally, critics of the proposal to list the lynx claimed that the Service failed to demonstrate significant threats to the lynx's survival.
For example, some stated that there is little evidence to support claims that current management practices, including timber harvesting and human access, adversely affect the lynx. While the Service acknowledged the lack of quantifiable information to determine whether some of the possible threats have resulted or would result in lynx declines, it concluded that the factor threatening lynx in the contiguous United States is the lack of guidance in existing federal land management plans for the conservation of lynx and lynx habitat. Service officials told us that it is important to consider both the threats to and the status of a species when making a listing determination. For example, if only a species' population numbers were considered, the species might appear to be abundant; once the threats are factored in, however, the species might be threatened or endangered. On the other hand, if a species' numbers are low but the species faces no considerable threats, it may not warrant protection under the act.

Experts and others we spoke with identified 10 species for which there was scientific controversy concerning the decision to designate critical habitat. For example, one concern is whether the area chosen as critical habitat is sufficiently defined or whether the overall information used to support the designation is adequate. Most of the identified species are widespread or occur in rapidly developing areas, such as southern California. One of the major sources of disagreement is the way in which the Service identifies land to be included in critical habitat. The Service is required to designate as critical habitat those areas that it deems essential to a species' conservation and that may require special management considerations and protection.
To reach this conclusion, the Service describes the species' habitat needs for conservation, or the species' "primary constituent elements," such as nesting or spawning grounds, feeding sites, or areas with specific geologic features or soil types. The Service's regulations also require the delineation of critical habitat using reference points and lines as found on standard topographic maps of the area. The Service uses written descriptions and/or maps to outline the areas it considers critical habitat for a listed species. In some cases, when maps are used to outline the area, parts of the area that fall within the mapped boundaries do not contain the primary constituent elements defined by the Service. For example, buildings, roads, or other major structures, such as an airport, may fall within the mapped boundaries of critical habitat but are not suitable habitat. The Service maintains that these areas would not be considered critical habitat because they do not contain the primary constituent elements needed by the species. The Service stated that precise mapping of critical habitat boundaries is impractical or impossible because the legal descriptions of such boundaries would be unwieldy. The scientific controversy surrounding many of the critical habitat proposals that we reviewed stems from disagreement or confusion over which areas within the land outlined by the Service count as critical habitat. Critics responding to these proposed rules often complained that the Service's definitions of primary constituent elements were vague or too broad to be useful. Additionally, several critics found the Service's assertion that only areas containing primary constituent elements would be considered critical habitat to be confusing, noting that it did not allow for a discrete boundary.
In some instances, landowners voiced concerns that their property fell within proposed critical habitat boundaries even though the land did not appear to contain the primary constituent elements. For example, critics of the proposed critical habitat for the California red-legged frog stated that the Service's description of the critical habitat was vague and did not specifically identify the locations of the frog's habitat. Critics of the rule stated that the proposal was confusing and that landowners would be forced to survey for the frog when undertaking a project. Such an action, they contended, is improper because it places the onus on private landowners to make sure their land does not contain critical habitat. The Service stated that, because of the mapping unit it used, it was not able to exclude all nonessential lands, such as roads. According to the Service, because these areas do not contain the primary constituent elements, federal agencies would not be required to consult with the Service before taking action.

We also identified scientific disagreement stemming from designations made for species that require dynamic habitats. Designating critical habitat, which requires selecting a fixed habitat area, can be particularly difficult when a listed species requires a habitat that is dynamic, or changing, in nature. For example, lands that have been burned, cleared, or otherwise disturbed may be essential to a species or may be important for only certain periods of a species' life cycle. Many landscapes change because of natural causes, such as the age and make-up of a forest, and therefore it may be difficult to designate one particular area as habitat because the area may change over time, causing a change in its value as habitat for the listed species. For example, scientific disagreement surrounded the critical habitat designation for the Southwestern willow flycatcher partly because of the bird's changing habitat requirements.
Comments received on the proposed critical habitat rule stated that because riparian habitats are in a constant state of change, any boundaries defined as critical habitat would also be subject to change. Further, according to critics, the boundaries described by the Service did not meet regulatory requirements because they were difficult to interpret and could change seasonally. In the final rule designating critical habitat, the Service agreed that its original boundaries did not incorporate the dynamic nature of riparian systems. To resolve this issue, the Service stated that the final boundaries would be established in accordance with the 100-year flood zone, which would encompass most changes in stream flow and most seasonal changes. In addition to the controversy surrounding the identification of specific areas for critical habitat, many critics of the proposed rules that we reviewed argued that the Service had insufficient information on which to base its determinations and that the Service should not designate critical habitat until the habitat requirements of the species could be better defined. Other critics objected to the Service's use of unpublished or otherwise unavailable data, stating that this type of information is inadequate to support critical habitat designations. Service officials said that they have been required to complete critical habitat decisions under short time frames because of court-imposed deadlines. According to Service officials, given the resource and time constraints under which Service scientists work, the scientists are often unable to collect new information, and officials agree that the information available may be limited. Thus, the Service relies on both unpublished and published information and will use whatever scientific information it deems credible to help make a determination.

In addition to the individual named above, Bob Crystal, Charlie Egan, Doreen Stolzenberg Feldman, Alyssa M.
Hundrup, Nathan Morris, and Judy Pagano made key contributions to this report.
Recent concerns about the U.S. Fish and Wildlife Service's (Service) endangered species listing and critical habitat decisions have focused on the role that "sound science" plays in the decision-making process--whether the Service bases its decisions on adequate scientific data and properly interprets those data. In this report, GAO assesses the extent to which (1) the Service's policies and practices ensure that listing and critical habitat decisions are based on the best available science and (2) external reviewers support the scientific data and conclusions that the Service used to make those decisions. In addition, GAO highlights the nature and extent to which litigation is affecting the Service's ability to effectively manage its critical habitat program. The Endangered Species Act requires the U.S. Fish and Wildlife Service to identify, or "list," species that are at risk of extinction and provide for their protection. The act also generally requires the Service to designate critical habitat--habitat essential to a species' conservation--for each listed species. The Service must use the best available science when making listing and critical habitat decisions. The Service's policies and practices generally ensure that listing and critical habitat decisions are based on the best available science. The Service consults with experts and considers information from federal and state agencies, academia, other stakeholders, and the general public. Decisions are subject to external "peer review" and extensive internal review to help ensure that decisions are based on the best available science and conform to contemporary scientific principles. External reviews indicate that the Service's listing and critical habitat decisions generally have scientific support, but concerns over the adequacy of critical habitat determinations remain.
Listing decisions are often characterized as straightforward, and experts, peer reviewers, and others generally support the science behind these decisions. Critical habitat designations, on the other hand, are more complex and often require additional scientific and nonscientific information. As a result, peer reviewers often expressed concern about the specific areas designated, while other experts expressed concerns about the adequacy of the data available to make designations. The Service's critical habitat program has been characterized by frequent litigation. Specifically, the Service has lost a series of legal challenges and will have to commit significant resources over the next 5 fiscal years to responding to court orders and settlement agreements requiring critical habitat designations. As a result, the Service is unable to focus resources on activities it believes provide more protection to species than designating critical habitat does. While the Service recognizes that it has lost control of the program, it has yet to offer a remedy. Without taking proactive steps to clarify the role of critical habitat and how and when it should be designated, the Service will continue to have difficulty effectively managing the program.
A fundamental difficulty in discussing federal agencies’ certification requirements is that there is no official definition of the term in the federal government. In fact, a NIST official told us that there are almost as many definitions of a federal certification program as there are federal agencies. Different organizations may also use other terms to refer to the concept of certification, such as accreditation, registration, approval, or listing. These terms have specific and different meanings in some contexts but are used interchangeably in others. In any case, the nomenclature can be confusing. For example, in 1989 we reviewed laboratory accreditation requirements for 20 different programs and found that these programs used 10 different terms for accreditation, with at least 18 different meanings. Certification, accreditation, recognition, conformity assessment, and related terms all refer to types of standards-related activities, so a definition of “standards” can serve as a useful starting place. The International Organization for Standardization (ISO) defines standards as documented agreements containing technical specifications or other precise criteria to be used consistently as rules, guidelines, or definitions of characteristics to ensure that materials, products, processes, and services are fit for their purpose. ISO defines certification as the procedure by which a third party gives written assurance that a product, process, or service conforms to specified requirements or standards. Accreditation, according to ISO, refers to the procedure by which an authoritative body gives formal recognition that a body or person is competent to carry out specific tasks. In the context of certification, an accreditation body might accredit a certification body, such as a testing laboratory, as competent to carry out certification activities—in a sense, certifying the certifiers. 
Recognition is a term that is relatively new to conformity assessment activities in the United States, and it refers to a designation by a government entity that an accreditation program is competent. Conformity assessment is the broadest term for these types of activities. According to the National Academy of Sciences, conformity assessment is the determination of whether a product or process conforms to particular standards or specifications. It may include such activities as sampling, testing, inspection, certification, registration, accreditation, and recognition. There are a great many standards or criteria for product quality, process reliability, or professional competence. NIST estimated that in the United States alone, approximately 49,000 voluntary standards have been developed by more than 620 organizations. The agency said this estimate does not include "a much greater number of procurement specifications . . . as well as mandatory codes, rules, and regulations containing standards developed and adopted at federal, state, and local levels." NIST also pointed out that numerous foreign, regional, and international organizations produce standards of interest and importance to American businesses. For example, ISO has issued more than 10,000 international standards. Agency officials told us that use of these and other international standards has become increasingly common in the United States. The standards underlying certifications cover a wide range of products, processes, and professions. Some are product quality or safety standards, such as the American National Standards Institute (ANSI) standard for manually operated gas valves or the UL standard for communications cables. There are also standards for the performance and reliability of particular processes, as in ISO standards for quality management systems.
Professional standards, such as the American Medical Association’s standards in medical practice, research, and education, are used to assure the qualifications and competence of individuals in specific disciplines or fields. The preceding examples also illustrate that standards can come from many sources. They can be established by industry or professional consensus standard-setting bodies, by governments through statutes or regulations, or by international standard-setting bodies. Certifications of products, processes, and services provide information on whether they can meet certain levels of quality, safety, or performance. However, certifications of people or organizations focus on an evaluation and designation of competence and qualifications. In professional and technical fields, certifications confirm the skills and knowledge of individuals who meet specific requirements (e.g., a certified public accountant). The professional certification process typically involves passing examinations and meeting other educational and/or experiential requirements. The choice of standards, the type of certification program, and the certification methodology used to assess conformity all have a significant impact on the validity and value of the information provided by a given certification. The total number of certification programs in the United States is unknown, but NIST has identified at least 178 private sector organizations that have product certification programs. In addition, the National Organization for Competency Assurance (NOCA) has identified at least 1,700 organizations based in the United States with programs for the certification or accreditation of individuals. The National Academy of Sciences and NIST have each noted that there is no central coordination of conformity assessment and related activities in the United States. Perhaps as a result, certification requirements can be duplicative and costly for those who must be certified or accredited. 
The fees for each certification exam can range from a few hundred dollars to over a thousand dollars, and the associated costs for annual fees and recertification in future years may be substantial. NIST officials told us that some laboratories must obtain multiple accreditations, which often evaluate many of the same elements, in order to provide testing services. NIST had found that laboratories desiring to be accredited or designated nationwide to conduct electrical safety-related testing of construction materials had to gain the acceptance of at least 43 states, over 100 local jurisdictions, the International Conference of Building Officials, the Building Officials and Code Administrators, the Southern Building Code Congress International, a number of federal agencies, and several large corporations. Congress has attempted to address some of the concerns about redundant certification requirements. For example, the National Technology Transfer and Advancement Act of 1995 requires greater coordination of conformity assessment activities and attempts to facilitate mutual recognition among conformity assessment programs. Also, in June 1999 Congress amended the Fastener Quality Act in part to address concerns about potentially burdensome, costly, and duplicative testing and certification procedures that would have been imposed on industry. The amended law no longer requires NIST to approve organizations that accredit fastener testing laboratories. The amendments also exempt fasteners already subject to Federal Aviation Administration (FAA) regulation. However, despite such concerns, it also should be recognized that some certification programs and requirements foster opportunities for small businesses.
For example, the Nationally Recognized Testing Laboratory (NRTL) Program implemented by the Occupational Safety and Health Administration (OSHA) recognizes private sector laboratories that meet the necessary qualifications specified in program regulations. OSHA officials pointed out that this program has given a number of small testing laboratories in the United States the opportunity to provide types of services that only a few organizations provided before the program went into effect. Our objectives in this review were to describe (1) the extent and variety of certification activities in the federal government; (2) the extent to which there are policies, procedures, or guidance governing those activities, either governmentwide or within selected agencies; and (3) an agency certification procedure that could serve as an example or “best practice” for other agencies. To address these objectives, we interviewed officials and obtained documentation from five federal agencies in which the Committee had expressed an interest: the Departments of Transportation (DOT) and Veterans Affairs (VA); and, within the Department of Health and Human Services, the Centers for Disease Control and Prevention (CDC), FDA, and the National Institutes of Health (NIH). We also contacted officials in the Office of Management and Budget’s (OMB) Office of Information and Regulatory Affairs, NIST, and the Office of Government Ethics (OGE) because of their responsibilities related to the issue of certification. We also interviewed and obtained documents from officials of NOCA and its related accreditation body, the National Commission for Certifying Agencies (NCCA). There are some important scope limitations to our review. 
Although we defined the term certification broadly to include such issues as accreditation, recognition, and conformity assessment, the report does not cover those Federal Acquisition Regulation certifications (e.g., the Certification of Final Indirect Costs and the Certification of Nonsegregated Facilities) that might be included as standard solicitation provisions and contract clauses but are not related to conformity with technical or professional standards. The scope of our first objective was governmentwide. However, as agreed with the Committee, it was not our intention to develop a comprehensive listing of every possible certification-related activity and requirement of federal agencies. Our intent was to illustrate the extent and variety of such activities in the federal government. As agreed with the Committee, our review of agency-specific policies, procedures, or guidance under the second objective was limited to selected agencies, including CDC, DOT, FDA, NIH, and VA. To address our third objective, we again focused primarily on specific certification examples from the five selected agencies. The examples cited in agencies other than the five selected for more in-depth review were limited to ones cited in published reports or suggested by persons we interviewed. We obtained only limited information on the certification requirements in agency procurement actions. Our choices of examples to highlight as best practices represent subjective decisions based on our observations and work in the regulatory arena. We conducted this review between November 1998 and August 1999 at the headquarters offices of the above-mentioned agencies in the Washington, D.C., area in accordance with generally accepted government auditing standards. We provided a draft of this report to the Secretaries of Commerce, Health and Human Services, Transportation, and Veterans Affairs and the Director of OMB for their review and comment.
Their responses are presented at the end of this letter, along with our evaluation. Federal agencies engage in both a large number and a wide variety of certification-related activities. The certifications differ across several dimensions, including the origins of the requirements, their targets, which entity or entities do the certifying, whether the certifications are mandatory or voluntary, and the extent to which there is reciprocity with or recognition of other certifications or other organizations’ requirements. The extent of agency involvement in the process can also vary, ranging from instances in which an agency might simply apply a certification requirement established by other entities to cases in which the agency is actively involved in developing and enforcing a specific requirement. We did not attempt to develop a compendium of every federal agency certification or certification-related activity and requirement, and it would be difficult to do so given the absence of a common understanding and definition of the term “certification requirement” in the federal government. However, it is clear that federal agencies engage in a large number of certification-related activities. For example, NIST publishes directories that list more than 200 federal procurement and regulatory programs in which agencies provide or require some form of certification. The NIST directories provide only a partial inventory of agencies’ activities, though, because they primarily focus on certification of products and services. Also, the directories do not cover individual procurement opportunities in which agencies require a vendor or contractor to have a particular certification, accreditation, or registration in order to participate. Agencies’ certification requirements also vary in a number of ways, reflecting the variety of the underlying standards. One such dimension is the scope of the certification programs and requirements. 
For example, FAA’s comprehensive system of certifications for the civil aviation system is quite broad, covering numerous categories of equipment, personnel, and facilities. On the other hand, one of the Environmental Protection Agency’s (EPA) requirements (pursuant to section 609 of the Clean Air Act) is very specific, focused solely on operators who service motor vehicle air conditioners and requiring them to be certified under an EPA-approved program before offering their services. Another narrowly focused certification requirement is in FDA regulations that are designed to prevent botulism. The regulations require that a “processing authority” must certify the competency of “low-acid canned food retort operators” (i.e., the operators of heating and pressure cookers). Federal agencies’ certification-related activities also vary with regard to the extent of agency involvement in the certification process. For example, an agency might be deeply involved in developing and/or enforcing a specific certification requirement. On the other hand, the agency might simply apply a requirement established by other entities, such as when an agency incorporates technical or professional certification requirements by reference in solicitations for specific products or services. Other ways that certification requirements vary include the following:

The target of certification: product; profession; process; facility or organization.

Who does the certification: federal government; state or local governments; joint commissions; private sector, professional, or trade organizations; self-certification.

The origin or basis of the certification requirement: statutory requirements; agency regulatory actions; international agreements; industry consensus or nonconsensus requirements; procurement actions.

The degree of compulsion on those being certified: voluntary; mandatory; required for program participation.

Whether other certifications are accepted or recognized: only the specified certification is accepted; other certifications accepted or recognized.

According to NIST officials, the risk associated with a particular regulatory action or procurement can be an important factor influencing choices within these various dimensions. If the perceived risk is low, for instance, an agency might determine that certification is voluntary and accept a manufacturer’s self-certification. However, if the risk associated with failure to meet standards is serious, the agency might choose to make certification mandatory and accept certification from only a federally recognized laboratory. Appendix I describes a number of specific agency certification programs and requirements that illustrate these kinds of differences. Some of the requirements differ on multiple dimensions. For example, the National Marine Fisheries Service within the Department of Commerce has a program for the inspection and certification of seafood products and processing operations. The Seafood Inspection Program is a voluntary program carried out pursuant to the Agricultural Marketing Act of 1946, as amended; involves inspection by licensed federal and state agents; and provides certification recognized by other federal, state, and foreign government agencies as well as some private and international organizations. In contrast, a provision in an NIH procurement solicitation stated that a prospective contractor’s supervisors responsible for inspection of the agency’s biohazard cabinets “must be NSF accredited biohazard cabinet field certifiers.” This provision is based on an industry consensus standard, targets professional competence, involves accreditation by a private sector third party, represents a mandatory requirement for prospective contractors, and recognizes only one source for the certification. Federal procurement law establishes some legal boundaries on the certification requirements used in federal procurement.
In addition, agency officials pointed out that their general procedures and practices for rulemaking and procurement can serve a useful role in notifying the public and soliciting feedback on proposed certification requirements. However, there is little in the way of general policies, procedures, or guidance governing how agencies should establish certification requirements or select certification bodies, except at the level of some individual agency programs. Agency officials told us that they primarily viewed certification as an industry or professional concern rather than as a federal issue, and therefore they tended to rely on the “industry standard” or “nationally recognized” requirements. NIST has prepared draft guidance for federal agencies on conformity assessment activities, including certification. This guidance is currently under review at OMB, and NIST expects to publish it for public comment later this year. The Competition in Contracting Act of 1984 provides that a solicitation for a government contract may include a restrictive provision only to the extent that the provision is authorized by law or is necessary to satisfy the agency’s needs. Some agency-specific acquisition regulations mirror the Competition in Contracting Act’s limitations on the use of unnecessarily restrictive certification requirements. For example, VA’s regulations allow requirements that offerors conform to technical standards that are generally recognized and accepted in the industry involved. However, if there is a choice of laboratories available to certify the quality of the product involved, the regulations also say that the requirements must not indicate that only one laboratory’s certificate will be acceptable. 
In our bid protest decisions, we have generally not objected to a requirement that an item conform to a set of standards adopted by a nationally recognized organization in the field or a requirement for independent laboratory certification that such standards are met. However, we have found requirements unduly restrictive if they require approval by specific organizations without recognition of equivalent approvals. The absence of an endorsement by a particular private organization should not automatically exclude offers that would otherwise meet a procuring agency’s needs. These procurement provisions notwithstanding, there is little in the way of general policies, procedures, or criteria governing how agencies should proceed in establishing certification requirements or selecting certifying bodies. Neither the agency officials we interviewed nor agency documents we reviewed identified any governmentwide guidance or, for the selected agencies we reviewed, agencywide guidance focused specifically on certification activities. The only specific certification guidance that we could identify was limited to particular programs. In some of these programs—such as FDA’s Mammography Program; the Coast Guard’s requirements for vessel design, inspection, and certification; and OSHA’s NRTL Program—the agencies have established detailed procedures and criteria governing their certification requirements and/or the selection of certifying bodies. In general, however, officials in the five agencies that we contacted tended to view certification as an industry or professional issue rather than a federal one. Consequently, the agencies’ selection of specific certification requirements or certifying organizations was driven more by the particular profession, industry, or market sector involved than by federal considerations.
For example, officials from VA and NIH said that their agencies commonly rely on national consensus bodies and their “nationally recognized” or “industry standard” certifications for a given sector. NIST officials said that a common finding from their meetings and workshops is that people tend to use the certification or accreditation program with which they are most familiar. NIST has taken a first step toward developing governmentwide certification guidance. In response to requirements in the National Technology Transfer and Advancement Act of 1995 and OMB Circular A-119, and with input from the Interagency Committee on Standards Policy (ICSP), NIST has prepared draft guidance for issuance by the Secretary of Commerce on conformity assessment activities, including certification. This draft guidance is currently under review at OMB, and NIST expects to publish it in the Federal Register for public comment later this year. NIST officials explained that the guidance would apply to all agencies that set policy for, manage, operate, or use conformity assessment activities and results, both domestic and international, except for activities carried out pursuant to international treaties. In addition to suggesting common terminology and definitions for agencies to use, NIST expects the guidance to define agency responsibilities in a number of areas, including the following: identifying private sector conformity assessment practices and programs and considering use of the results of such practices or programs in new or existing regulatory and procurement actions, using relevant guides or recommendations for conformity assessment practices published by domestic and international standardizing bodies, and working with other agencies to avoid unnecessary duplication and complexity in federal conformity assessment activities.
However, NIST officials also pointed out that the guidance would not preempt the agencies’ authority and responsibility to make regulatory or procurement decisions authorized by statute or required to meet programmatic objectives and requirements. They also said the guidance would not suggest that agencies explain why they selected one certification requirement or organization over other possible candidates. Although there is currently no governmentwide guidance specifically on certification requirements, agency officials noted several related policies and procedures that can affect those requirements. Those policies and procedures include OMB Circular A-119, federal ethics and conflict-of-interest laws, and agencies’ rulemaking and procurement procedures and regulations. OMB Circular A-119 says that all federal agencies must use voluntary consensus standards in lieu of government-unique standards in their procurement and regulatory activities, except where inconsistent with law or otherwise impractical. If an agency uses government-unique standards, it must explain why it did so in a report to OMB through NIST. The circular also says that agencies must consult with voluntary consensus standards bodies, both domestic and international, and must participate with such bodies in the development of voluntary consensus standards “when consultation and participation is in the public interest and is compatible with their missions, authorities, priorities, and budget resources.” Agency officials from each of the selected agencies we reviewed noted that employees of their agencies commonly participate in such consensus bodies, including ones that help to establish certification requirements. Agency employees who, at government expense, participate in such activities on behalf of the agency must do so as specifically authorized agency representatives and are subject to ethics laws regarding participation by federal employees in activities of outside organizations.
However, according to the Office of Government Ethics, there is no conflict of interest if an authorized agency representative participated in developing a voluntary consensus standard and the agency subsequently selected that standard as a requirement. Circular A-119 does caution, however, that agency participation in voluntary consensus bodies does not necessarily connote agency agreement with, or endorsement of, decisions reached by such organizations. The circular does not apply to conformity assessment activities carried out pursuant to treaties, which may impose their own obligations on federal agencies. NIST officials pointed out that the World Trade Organization (WTO) Agreement on Technical Barriers to Trade, in particular, includes conformity assessment obligations that apply to federal agencies. According to WTO, the intent of this agreement is to ensure that regulations, standards, testing, and certification procedures do not create unnecessary obstacles to trade. The agreement includes articles regarding procedures for assessment of conformity and recognition of conformity assessment by central government bodies. For example, the agreement encourages countries to recognize each other’s testing procedures. Members of WTO are also encouraged to permit conformity assessment bodies located in the territories of other members to participate in their conformity assessment procedures under conditions no less favorable than those accorded to bodies within their own territories or the territories of any other countries. Agency officials also said that their general procedures and regulations governing rulemaking and procurement play an important role in certification activities. In particular, they noted that such procedures and regulations provide valuable opportunities for an agency to inform the public and solicit feedback on proposed certification requirements. 
FDA officials said their agency’s procedural rules and regulations require them to use rulemaking in order to establish an enforceable certification requirement. DOT and FDA used the rulemaking process in developing or implementing several of the agencies’ certification requirements. Although DOT and FDA officials acknowledged that rulemaking procedures take considerable time and effort, they noted that those procedures could also help the agencies obtain informed comments and document certification decisions. DOT officials said the use of the rulemaking process was particularly valuable in the establishment of certification requirements for subjects that are new to the department or in which DOT has little expertise. For example, proposed departmental regulations intended to reduce alcohol misuse by employees in DOT-regulated transportation industries included important roles for substance abuse professionals (SAPs). In response to public comments on the proposed rule, DOT refined and expanded its definition of SAPs in the final regulations and said alcohol and drug abuse counselors certified by the National Association of Alcoholism and Drug Abuse Counselors (NAADAC) Certification Commission could serve as SAPs. However, agency officials also emphasized that rulemaking may not always be a necessary or appropriate procedure for making certification decisions. In particular, NIH and CDC officials distinguished their research-oriented agencies from regulatory agencies, noting that they tend to act through nonmandatory guidance or recommendations, not through rulemaking. DOT officials said that they generally do not use rulemaking procedures if certification requirements are part of a one-time procurement or contract. However, they said rulemaking might be the appropriate approach if the requirements are part of a recurring procurement. 
Agency officials also noted that procurement procedures can play a role in their agencies’ choice of certification requirements and certifying organizations. Contracting officials emphasized the opportunities provided throughout the procurement process for prospective bidders to question proposed certification requirements and to suggest changes or other equivalent certifications that might meet the agency’s needs. NIH officials noted that in addition to responding to the solicitation itself, bidders can comment on the draft request for proposal (published to see if there are enough sources) and the announcement of forthcoming solicitations to the market that appears in the Commerce Business Daily. Officials from CDC, FDA, and NIH pointed out that any solicitation could be the subject of bid protests if their agencies used procurement provisions that some entities believed were too restrictive. As noted previously, agency certification actions are numerous and vary substantially. Therefore, specification of a particular certification “best practice” would likely depend on the context of the certifications. Rather than attempting to develop criteria for selecting among these procedures, we focused on one practice that we have supported in the regulatory arena—transparency, or clearly describing the basis for agency decisionmaking. Transparency in certification decisionmaking is important because those decisions can have significant implications for affected parties, but they are sometimes made with little public explanation. An agency’s certification decisions can be transparent either retrospectively (explaining why a decision has been made) or prospectively (explaining the criteria it will use in making future decisions). As noted previously, OMB Circular A-119 requires agencies that develop government-unique standards to explain why they did not use voluntary consensus standards. 
However, we are not aware of any statutory or regulatory provisions requiring agencies to disclose why they selected one voluntary standard, certification, or certifying organization over another, or to describe the criteria they will use to make those decisions in the future. The transparency of the agency certification actions that we reviewed varied dramatically. In some instances, the agencies clearly documented the criteria that they used or planned to use to select particular requirements or certifying organizations. Other certification decisions were not as transparent, with the criteria less clear or well documented. However, agency officials were able to provide us with justifications for their actions in these instances during our review. FDA’s certification requirements in its previously mentioned Mammography Program are very transparent. The program’s regulations published in the Federal Register provide detailed procedures and criteria for certification of personnel and facilities providing mammography services, as well as the procedures and criteria that FDA uses to approve accreditation bodies. FDA has developed and publicized the regulations through a series of public rulemaking notices, building on procedures and criteria promulgated in earlier regulations issued by the Department of Education and the Health Care Financing Administration within the Department of Health and Human Services. The agency also provides ongoing guidance on the implementation of this program and its requirements, notifying the public of any updates in the guidance through quarterly Federal Register notices that announce the availability of and changes in FDA guidance documents. DOT has also clearly explained in several of its rulemaking documents how it made or planned to make decisions on the selection of particular certifying organizations.
For example, in a 1997 final rule, the Coast Guard allowed an alternative inspection compliance method to fulfill requirements for vessel inspection and certification. Previously, these inspections and certifications had to be performed by the Coast Guard. Under the alternative, the Coast Guard can issue a certificate of inspection based upon reports by a “recognized, authorized classification society” that a vessel complies with United States and international safety rules, conventions, or other specified requirements. In order to receive recognition from the Coast Guard, the regulation requires a classification society to meet 23 specific criteria. In DOT’s previously mentioned substance abuse-prevention program, the department’s rulemaking notices clearly documented the department’s reasons for selecting or rejecting particular certifying bodies. Although DOT did not describe the specific criteria it would use to accept or reject professional certifications at the time it issued the proposed rule, the department’s response to public comments in the final rule clearly described why it accepted certification by NAADAC and rejected state certifications. DOT noted that NAADAC was a national organization and that commenters provided information showing that the training and experience needed to meet NAADAC standards and certification requirements were sufficient for participation as a SAP in DOT’s alcohol misuse prevention programs. DOT said it rejected suggestions that the SAP definition include state-certified counselors because qualification standards varied dramatically by state and did not always result in state-certified counselors having the experience or training DOT deemed necessary to implement the objectives of its rules. However, the reasoning behind some other agency certification requirements that we examined was not as clearly documented or otherwise explained.
These specific cases involved the selection of particular certification bodies, and organizations that were not selected raised questions about the criteria that the agencies used. One such example was VA’s implementation of new procedures, effective July 1, 1997, generally requiring that newly hired physicians be board-certified in the clinical specialty in which they will practice. The VA Undersecretary for Health later specified that the only certifying bodies recognized by VA for this purpose would be the American Board of Medical Specialties (ABMS) for allopathic physicians and the Bureau of Osteopathic Specialists (BOS) for osteopathic specialists. Although the subsequent announcement indicated that the two organizations were “umbrella organizations for approving medical specialty boards in the United States” and described the importance of board certification, the announcement did not indicate why these organizations were selected. Another certifying organization (the American Association of Physician Specialists, Incorporated) and the House Committee on Veterans’ Affairs then questioned why VA recognized only ABMS and BOS certifications. The Committee requested that VA provide the criteria used to evaluate and select those two organizations. In its response to the Committee, VA stated that certifying groups vary widely in their requirements and that ABMS and BOS are “the standard certifying organizations recognized throughout American medicine.” However, VA did not further describe why it selected these two certifying organizations. VA officials told us during this review that they rely on consensus practices and standards of the health care profession in establishing certification requirements. They said VA’s use of ABMS and BOS certifications can be traced back to a 1980 decision by the Chief Medical Director to accept ABMS and BOS physician board certifications for Incentive Special Pay purposes. 
In 1997, VA extended those same certifications that were required for special pay purposes to employment, “grandfathering” currently employed physicians. VA officials also noted that they had canvassed other federal agencies involved in health care issues—including the Department of Defense, the Public Health Service, NIH, CDC, and the Bureau of Prisons—and found that essentially all recognized ABMS and BOS as the two accepted organizations for board certification purposes. The officials also described to us some of their expectations of a health professional certification program—in essence, informal selection criteria. These included (1) accreditation for educational requirements (undergraduate, medical school, and residency program); (2) accreditation for post-residency experience; and (3) certifying exams in the area of specialty. Finally, they pointed out that by law, the Secretary of Veterans Affairs has special authority to make personnel decisions. Although the description that VA officials provided explains how ABMS and BOS were selected, it was not contained in any published document and did not explain what criteria other organizations would need to meet to be accepted by VA. Another agency certification decision that was not transparent to affected entities involved a 1996 NIH solicitation for the maintenance, certification, and decontamination of certain types of facilities and equipment, including biological safety cabinets. NIH implicitly designated NSF International as the sole certifying organization by including a requirement in the solicitation that the full-time on-site supervisor for specific locations be an NSF-accredited biohazard cabinet field certifier. NIH did not explain why only NSF accreditation was acceptable. As in the VA example, other certifying organizations that were not designated raised questions about the restriction to NSF’s program.
NIH officials told us during our review that NSF had the only accreditation program that was nationally recognized. The officials also pointed out that they applied the restrictive provision as narrowly as possible—requiring accreditation only for supervisors—while still addressing the agency’s primary need to protect the safety of NIH personnel. Agencies can make clear the criteria they used or plan to use to select a particular certification requirement or certifying organization in any number of ways. However, those cases that we reviewed in which agencies clearly documented the criteria they used to select certifying organizations appeared to have certain common elements. For example, in most of these cases the agencies included discussions of elements such as the following: the structure, purpose, and other characteristics of the organization (e.g., its legal status and composition of the governing board); the resources and qualifications of the organization (e.g., technical competence of the staff, adequacy of management and quality control systems, and appropriate experience); the certification procedures or mechanisms used by the organization (e.g., public documentation, use of valid test or evaluation methods, enforcement of certification requirements, and appeals or due process procedures regarding certification decisions); and other factors (e.g., compatibility with or recognition of related certifications and the costs and fees associated with certification). However, transparency is not free. The Director of CDC’s Procurement and Grants Office told us that a governmentwide requirement for complete documentation of each agency certification action would carry with it certain costs, including possible delays in procurement and the issuance of agency rules. 
The Director pointed out that the relative infrequency of concerns expressed about agency certification requirements could mean that those costs could exceed the benefits derived from documentation requirements. He also noted that mechanisms are already in place to address concerns about restrictive solicitation provisions and said that agencies will probably hear from affected entities if the requirements are considered unreasonable and/or restrictive of competition. Finally, both he and DOT officials emphasized that no one uniform approach is appropriate in the varied conditions in which certifications are used. Also, transparency in an agency’s certification requirements does not guarantee that the process will result in the best (or even a good) decision. Conversely, lack of transparency does not necessarily mean that an agency’s certification decision will not be good or appropriate. At a minimum, however, the opportunities for alternative certification organizations or requirements to be put forward are improved if agencies are transparent in establishing their requirements and vetting their decisions with the public. Federal agencies’ certification requirements are an invaluable tool in helping to ensure product quality, process reliability, and professional competence in a variety of venues. Without those requirements, federal agencies would have to independently evaluate the safety of products, whether certain procedures will yield the desired results, and whether individual workers possess the skills required to perform a given task. Federal agencies have broad latitude in the selection of certification requirements and certifying organizations, which can result in what appear to be inconsistencies of application. For example, five agencies might each require a different certification for the same type of product or service. 
Businesses that want to provide that product or service to each of the agencies would therefore have to incur the expense associated with obtaining five certifications. Also, an agency can accept certifications from one certifying organization while not accepting certifications in the same subject area from other organizations with what appear to be similar qualifications. Organizations that are not selected would then have to forgo any income associated with providing certifications for that agency. These apparent inconsistencies are exacerbated when the reasons behind the agencies’ certification decisions are unclear. Transparency of these decisions could improve their perceived legitimacy, particularly when more than one certification option is available to an agency. The means by which agencies’ certification decisions can be made transparent will depend on the context in which the requirements are imposed. For example, if an agency’s certification requirement is part of a procurement action, the agency can make clear the basis of that requirement in the request for proposals. Some agencies have also used the rulemaking process to delineate the rationale behind their certification requirement decisions. However, although contracting and rulemaking processes are convenient mechanisms for certification transparency, they are not always available because some certification requirements do not arise in either environment. The extent to which agencies’ certification requirements need to be explained will also depend on the circumstances surrounding the certification requirement. For example, only a brief explanation should be necessary when an agency picks a certifying organization that is generally acknowledged to be the only such organization available. On the other hand, a more elaborate explanation may be necessary when an agency selects one organization over others with what appear to be similar qualifications. 
In that case, transparency can also help organizations not selected to understand what they must do to meet the agency’s requirements. The forthcoming guidance being developed by NIST for the Secretary of Commerce may help bring more uniformity to the certification process, thereby making that process more intelligible to contractors, regulated parties, and other entities affected by the requirements. However, NIST officials said that the draft guidance does not directly address the issue of certification transparency. Although it would probably be unwise to recommend a single transparency approach, the guidance could generally advocate the concept of transparency in agencies’ certification decisions and suggest alternative mechanisms by which those decisions could be explained to the public. We recommend that the Secretary of Commerce include a section in the conformity assessment guidance being developed that specifically addresses the transparency of agencies’ certification decisionmaking. Specifically, we believe that the guidance should encourage agencies to publicly explain why particular certification decisions were made or how certification decisions in the future will be made. The guidance should present alternative approaches for the agencies to consider in making their certification decisions more transparent, but it should not advocate that a single approach be used in all circumstances. We provided a draft of this report to the Secretaries of Commerce, Health and Human Services, Transportation, and Veterans Affairs and the Director of OMB for their review and comment. Officials from HHS, DOT, and OMB informed us that their agencies did not have comments on our draft report. VA did not provide comments. On September 13, 1999, the Secretary of Commerce provided written comments on the draft report. 
The Secretary said that the Department would address our recommendation on the issue of transparency in agencies’ certification decisionmaking during the public comment period for the conformity assessment guidance being prepared by NIST. The Secretary noted that NIST would work with ICSP on the most effective way to address the issues of transparency for both regulatory and procurement agencies. The Department of Commerce also provided some technical comments and suggestions, which we incorporated as appropriate. To ensure that we had accurately characterized the examples of agency certification programs and requirements presented in an appendix to this report, we also provided the relevant portions of our draft report to officials in the Departments of Agriculture, Housing and Urban Development, and Labor and the Environmental Protection Agency. They provided technical comments and suggestions, which we included in this report as appropriate. We are sending copies of this report to Representative Nydia M. Velazquez, Ranking Minority Member of the House Committee on Small Business. We are also sending copies to the Honorable William M. Daley, Secretary of Commerce; the Honorable Donna E. Shalala, Secretary of Health and Human Services; the Honorable Rodney E. Slater, Secretary of Transportation; the Honorable Togo D. West, Jr., Secretary of Veterans Affairs; and the Honorable Jacob Lew, Director of OMB. Copies will also be made available to others on request. Major contributors to this report are acknowledged in appendix II. If you have any questions about this report or would like to discuss it further, please contact me on (202) 512-8676. This appendix briefly describes selected certification or certification-related programs and requirements. Although not intended to provide a compendium of all such federal agency programs or requirements, the appendix illustrates both the number of federal certification requirements and the dimensions by which they vary. 
To compile this appendix, we relied primarily on examples identified by officials within the agencies that we contacted during this review and information provided in the National Institute of Standards and Technology (NIST) Directory of Federal Government Certification and Related Programs. The U.S. Department of Agriculture’s (USDA) Agricultural Marketing Service (AMS) provides voluntary on-site grading and certification of meats and meat products through physical examination of product characteristics during the production process. The required tests are performed in government labs by AMS personnel, and approved USDA stamps and roller brands are applied to products that are considered in compliance with applicable standards or specifications. The grading system provides a common language to facilitate trading, and the certification assists large-scale buyers by providing impartial evaluation and certification that meat purchases meet their contract specifications. An AMS official also pointed out two related services provided under the agency’s regulations. The Contract Verification Service provides wholesale buyers of noncertified commodity products a method of determining whether procurements meet contractually specified requirements. The Quality Systems Certification Program provides meat packers, processors, producers, or other businesses in the livestock and meat trade the ability to have special processes or documented quality management systems verified. USDA’s AMS also issues certificates regarding the quality of other agricultural products, including fresh fruits, vegetables, nuts, and related products. All of these certifications are voluntary, except for commodities that are regulated for quality by a marketing order or marketing agreement, or that are subject to import or export requirements. AMS also issues grade certificates for raw cotton, which are mandatory for cotton delivered on futures contracts. 
To assist in the export of plants and unprocessed plant products, USDA’s Animal and Plant Health Inspection Service (APHIS) issues phytosanitary (plant health) certificates to exporters certifying conformity with the receiving country’s plant quarantine import regulations. The inspections are conducted by federal and state cooperators, and testing is done in federal and recognized state and university labs. APHIS also provides export certificates, stamp endorsements, or letterhead certification to indicate the class, quality, and condition of animal by-products to assist exporters in the United States to comply with import requirements in foreign countries. The National Marine Fisheries Service (NMFS) within the Department of Commerce’s National Oceanic and Atmospheric Administration (NOAA) inspects seafood products and processing operations on a voluntary, fee-for-service basis. The inspections are performed by licensed federal and state agents and involve vessel and plant sanitation, product inspection, grading, certification, label review, and laboratory analysis. Federal, state, and federally recognized private laboratories perform testing and analysis, and NMFS lists approved suppliers and graded/certified products. Other federal and state agencies, private organizations, foreign government agencies, and international organizations recognize the NMFS seafood certifications. The National Weather Service (NWS) within NOAA administers a mandatory program to certify weather observers and approve weather stations. NWS certifies weather observers by examination and experience for acceptable vision, adequate training, and demonstrated ability to take and record accurate and timely weather observations. The stations are approved on the basis of appropriate instrumentation use, installation of automated sensors, maintenance programs, and certification of the observers. 
The program ensures consistent, minimum performance expectations for manual weather observations used for the preparation of forecasts and warnings and the support of aviation operations. The National Institute of Standards and Technology (NIST) administers the National Voluntary Laboratory Accreditation Program (NVLAP). NVLAP accredits laboratories on the basis of an evaluation of their technical qualifications and competence to carry out specific calibrations or tests. NVLAP accreditation is available to commercial; manufacturers’ in-house; university; and federal, state, and local government laboratories and is formalized through issuance of a Certificate of Accreditation and Scope of Accreditation. NIST also administers the National Voluntary Conformity Assessment System Evaluation (NVCASE) program. The program’s primary objectives are to provide a basis for the United States government to assure foreign governments that qualifying conformity assessment bodies in the United States (e.g., accreditors of laboratories) are competent to satisfy their regulatory requirements and to facilitate the acceptance of American products in foreign markets. The NVCASE program can also be applied in support of domestic regulatory programs at the request of another federal agency. The NVCASE program includes activities related to laboratory testing, product certification, and quality system registration. After NVCASE evaluation, NIST provides recognition to qualified organizations in the United States that effectively demonstrate conformance with established criteria. NIST maintains listings of all recognized bodies, as well as listings of qualified bodies that are currently accredited by bodies recognized by NIST. In various procurement documents prepared by the Centers for Disease Control and Prevention (CDC), the agency included certification requirements for persons providing specific services. 
For example, a medical officer had to be board-certified in one relevant primary care area (such as internal medicine or surgery) and licensed to practice in the United States or be board-eligible in both occupational medicine and internal medicine. A physician’s assistant had to be certified or licensed as a physician assistant by the appropriate national or state recognized organization. A chief nurse, in addition to being registered as a registered nurse in the state of Georgia, had to have annual certification in basic cardiac life support. CDC also required that cardiopulmonary resuscitation (CPR) instructors have American Heart Association (AHA) certification as providers of basic and advanced cardiac life support or American Red Cross Association certification plus AHA certification as (1) Instructor in Basic Cardiac Life Support, (2) Instructor-Trainer in Basic Cardiac Life Support, and (3) Instructor in Advanced Cardiac Life Support. The National Shellfish Sanitation Program (NSSP) is a federal-state cooperative program recognized by the Food and Drug Administration (FDA) and the Interstate Shellfish Sanitation Conference for the sanitary control of shellfish (oysters, clams, mussels, and scallops) produced and sold for human consumption. Most of the regulation, inspection, investigations, and control measures are done at the state level. However, FDA conducts an annual review of each state shellfish control program to determine its degree of conformity with the NSSP. Annually, the state Shellfish Sanitation Control Authority (SSCA) issues numbered certificates to shellfish dealers who comply with sanitary standards and forwards copies of the interstate certificates to FDA. FDA publishes a monthly list of all shellfish shippers that have been certified by states that maintained satisfactory control programs. Shellfish plants certified by SSCA are required to place their certificate numbers on each container or package of shellfish shipped. 
Separate from NSSP, FDA also issues certificates for other fish and fishery products, including Certificates of Free Sale, Certificates of Export, Certificates to Foreign Governments, and European Union Health Certificates for Fishery Products. FDA also has a mandatory certification program that lists approved color additives and the conditions under which they may be safely used in foods, drugs, cosmetics, and medical devices. Each batch of color must be tested and certified in an FDA laboratory before it can be used, unless the color additive is specifically exempted by regulation. Other federal agencies, state agencies, and private sector organizations recognize FDA’s certifications. However, under the provisions of the Federal Food, Drug, and Cosmetic Act, FDA cannot accept certification of a color by a foreign country as a substitute for its own certification. FDA’s Center for Devices and Radiological Health is responsible for the setting and enforcement of performance standards to control radiation emissions from electronic products, such as television receivers, microwave ovens, X-ray equipment, and lasers. A manufacturer of an electronic product for which there is an applicable federal performance standard is required to affix a certification label stating that the product conforms to the standard. Certification is based on a test prescribed by the standard or a testing program that is in accord with good manufacturing practices as determined by the Center. Manufacturers’ or third-party laboratories perform the testing. Under the Mammography Quality Standards Act of 1992, FDA was authorized to implement the act’s requirements for the certification and inspection of all mammography facilities. Only certified facilities that are in compliance with uniform federal standards for safe, high quality mammography services may lawfully operate. 
These requirements apply to all facilities producing, processing, or initially interpreting mammograms, whether for screening or diagnostic purposes, except for facilities of the Department of Veterans Affairs, which developed its own quality assurance program. To become certified, facilities must first be accredited by an FDA-approved accreditation body. FDA published regulations to establish the requirements and standards for accrediting bodies and application procedures for such bodies. The FDA regulations also established the quality standards for mammography facilities and procedures for facility certification. Accreditation and certification must be renewed every 3 years. The Health Care Financing Administration (HCFA) regulates all laboratory testing (except research) performed on humans in the United States through the Clinical Laboratory Improvement Amendments (CLIA) program. CLIA certification is mandatory for all facilities that perform laboratory testing on specimens derived from the human body for the purpose of providing information for the diagnosis, prevention, or treatment of disease, or impairment of or assessment of health. CLIA regulations are based on the complexity of the test method—the more complicated the test, the more stringent the requirements. Upon determining compliance with regulatory requirements, HCFA issues the appropriate certificate(s) for the type(s) of testing the laboratory performs. Those certificates are effective for a 2-year period. Those laboratories that must be surveyed routinely (those performing moderate- or high-complexity testing) can choose whether to be surveyed by HCFA or by a private accrediting organization. 
Approved accrediting organizations under CLIA include the American Association for Blood Banks, the American Osteopathic Association, the American Society for Histocompatibility and Immunogenetics, the College of American Pathologists, the Commission of Laboratory Accreditation, and the Joint Commission on Accreditation of Healthcare Organizations. In addition, certain laboratories are licensed under CLIA-exempt state programs in New York, Oregon, and Washington. HCFA also has a survey and certification program covering providers and suppliers of health care services to Medicare and Medicaid beneficiaries. The aim of HCFA’s program is to ensure that these providers (such as participating hospitals, home health agencies, and nursing home providers) meet federal health, safety, and program standards. Hospitals accredited by the Joint Commission on Accreditation of Healthcare Organizations or the American Osteopathic Association are deemed to participate in the program and meet federal requirements. The Department of Housing and Urban Development (HUD) has a voluntary program for validation of private sector certifications of building products for construction (i.e., that building products comply with designated standards). After testing by government accredited, third-party validating, state/local, or manufacturers’ laboratories and inspection by third parties, products that meet standards must have an authorized mark or label affixed by the manufacturer or a third-party administrator. Currently, 33 third-party administrators participate in the HUD Building Products Certification program for such products as solid fuel-type heaters, fireplace stoves, plastic bathtub units, aluminum windows, storm doors, wood window units, carpet, and lumber, among others. HUD also has a mandatory program requiring third-party certification of manufactured housing designs and quality assurance manuals, as well as in-plant inspection to ensure compliance with standards. 
HUD issues lists of approved third-party agencies. The Occupational Safety and Health Administration (OSHA) implements the Nationally Recognized Testing Laboratory (NRTL) Program. This program recognizes private sector organizations (third-party laboratories) that meet the necessary qualifications specified in program regulations as NRTLs. An NRTL determines that specific equipment and materials meet consensus-based standards of safety to provide the assurance, required by OSHA, that these products are safe for use in United States workplaces. To obtain initial NRTL recognition, an applicant must complete an application and resolve any deficiencies found during an on-site assessment. A preliminary notice is then published in the Federal Register announcing the application for recognition, a 60-day comment period ensues, and (absent compelling reasons to the contrary), a final notice is published formally recognizing the applicant as an NRTL. In another program, OSHA also accredits independent third-party certification agencies for the purpose of certifying maritime vessels’ cargo gear lifting and handling gear and shore-based cargo handling equipment. OSHA maintains a list of accredited certification agencies and surveyors. The certifications are intended to ensure that all covered equipment is in a safe material condition, properly tested, and in compliance with regulatory requirements. Through this program, the United States fulfills its responsibilities under International Labor Organization (ILO) Convention No. 152. Regulations on substance abuse prevention that cover employees in Department of Transportation (DOT)-regulated transportation industries (including aviation, highway, rail, and other transit industries, such as pipelines) include provisions requiring face-to-face evaluation by substance abuse professionals. 
DOT defines these professionals as including, among others, a licensed or certified psychologist or an addiction counselor certified by the National Association of Alcoholism and Drug Abuse Counselors Certification Commission or by the International Certification Reciprocity Consortium/Alcohol & Other Drug Abuse. Testing and/or inspection by a Coast Guard accredited laboratory is mandatory for some equipment required for use on recreational boats and commercial vessels (e.g., equipment used for lifesaving, fire protection, and pollution prevention, as well as other electrical and engineering equipment). Manufacturer self-certification is allowed for selected items. The Coast Guard issues certificates of approval to certain providers of merchant marine courses. Obtaining a certificate is mandatory where required by regulations (in areas such as radar observation, fire fighting, and first aid), but it is otherwise voluntary. Training organizations seeking approval must submit course packages to the Coast Guard’s National Maritime Center, the proposed training facility is inspected by a Coast Guard Regional Examination Center, and instructor qualifications are reviewed. The Coast Guard issues certificates of inspection for certain vessels following satisfactory completion of an inspection by a government body or an organization recognized by the Coast Guard. A 1997 final rule provided for the alternative to Coast Guard inspection by permitting the Coast Guard to issue a certificate of inspection based upon reports by a “recognized, authorized classification society.” The Coast Guard generally establishes the procedures and standards used in inspections, but some are also established by statute or through international conventions and treaties for certain vessels. Other federal agencies, foreign government agencies, and international organizations recognize these certificates. 
The Coast Guard also enforces requirements to ensure the safety of shipping containers used for the international transport of cargo. A third party must certify these containers before they enter into international traffic. The certifications are mandatory under the International Safe Container Act and signify that the containers conform to the International Convention for Safe Containers. Foreign governments and international organizations recognize this certification. Containers must display a Safety Approval Plate from the approval authority in the country of registry. The Federal Aviation Administration (FAA) has a comprehensive system of certifications for the civil aviation system, with coverage ranging across equipment, personnel, and facilities. For example, FAA issues Type Certificates for makes and models of aircraft, aircraft engines, or propellers and grants Airworthiness Certificates for specific aircraft that meet approved type designs and are in condition for safe operation. FAA provides certification for pilots, flight instructors, crew members, mechanics, control tower operators, and other aviation-related personnel. FAA also provides for certification of repair stations, parachute lofts, and schools for pilots and mechanics. FAA operates an Airport Safety and Certification Program and an Airport Lighting Equipment Certification program. It issues certificates of designation and certificates of authority to, among others, aviation medical examiners, examiners of pilots and technical personnel, designated engineering representatives, and manufacturing inspection representatives. Compliance with the FAA certification system is mandatory for civil aviation, and the Department of Defense and the Coast Guard also require that some of their aircraft and equipment be FAA-certified. Most of the applicable design, performance, and quality requirements are specified in the Code of Federal Regulations. 
The International Civil Aviation Organization also sets general guidelines for airworthiness certification systems that FAA implements in the United States. In addition, FAA accepts some nongovernment standards, such as ones developed by the Society of Automotive Engineers, the Radio Technical Commission for Aeronautics, and the Aerospace Industries Association. The Federal Railroad Administration has a number of safety-related certification programs. One mandatory program covers safety glazing of windows for locomotives, passenger cars, and cabooses. Testing of glazing materials to demonstrate compliance with regulatory requirements is done by either manufacturers in their labs or independent labs that meet specified qualifications. Each individual unit of glazing material is permanently marked to indicate certification. A voluntary program administered by DOT’s Research and Special Programs Administration (RSPA) covers packaging of hazardous materials for export. Shippers and container manufacturers can demonstrate conformance of their packaging designs with United Nations’ standards through third-party testing agencies designated by RSPA’s Office of Hazardous Materials Technology. These third-party approval agencies evaluate and issue approval certificates for intermodal portable tanks and certificates of conformance for other types of packaging. RSPA also has a mandatory requirement for third-party certification of railway tank cars used for the transport of hazardous materials. The third parties must be acceptable to the Association of American Railroads (AAR) and the Bureau of Explosives. AAR provides design approval of couplers, which is accepted by DOT. RSPA issues certificates of construction. Another RSPA mandatory program requires registration of all persons or organizations engaged in the manufacture, assembly, inspection and testing, certification, or repair of cargo tanks or cargo tank motor vehicles. 
Manufacturers of special cargo tanks and cargo tank motor vehicles must also obtain an American Society of Mechanical Engineers (ASME) Certificate of Authorization for the use of the ASME “U” stamp. Repairs that are not verified to the ASME Code must have a National Board or ASME Certificate of Authorization. ASME or ASME-designated bodies perform the required testing or inspection. Other federal agencies, state agencies, private sector organizations, and the Canadian government recognize reciprocity with this registration. RSPA requires third-party certification of welders and plastic pipe assemblers to ensure the safety of pipelines for gas and hazardous liquids. The agency also requires manufacturers’ self-certification for valves, pressure-limiting services, and overall installation to specified standards. Certification of welders is usually conducted by the American Welding Society, but a comparable program by the installing contractor may be acceptable to RSPA. The agency adopts the standards of national standards organizations. The Department of Veterans Affairs (VA) requires certification of automotive driving aids and automatic wheelchair lifts for purchases funded by the Department. VA publishes a compliance list that delineates certified suppliers of wheelchair lift systems and hand controls (driving aids). Certification is by a VA-sponsored Automobile Adaptive Equipment Committee. Government testing and inspection, third party government-approved certification (Society of Automotive Engineers), and manufacturers’ self-certification are used to ensure compliance with VA’s standards. VA also accepts certification by other agencies when current standards are applied. VA has similar mandatory requirements for self-propelled and motorized wheelchair purchases funded by the department, again listing suppliers of these products. Certification is by a VA-sponsored Prosthetic Technology Equipment Committee. 
Government testing and inspection, third party government-approved certification (Rehabilitation Engineering and Assistive Technology Society of North America/ANSI), and manufacturers’ self-certification are used to ensure compliance with VA’s standards. VA also accepts certification by other agencies when current standards are applied. In order to ensure standardization and uniformity in laboratory test performance throughout the VA system’s clinical, nuclear medicine, and special purpose ancillary testing laboratories, the department requires third-party certification by the College of American Pathologists (CAP) and the Joint Commission on Accreditation of Healthcare Organizations. The standards applied are those of the CAP Laboratory Accreditation Program, but VA also recognizes certification by the Joint Council of American Hospitals. VA established board certification requirements that, with some exceptions, applied to physicians hired on or after July 1, 1997. Unless they have written approval of the Chief Patient Care Services Officer prior to appointment, these physicians must be board-certified in the clinical specialty area in which they will practice. VA’s Undersecretary for Health specified that the certifying bodies for these purposes are the American Board of Medical Specialties (ABMS) for allopathic physicians and the Bureau of Osteopathic Specialists (BOS) for osteopathic physicians. An Environmental Protection Agency (EPA)-accredited, third-party laboratory must conduct emissions testing for certification of new residential wood heaters and submit the results to EPA. EPA certifies a representative wood heater from the model line, granting certificates valid for 5 years. In another mandatory program, laboratories performing drinking water analysis to demonstrate compliance with regulations must be certified as capable of delivering acceptable performance. 
States seeking to operate a drinking water regulatory program must implement a laboratory certification program based on federal standards. EPA’s regional offices serve as the certifiers in situations where there is no approved state program. Certified laboratories are issued certificates identifying areas of competency. To ensure that pesticides posing relatively high risk, or that are difficult to use, are used only by or under the direct supervision of competent persons, EPA oversees state programs to certify applicators. EPA serves as the certifier of applicators in Colorado. An applicator may not apply restricted-use pesticides until he or she demonstrates competency and receives certification. Under section 609 of the Clean Air Act Amendments of 1990, operators who service motor vehicle air conditioners must be certified under an approved 609 program prior to offering services. EPA restricts the sale of small containers of Class I and Class II substances appropriate for use in motor vehicle air conditioners to certified personnel. Personnel testing is done by private industry programs approved by EPA. Also, recovery and recycling equipment must be approved by EPA and must meet the requirements of the SAE standards for approval. EPA maintains a list of technician certification programs and approved equipment. In addition to those named above, Alan Belkin, John Brosnan, and Victor B. Goddard made key contributions to this report. 
Pursuant to a congressional request, GAO reviewed federal agencies' certification requirements for goods and services, focusing on: (1) the extent and variety of certification activities in the federal government; (2) the extent to which there are policies, procedures, or guidance governing those activities, either governmentwide or within selected agencies; and (3) an agency certification procedure that could serve as an example or best practice for other agencies. GAO noted that: (1) federal agencies engage in a large number and wide variety of certification-related activities; (2) the National Institute of Standards and Technology (NIST) publishes directories listing more than 200 federal government procurement and regulatory programs in which agencies provide or require certification, accreditation, listing, or registration; (3) these directories provide only a partial inventory of agencies' activities because they focus primarily on certifications of products and services and they do not cover individual procurement actions in which agencies require particular certifications; (4) certification activities also vary across multiple dimensions, including the origin of the requirements, their targets, which entities do the certifying, whether the certifications are mandatory or voluntary, and the extent to which there is reciprocity with or recognition of other certifications or requirements; (5) specific guidance regarding the selection of specific requirements or certifying organizations is limited; (6) federal procurement law imposes some limits on agencies' use of certification requirements, restricting the use of certification requirements to instances in which the requirements are specifically imposed by law or the agencies show a particular need and, if possible, allow for alternatives; (7) some agencies have established certification procedures and criteria for individual programs, and agency officials identified some related policies, procedures and 
guidance that can affect their certification activities; (8) there is no governmentwide guidance, or agencywide guidance in the five agencies that GAO reviewed, regarding all types of certification requirements; (9) NIST has prepared draft guidance on conformity assessment activities, including certification, which it plans to issue for public comment; (10) one best practice that GAO has supported in the regulatory arena, transparency of decisionmaking, also appears applicable to certification requirements, particularly given the complexity and diversity of certification activities and organizations; (11) in the certification actions that GAO examined, the criteria that the agencies used to establish a particular requirement or select a particular certifying organization were very clear in some instances but not clear in others; (12) other agencies' certification actions were not as transparent and certification bodies that were not selected raised questions about the criteria that agencies used; and (13) in each of those cases, agency officials were able to provide the rationale for their actions.
DOD program managers obtain technical data and technical-data rights to enable the department to acquire and sustain weapon systems at the lowest cost, to provide flexibility in future acquisition and sustainment of systems and subsystems, and to maintain those systems. DOD may obtain different levels of rights to technical data, including unlimited rights, government-purpose rights, and limited rights. If DOD obtains unlimited rights to technical data, it may provide the data to anyone for any reason. However, if DOD obtains government-purpose rights, it may provide the data to third-party contractors only for activities in which the U.S. government is a party, including competitive reprocurement, but not including commercial purposes. Further, if DOD obtains limited rights, it may only use the data internally and may provide the data to third parties in a limited number of circumstances (e.g., emergency repair and overhaul). Moreover, DOD and contractor maintenance personnel need technical data and technical-data rights in order to maintain, repair, and upgrade weapon systems throughout the life cycle of the systems. The process that DOD program officials follow to acquire technical data and technical-data rights for systems includes four general phases with multiple steps in each phase. In this report, we evaluated aspects of the first phase of this process (see fig. 1). Program officials specify technical-data requirements in solicitations issued to contractors. Contractors' proposals assert any restrictions on DOD's rights to the technical data needed to produce the system. Program officials review and evaluate the proposals, identify areas of disagreement, and may challenge contractors' assertions. When contractors produce the system, they may assert some additional restrictions to technical-data rights, which DOD may challenge. Contractors mark all data they deliver to DOD with the appropriate level of rights, and DOD reviews and evaluates these markings for consistency with DOD policies and agreements in the contract.
DOD may realize whether it has acquired the needed data and rights when it sustains its system. DOD uses the data and rights to maintain, repair, and solicit for sustainment contracts for its system. DOD may challenge data-rights markings within 3 years of contract completion. DOD may also exercise options for additional rights and data that it did not initially acquire if this option is provided for in the contract. Requirements, strategies, and plans phase: Program officials assess the long-term technical data and technical-data rights requirements for their system and then document those requirements in an acquisition strategy and an acquisition plan for their system. To assess a system's technical-data requirements, program officials determine which components DOD will need technical data for and the level of rights to seek for those data. Program officials consider several factors in their assessment, such as the government's cost for the rights to the data, sustainment plans, re-procurement needs, and contractors' economic interest. Once program officials complete their assessment, they record the technical-data requirements in a data-management strategy that is included in the acquisition strategy, a document that is required by DOD Instruction 5000.02. They also include similar documentation in the acquisition plan, which is required by the DFARS. The acquisition strategy describes the overall approach for managing and planning for the program, while the acquisition plan describes the program's contracting approach. The program manager then submits these documents to senior department officials to review and approve at certain major milestones in the defense acquisition process. Contracting phase: Program officials specify the approved technical-data requirements in solicitations they issue to contractors. These solicitations describe the capability requirements for a system that the government intends to acquire.
Contractors then submit proposals to DOD in which they describe the system that they would build to provide the required capability. In the proposals, contractors also discuss technical-data issues. For example, if a contractor desires to assert restrictions on DOD’s ability to use any of the technical data needed to manufacture or sustain the system, the contractor asserts those restrictions in its proposal. Program officials then review and evaluate the contractors’ proposals using criteria included in the solicitation. Officials evaluate any asserted restrictions on DOD’s use of technical data to identify areas of disagreement that the department should resolve through negotiations or other procedures in accordance with applicable law. DOD officials then award a contract. Performance and delivery phase: During this phase, the selected contractor begins producing the system and may assert additional restrictions to technical-data rights in certain circumstances. For example, the contractor may assert new restrictions if the department modifies its system requirements or if the contractor inadvertently omitted a restriction during the contracting phase. DOD officials may also challenge these additional asserted restrictions. Contractors mark all technical data they deliver to the government with a level of rights (e.g., government purpose or limited rights). In addition, program officials review these markings to ensure that the contractor has identified them in a manner that is consistent with DOD policies and the agreement in the contract. Post-performance and sustainment phase: In this phase, the contractor has delivered a system to DOD. DOD officials may realize during post-performance and sustainment whether they have acquired the necessary technical data and technical-data rights during the sustainment phase. When sustaining systems, DOD personnel may use technical data for critical functions including maintaining and repairing systems. 
Any new technical data and technical-data rights that would be needed for any support contracts during the sustainment phase would need to be acquired. Program officials also may challenge the level of rights that the contractor asserted for any delivered technical data that is used to produce the system for up to 3 years after final payment under the contract or 3 years after delivery of the data, whichever is later. Program officials may also exercise options to obtain additional rights and data that the department did not acquire during the performance and delivery phase if DOD and the contractor had included a provision in the contract called a "priced-contract option." For nearly a decade, we and the military-service audit agencies have conducted reviews that included information on DOD's acquisition of technical data and technical-data rights for systems in the acquisition process. In February 2002, we reported that DOD officials expressed concern that they did not have affordable technical data to develop additional or new sources of repair and maintenance to ensure a competitive market. Subsequently, we reported in August 2004 that DOD program managers often opt to spend limited acquisition dollars on increased weapon system capability rather than on acquiring the rights to the technical data—thus limiting their flexibility to perform maintenance work in house or to support alternate source development should contractual arrangements fail. We subsequently reported in July 2006 that the Army and the Air Force encountered limitations in their sustainment options for some fielded weapon systems because they lacked technical-data rights. More recently, we reported in 2010 that the government's lack of access to proprietary technical data, among other things, limits—or even precludes the possibility of—competition for DOD weapons programs.
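The levels of rights described in the background above, and the marking-challenge window of 3 years after final payment or data delivery (whichever is later), lend themselves to a small illustrative model. This is a sketch only: the function names and the simplified release rules are our assumptions for illustration, not DOD's actual determination process, which turns on DFARS clauses and negotiated license terms.

```python
from datetime import date
from enum import Enum, auto


class RightsLevel(Enum):
    """Levels of technical-data rights described in the report."""
    UNLIMITED = auto()           # releasable to anyone, for any reason
    GOVERNMENT_PURPOSE = auto()  # third parties, government activities only
    LIMITED = auto()             # internal use; narrow third-party exceptions


def may_release(rights: RightsLevel, *, government_activity: bool,
                emergency_repair: bool = False) -> bool:
    """Greatly simplified release rule for a given rights level."""
    if rights is RightsLevel.UNLIMITED:
        return True
    if rights is RightsLevel.GOVERNMENT_PURPOSE:
        # e.g., competitive reprocurement, but not commercial purposes
        return government_activity
    # LIMITED: only narrow circumstances such as emergency repair and overhaul
    return emergency_repair


def challenge_deadline(final_payment: date, data_delivery: date) -> date:
    """Latest date to challenge a rights marking: 3 years after final
    payment or 3 years after delivery of the data, whichever is later."""
    later = max(final_payment, data_delivery)
    return later.replace(year=later.year + 3)
```

For example, data delivered under government-purpose rights could be passed to a third-party contractor for a competitive reprocurement but not for a commercial venture, and a marking on data delivered in mid-2012 under a contract paid off in early 2011 could be challenged until mid-2015.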
Additionally, the Air Force and Army audit agencies have reported on issues related to the acquisition of technical data and technical-data rights. For example, in May 2009, the Air Force Audit Agency reported that Air Force program officials had not effectively implemented OSD and Air Force initiatives to improve the management and acquisition of technical-data rights and had not satisfied technical-data assessment requirements. Similarly, the Army Audit Agency reported in July 2009 that (1) Army policies on technical-data assessments and documentation were not incorporated into Army regulations and (2) the Army acquisition workforce had not received training on assessing and managing technical data and technical-data rights requirements and as a result did not consistently address technical data and technical-data rights requirements. We provide more detail in appendix II about the recommendations in these audit agency reports and the services' responses. DOD updated its acquisition and procurement policies, in a manner that reflects a 2007 legislative provision and our 2006 recommendations, to require that acquisition program managers document their long-term technical-data needs. According to DOD officials, these policy updates do not change the requirements program managers must follow to decide what technical data or technical-data rights to acquire for their systems. Section 802 of the 2007 National Defense Authorization Act required the Secretary of Defense to direct program managers for major weapon systems—and subsystems of major weapon systems—to assess the long-term technical-data needs of their systems and establish strategies providing for the technical-data rights needed to sustain the systems over their life cycles.
The 2007 act required, among other things, that the strategies developed in accordance with the section address: the merits of a priced contract option for the future delivery of technical data that were not acquired upon initial contract award, and the potential for changes in the sustainment plan over the life cycle of the system. We had previously recommended that DOD establish these requirements for program managers in our July 2006 report. We recommended these actions after finding that a lack of technical-data rights limited the flexibilities of the Army and Air Force to make changes to sustainment plans for some fielded weapon systems. We also found that delaying action in acquiring technical-data rights can make these data cost-prohibitive or difficult to obtain later in a weapon system’s life cycle. DOD took a series of actions to change its acquisition and procurement policies in a manner that reflects the language of the 2007 act and our 2006 recommendations. As a result of these actions, program managers are now required to record their long-term technical-data needs in two key acquisition program documents: the acquisition strategy and acquisition plan. Initially, OSD issued a memorandum in July 2007 requiring program managers for systems in the two highest-value acquisition categories (ACAT I and II) to assess the long-term technical-data needs for their systems and document a corresponding strategy for technical data in each program’s acquisition strategy. DOD later included this policy change in the December 2008 update of its acquisition policy, DOD Instruction 5000.02. In a separate action, DOD issued an interim rule in September 2007 amending the DFARS. This rule also requires program managers to assess the long-term technical-data needs for their systems and document a corresponding strategy in each program’s acquisition plan. DOD finalized the interim rule in December 2009. 
Together these policy changes required that strategies and plans for major acquisition programs: 1. assess the data required to design, manufacture, and sustain the system as well as to support re-competition for production, sustainment, or upgrade; 2. address the merits of including a priced contract option for future delivery of data not initially acquired; 3. consider the contractor's responsibility to verify any assertion of restricted use and release of data; and 4. address the potential for changes in the sustainment plan over the life cycle of the weapon system or subsystem. OSD officials told us that these policy updates do not change the requirements program managers must follow to decide what technical data or technical-data rights to acquire for their systems. They also told us that the only new requirement was that program managers include documentation of their system's long-term technical-data needs in acquisition strategies and acquisition plans. Moreover, OSD and each military department have issued guides for program managers that elaborate on the requirements in DOD policy for assessing long-term technical-data needs and the updated requirement to document those needs in acquisition strategies and acquisition plans. We discuss these guides in more detail later in this report. The documentation we reviewed for 12 acquisition programs partially addressed the revised DOD policies on long-term technical-data needs. We evaluated these programs' acquisition strategies and acquisition plans against four criteria identified in the revised technical-data policies (described earlier in more detail). These policies require programs to document (1) an assessment of technical-data requirements, (2) the merits of a priced-contract option, (3) the contractor's responsibility to verify assertions of limited data rights, and (4) the potential for changes in the system's sustainment plan. We examined program acquisition strategies for the first three requirements.
We reviewed program acquisition plans for the fourth requirement because the requirement was not included in the revised acquisition policy that governs acquisition strategies but was included in the procurement-policy update, which governs acquisition plans. As a part of our review, we did not consider the amount or quality of the information that the acquisition strategies and acquisition plans included in response to each requirement because DOD's policies did not specify the minimum levels or types of information that program officials are required to include to satisfy each of the four requirements. Programs in our sample included varying amounts of information in response to each requirement they addressed. For example, one acquisition strategy contained a 95-page appendix on technical-data management while another contained three paragraphs focusing on technical data. If a strategy or plan included any discussion of a requirement, we determined that the strategy or plan addressed that requirement, regardless of the level of detail. Figure 2 summarizes the results of our analysis and shows that 10 of the 12 programs that we evaluated addressed at least one of the four requirements in their documentation, and four addressed as many as three requirements. However, none of the programs addressed all four of the requirements in their documentation, and two did not address any of the requirements.
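The tallying method described above (a program "addresses" a requirement if its documentation discusses it at any level of detail) can be sketched as a simple checklist. The program names and checkmarks below are invented for illustration; they are not the actual review results.

```python
# The four requirements identified in the revised technical-data policies.
REQUIREMENTS = (
    "assessment of technical-data requirements",
    "merits of a priced-contract option",
    "contractor's responsibility to verify assertions",
    "potential for changes in the sustainment plan",
)

# For each (hypothetical) program, the set of requirements its
# documentation discussed at any level of detail.
programs = {
    "Program A": {"assessment of technical-data requirements",
                  "merits of a priced-contract option"},
    "Program B": set(),
    "Program C": {"assessment of technical-data requirements",
                  "contractor's responsibility to verify assertions",
                  "potential for changes in the sustainment plan"},
}


def tally(addressed: set) -> int:
    """Count how many of the four requirements were addressed at all;
    depth of discussion is deliberately ignored, as in the review."""
    return sum(req in addressed for req in REQUIREMENTS)


counts = {name: tally(reqs) for name, reqs in programs.items()}
fully_addressed = [name for name, c in counts.items()
                   if c == len(REQUIREMENTS)]
```

With these invented inputs, no program addresses all four requirements, mirroring the pattern the review found across the 12 real programs.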
Assessments of technical-data requirements: Nine of the 12 acquisition strategies documented an assessment of the data required to design, manufacture, and sustain the system as well as support re-competition for production, sustainment, or upgrade of the system, for example: The Integrated Air and Missile Defense strategy included an appendix that, among other things, stated that the program office would require delivery of sufficient data to completely describe and define the functional and physical characteristics of the system for manufacturing, and it also provided a list of required types of data. The strategy for the Navy Multiband Terminal stated that the program manager had "assessed the long-term technical-data needs" of the system and "established acquisition strategies that provide for technical data" and "associated license rights needed to sustain [the systems] over their life cycle and allow for competitive procurement of future terminals." The three strategies that did not address the requirement did not identify any required data. Merits of a priced-contract option: Four of the 12 acquisition strategies discussed the merits of a priced contract option—an option to obtain additional data and rights that the program did not acquire during the contracting phase, for example: The Small Diameter Bomb II strategy stated that the contract "will contain a priced contract option…for a one-time delivery of a technical-data package" that would consist of data "that describes the design, support, test, and maintenance" of the system, and the models, simulation and analysis used to predict its performance. The strategy for the Joint High Speed Vessel stated that due to "the non-developmental nature of the program, a priced option…was not considered a cost-effective use of government funds." The eight other strategies did not discuss the merits of a priced contract option for technical data.
Contractor’s responsibility to verify data assertions: Three of the 12 acquisition strategies referred to the contractor’s responsibility to verify any assertion that the contractor made to restrict the government’s use and release of any technical data. Each of the three strategies noted that the program planned to include a clause in its contracts that identifies the contractor’s responsibility to provide sufficient information to the government’s contracting officers to enable them to evaluate the contractor’s assertions. While nine strategies did not discuss the contractor’s responsibility to verify assertions of restricted use and release of technical data or mention the contract clause, a number of these strategies discussed the program office’s efforts or responsibility to verify contractor assertions of restricted use and release of data. For example, the B-1 Bomber Radar Reliability and Maintainability strategy discussed the program office’s efforts to verify the contractor’s assertion of restricted use and release of data. Potential for sustainment changes: Four acquisition plans addressed the potential for changes in the system’s sustainment plan over its life cycle, and the acquisition plans for two other programs were not subject to this requirement, for example: The Joint High Speed Vessel acquisition plan stated that the “potential for changes in the sustainment plan is small.” Two of the 12 programs in our sample were not subject to this requirement. The requirement did not apply to the Joint Battle Command-Platform and Navy Multiband Terminal because both programs developed acquisition plans prior to the September 2007 procurement policy change on technical data and neither was required to update its plan. Addressing the potential for changes in the system’s sustainment plan over its life cycle is required for acquisition plans developed or updated after DOD’s 2007 revision to its procurement policy. 
The six acquisition plans that did not address this requirement did not discuss the potential for future changes in the sustainment plan as they relate to technical-data needs. Later in the report, we note that (1) one cause of the partially addressed documentation is ambiguity in DOD's revised policies and (2) this ambiguity limits department decision makers' ability to exercise effective internal control in their reviews of acquisition documentation, which may result in delays in the acquisition process. Because these issues are related to a similar ambiguity in another technical-data policy, we provide a more detailed discussion of the causes and effects for both types of problematic outcomes later in this report. In the next section of our report, we describe OSD and military department guides that discuss additional voluntary steps that program managers may take for conducting and documenting assessments of long-term technical-data needs. These guides may result in acquisition documentation that is more responsive to DOD's revised policies. However, most of the guides we describe were issued after most of the acquisition documentation we reviewed was approved. OSD and each military department have issued several guides for program managers that elaborate on the requirements in DOD policy for conducting and documenting assessments of long-term technical-data needs. From December 2009 through December 2010, DOD and the military departments issued guides covering voluntary actions that program managers might take to improve their decisions related to technical data. While officials in DOD and the military departments told us that program officials have found the various DOD-wide and military department-specific guides useful, program managers are not required to follow any of the recommendations contained in the guides.
In December 2009, OSD updated the Web-based Defense Acquisition Guidebook to elaborate on the new requirements for program managers to document the long-term technical-data needs for their systems. The DOD-wide guidebook now includes topics that OSD recommends that program managers discuss in their acquisition strategy documenting the system's long-term technical-data needs. For example, the guidebook recommends that for data acquired to support competition, the program manager document the (1) logic applied to select the technical data and technical-data rights, (2) alternative solutions considered, and (3) criteria used to decide what, if any, data to procure. Subsequent to the changes in the DOD-wide guidebook, the military departments provided their own additional guides. The Air Force Program Management and Acquisition Excellence Office in December 2010 issued an update to a guide for program managers that includes recommended steps to follow when determining a system's long-term technical-data needs and documenting those needs in a data-management strategy. For example, the guide suggests that program managers consider whether Air Force depot officials agree that the technical data and technical-data rights that the program intends to acquire for the system are sufficient to enable depot-level maintenance. Separately, in October 2010, the Air Force's Product Data Acquisition Team launched a technical-data-focused Web site that includes some of the same information contained in the earlier Air Force guide and additional information. For example, the Web site asks program managers whether the technical-data rights they intend to acquire enable the Air Force to support competition for contracts for spare parts, equipment to upgrade to a system, and logistics support.
The Army's Product Data and Engineering Working Group in August 2010 published a 68-page guide that describes steps it recommends program officials take to assess a system's long-term technical-data needs and document those needs in a data-management strategy. The Army's guide contains a worksheet that provides program managers with a systematic approach to assess their technical-data needs. For example, for each component of a system, the worksheet prompts program managers to consider the (1) level of rights required, (2) expected levels of rights the Army will acquire in negotiations with a manufacturer, (3) any gaps between the requirements and expected negotiated outcomes, (4) plans to close any gaps, and (5) risks associated with those plans. The Navy in June 2010 published a set of guidelines that it recommends program managers follow when they determine their systems' technical-data rights. The Naval Open Architecture Enterprise Team included these guidelines in an appendix to a contracting guidebook. Like the Air Force and Army resources, the appendix lists questions that the team recommends program managers consider when conducting a technical-data rights assessment. For example, the appendix asks whether the government will obtain government-purpose rights at a minimum for a system and asks for a justification for agreeing to more restrictive rights than government-purpose rights. In addition to these department-level guides, some subordinate commands within military departments have issued guidance on technical-data assessments. For example, Air Force Materiel Command issued a handbook on technical-data rights in May 2010, while the Air Force's Space and Missile Systems Center issued the third edition of a similar guide in January 2011. By issuing their own guidance, these subordinate commands are able to focus on issues of technical data particular to the command in question.
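The worksheet logic the Army guide suggests (compare the required level of rights against the level expected from negotiations, flag any gap, and record a plan and risk for each gap) could be sketched as below. The field names, the numeric ordering of rights levels, and the example rows are our assumptions for illustration, not the guide's actual format.

```python
from dataclasses import dataclass

# Rights levels ordered from most to least permissive, so a simple
# comparison can flag a gap. Labels are illustrative, not the Army's.
LEVELS = {"unlimited": 3, "government_purpose": 2, "limited": 1}


@dataclass
class ComponentRow:
    """One row of a worksheet in the spirit of the Army guide's approach."""
    component: str
    required_rights: str   # level of rights the program needs
    expected_rights: str   # level expected from negotiations
    gap_plan: str = ""     # plan to close any gap
    risk: str = ""         # risk associated with that plan

    def has_gap(self) -> bool:
        """A gap exists when negotiations are expected to yield a less
        permissive level of rights than the program requires."""
        return LEVELS[self.expected_rights] < LEVELS[self.required_rights]


rows = [
    ComponentRow("radar assembly", "government_purpose", "limited",
                 gap_plan="negotiate a priced option", risk="cost growth"),
    ComponentRow("chassis", "limited", "limited"),
]
gaps = [row.component for row in rows if row.has_gap()]
```

Walking the worksheet row by row in this way makes it explicit which components still lack the rights the program needs before contract award.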
While OSD and each of the military departments took actions to help program managers prepare technical-data assessments, DOD has not clarified ambiguities in the required technical-data policies to ensure their full implementation. Specifically, DOD has not clarified how program offices should address the requirement for documenting technical-data assessments, and has not clarified a recent requirement to conduct a business-case analysis on technical-data needs. Without internal controls such as clear instructions on how to respond to these policies, DOD and the military departments risk incomplete and inconsistent actions and documentation in response to the technical-data requirements. According to standards for internal control, implementing effective internal controls is a key factor that helps organizations ensure that management’s directives are carried out. Examples of internal control actions that management can take include issuing policies or instructions that enforce management’s directives. We found that the revisions to DOD’s acquisition and procurement policies, which require acquisition program managers to document their long-term technical-data needs, are unclear. For example, the revised DOD Instruction 5000.02 requires program managers to document an assessment of long-term technical-data requirements for their systems. However, the policy does not clearly state the level of detail program managers are required to document, or the extent to which they should document their reasoning for acquiring or not acquiring technical data and technical-data rights. Likewise, the DFARS requires programs to address in program documentation the potential for changes in the sustainment plan over the system’s life. 
However, the policy does not make clear what information DOD expects to be provided in documentation of possible future changes to a system's sustainment plan (for example, underlying assumptions), and how this information should relate to the technical-data discussion. Our previously discussed evaluation of 12 acquisition strategies and plans—most of which were approved before OSD and the military departments issued their voluntary guides—showed that program managers may not fully understand how to respond to these revised policies. As we noted, we found that eight of the 12 acquisition strategies and plans we reviewed addressed no more than two of the four requirements (see fig. 2). OSD had not issued an update to the DOD Instruction 5000.02 or the DFARS as of April 2011 to clarify what programs specifically need to do to address the assessments of technical data. OSD officials acknowledged to us that the policies could be rewritten for greater clarity, and they pointed out ambiguities in some of the requirements. For example, they told us that the technical-data assessment requirement is unclear. The officials told us that if they had the opportunity, they would clarify the requirement to state that program managers (1) assess the data that are needed to re-compete for production, sustainment, or upgrade, and (2) determine what, if any, of that technical data the program requires. Ambiguity in the revised policies limits department decision makers' ability to exercise effective internal control in their reviews of acquisition documentation. Without clear policies on documenting long-term technical-data needs, program managers may not understand how to respond and, as a result, may continue to submit incomplete acquisition documentation.
Without complete documentation, senior-level department decision makers are limited in their ability to carry out their internal control responsibilities to ensure that programs are aligned with department policies and priorities. An August 2010 memorandum from the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics called attention to this limitation, stating that recent acquisition strategies often did not include sufficient detail on topics including technical-data requirements. The memorandum stated that future acquisition strategies submitted that did not provide all of the required information would be delayed. Delays in the acquisition process can, in turn, hinder DOD's ability to provide needed materiel to the warfighter. OSD recently added a requirement that program managers conduct a business-case analysis as part of their assessment to determine the long-term technical-data needs for their systems; however, DOD has not issued policy or other internal controls that describe how to conduct this analysis. In November 2010, the Under Secretary of Defense for Acquisition, Technology and Logistics issued a memorandum that requires program officials to take a number of actions to improve efficiency and productivity in defense spending. Among other things, the memorandum requires program managers for all acquisition programs to (1) conduct a business-case analysis that outlines the technical-data rights the government will pursue to ensure competition and (2) include the results of this analysis in acquisition strategies at Milestone B. According to OSD officials, a business-case analysis would require program managers to determine whether the benefits of acquiring technical data are worth the costs of acquiring them. Prior to this memorandum, a formal cost-benefit analysis was not required for technical-data decisions.
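A business-case analysis of this kind reduces, at its simplest, to weighing the cost of acquiring data and rights against the expected benefits of holding them. The sketch below illustrates that comparison; the field names and figures are invented, since DOD has not specified what elements such an analysis must contain.

```python
from dataclasses import dataclass


@dataclass
class DataRightsOption:
    """One alternative in a simplified cost-benefit comparison.
    The fields are illustrative, not a prescribed DOD format."""
    name: str
    acquisition_cost: float              # price of the data/rights package
    expected_sustainment_savings: float  # e.g., from enabling competition

    def net_benefit(self) -> float:
        return self.expected_sustainment_savings - self.acquisition_cost


def best_option(options: list) -> DataRightsOption:
    """Pick the alternative with the greatest net benefit."""
    return max(options, key=DataRightsOption.net_benefit)


options = [
    # Hypothetical figures; the report notes some contractors have quoted
    # over $1 billion for a technical-data package.
    DataRightsOption("buy full data package", 1_000_000_000, 1_400_000_000),
    DataRightsOption("limited rights only", 0, 200_000_000),
]
chosen = best_option(options)
```

Even this toy comparison shows why senior reviewers need the assumptions behind the savings estimates documented: the decision flips entirely if the expected sustainment savings are overstated.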
As of January 2011, DOD officials told us that no acquisition program had yet completed this analysis because no program had reached Milestone B since the Under Secretary issued the memorandum. Therefore, we could not evaluate an analysis conducted in response to this new requirement. Since establishing the requirement in its November 2010 memorandum, OSD had not issued policy or other internal controls, as of April 2011, that describe how to conduct the business-case analysis or what information to report in the acquisition strategy. The Under Secretary of Defense for Acquisition, Technology and Logistics stated in the memorandum that the department would take additional actions in support of the memorandum. However, OSD officials told us that they have not decided whether to issue additional clarifying policy to instruct program managers on how to conduct the analysis or what information about the results of the analysis they should include in acquisition strategies. We have previously reported that the military services inconsistently completed similar business-case analyses when DOD had not issued instructions on how to conduct them. In 2008, we found that DOD had not issued a policy instructing program managers on the elements to include in the documentation of the analyses that program managers conducted for decisions on performance based logistics arrangements—a DOD approach to providing support to weapon systems. As a result, program staff conducted business-case analyses that were inconsistent and missing one or more elements recommended by a DOD instruction on economic analyses. We found that DOD officials implemented the performance based logistics arrangements for the sample of programs we reviewed without the benefit of sound and consistent analyses. Among other things, we recommended that DOD clearly define specific criteria for these analyses in DOD policy. DOD partially agreed with our recommendation. 
To address our recommendation, in April 2011, the Principal Deputy Assistant Secretary of Defense for Logistics and Materiel Readiness issued the Product Support Business-Case Analysis Guidebook. Because OSD has not issued policy instructing program managers on how to conduct and document the analyses, program managers may conduct incomplete or inconsistent analyses and report inconsistently on important elements of the analyses and findings. Similar to the situations we described in our 2008 report, program managers may not include key required elements of business-case analyses, such as assumptions, feasible alternatives, and costs and benefits that support their technical-data decisions. In addition, because OSD has not issued policy instructing program managers on how to report on the results of these analyses, program managers may not provide the information that senior leaders in DOD and the military departments need in order to decide whether to approve the acquisition programs at major milestones in the acquisition process. Technical-data decisions can be costly, with some prime contractors quoting a price in excess of $1 billion for technical-data packages. Thus, decision makers need sufficient details to conduct their reviews and make fully informed decisions. The November 2010 memorandum demonstrates that this negative effect already exists for technical-data-related requirements. DOD has taken meaningful actions that could lead to an increased focus on technical data in defense acquisition—actions that may help DOD improve effectiveness and cost efficiency when acquiring and sustaining its weapon systems. DOD has reflected congressionally mandated and GAO-recommended changes in updated policies to emphasize the importance of discussing and documenting assessments of technical data and data rights in acquisition documentation, but program officials could benefit from additional clarifications to these policies. 
If DOD does not clarify the level and type of detail required in these updated policies, program managers may continue to inconsistently include the needed information. Furthermore, senior department officials may delay approving these acquisition strategies at major milestone reviews. Delays at major acquisition milestones could postpone the department’s effort to provide needed materiel to the warfighter. Moreover, DOD has required that program managers conduct a business-case analysis to weigh the costs of access to technical data for DOD’s systems against the benefits of acquiring these data. This recently required step may add rigor to decisions to acquire technical data that program managers make early in the process. However, in the absence of DOD-wide instructions to program managers on how to conduct these analyses, program officials may conduct analyses that exclude key elements and therefore do not support optimal decision making for rights to technical data that can cost $1 billion or more. Delay in issuing implementing instructions to program managers for the business-case analysis could slow DOD’s and the military departments’ efforts to answer the Under Secretary of Defense’s call to take a more aggressive approach to finding efficiencies and reducing DOD’s spending where possible in order to better afford its future weapon systems. To establish effective internal controls over technical-data policies that improve DOD’s ability to efficiently and cost-effectively acquire and sustain weapon systems over their life cycles, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics to take the following two actions: Issue updates to the acquisition and procurement policies that clarify requirements for documenting long-term technical-data requirements in program acquisition strategies and acquisition plans.
Among other things, DOD should clarify the level and type of detail required for acquiring technical data and technical-data rights that should be included in acquisition strategies and acquisition plans. Issue instructions for program managers to use when conducting business-case analyses that are part of the process for determining the levels and types of technical data and technical-data rights needed to sustain DOD’s systems. The instructions should identify the elements to be included in the analyses and the types of information to be documented in reports on the analyses. In written comments on a draft of this report, DOD concurred with our two recommendations. The department’s written comments are reprinted in their entirety in appendix III. DOD also provided technical comments that we have incorporated into this report where applicable. In response to our recommendation that DOD issue updates to the acquisition and procurement policies that clarify requirements for documenting long-term technical-data needs in program acquisition strategies and acquisition plans, DOD stated that it planned to issue a clarification this calendar year. In response to our recommendation that DOD issue instructions for program managers to use when conducting business-case analyses for technical-data decisions, the department stated that it planned to issue guidance this year related to this recommendation. As we agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of the report until 30 days from the report’s date. At that time, we will send copies of this report to the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; and the Under Secretary of Defense for Acquisition, Technology and Logistics. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
To evaluate the extent to which the Department of Defense (DOD) updated its acquisition and procurement policies to reflect certain technical-data-related provisions of the National Defense Authorization Act for Fiscal Year 2007 and GAO’s 2006 recommendations, we reviewed the law, our recommendations, and a variety of documents related to the context of the act and recommendations. We reviewed DOD and military department regulations governing technical-data acquisition and technical-data-related reports issued by GAO and DOD. We compared changes that the department made to its acquisition and procurement policies to respond to the law and our recommendations. Specifically, we analyzed the following policies: (1) a memorandum issued by the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics (OSD), Data Management and Technical Data Rights (July 19, 2007); (2) DOD Instruction 5000.02, Operation of the Defense Acquisition System enclosure 12(9) (Dec. 8, 2008); and (3) the Defense Federal Acquisition Regulation Supplement (DFARS) 207.106 (S-70). We also used information from this evaluation of the policies and their requirements in the analyses we conducted for our other objectives. To obtain DOD’s perspective on changes to these policies as well as information for all three of our objectives, we interviewed officials in a variety of organizations including OSD, acquisition headquarters for each military department, selected program executive offices, and the acquisition programs in our sample. Table 1 lists the organizations we contacted to conduct interviews and obtain documents related to the acquisition of technical data. We also reviewed information in databases in the DOD Office of the Inspector General and GAO that record actions DOD took to implement our recommendations. 
To evaluate actions DOD and the military departments took to implement additional legislative provisions and audit recommendations related to technical data (that we describe in app. II), we evaluated the requirements in the relevant legislation or the actions called for in the relevant recommendations. We then obtained and analyzed key documentation, such as updates DOD made to the DFARS to implement section 202 of the Weapon Systems Acquisition Reform Act of 2009. To evaluate the extent to which selected defense acquisition programs adhered to the updated requirements in DOD policy to document their systems’ long-term technical-data needs, we selected a non-generalizable sample of 12 acquisition programs from a population of about 50 programs. To draw the sample, we asked the three military departments to identify all acquisition programs at the two highest-value acquisition categories (ACAT I or II) that had reached the first three acquisition milestones—A - Materiel Solution Analysis, B - Technology Development, and C - Engineering and Manufacturing Development—between September 2007 and August 2010. We chose September 2007 because this was the first point at which both sets of requirements for documenting long-term technical-data needs were in effect, and we chose August 2010 because we selected our sample at that time. Because too few Marine Corps programs reached one of these milestones during this period, we excluded this service from our evaluation. To draw the sample, we selected four programs from each department, balancing the ACAT levels and milestones. We then selected those programs that had most recently, at the time we drew our sample, reached their respective milestones. We used this approach because DOD updated its policies in 2007, and we wanted to allow as much time as possible for the military departments to develop methods to respond to the requirements in DOD’s updated policies. 
Although findings from this sample are not generalizable to all DOD acquisition programs, the variety of circumstances that programs in our sample face can illustrate important aspects of documenting a system’s long-term technical-data needs. Our sample includes a variety of acquisition programs, including new systems (e.g., Joint High Speed Vessel), modifications to existing systems (e.g., C-130 Avionics Modernization Program), and systems that were primarily software (e.g., Joint Battle Command-Platform). After completing our sample selection, we analyzed the content of each program’s acquisition strategy and acquisition plan, which are required to document the program’s long-term technical-data needs. To conduct these analyses, we compared each program’s acquisition strategy and acquisition plan against certain criteria from the July 2007 memorandum from the Under Secretary of Defense for Acquisition, Technology and Logistics, DOD Instruction 5000.02, and the DFARS. We could not compare the acquisition strategies and plans in our sample to the voluntary guides that OSD and the military departments issued in 2009 and 2010 because the guides were issued after the majority of programs in our sample had completed their acquisition milestone documentation. Two team members concurrently conducted independent analyses of the same documentation. We then compared the two sets of observations and reconciled any differences with the assistance of a third analyst, when necessary. We also provided our preliminary observations of each strategy to officials in each program and considered additional information they provided when our observations indicated that the program had not addressed one or more of the requirements. To evaluate the extent to which DOD has taken actions to improve decision making by program managers on the long-term technical-data needs for systems in the acquisition process, we identified recent steps the department has taken. 
We interviewed officials in a variety of offices including OSD and the acquisition headquarters offices for each military department. We interviewed the officials responsible for implementing any steps DOD took, and we obtained and evaluated supporting documentation (e.g., the Defense Acquisition Guidebook and guides issued by each military department). The officials we interviewed represent organizations such as the Army’s Product Data and Engineering Working Group and the Air Force’s Product Data Acquisition Team. For each objective, we assessed the reliability of the data we analyzed by reviewing existing documentation related to the data sources and interviewing knowledgeable agency officials about the data that we used. We found the data sufficiently reliable for the purposes of this report. We conducted this performance audit from May 2010 to May 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The information in this appendix supplements the information we provided elsewhere in this report. The legal requirements, audit recommendations, and the Department of Defense (DOD) and military department implementation actions in this appendix are more narrowly focused than those reviewed earlier in this report. Together, the two sets of mandated actions, recommendations, and response actions provide additional information about technical-data-related requirements. 
To evaluate the actions DOD has taken to implement the technical-data requirements of the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009 and report recommendations by the Army and Air Force Audit Agencies, we identified and evaluated the requirements and recommendations. We then interviewed relevant DOD and Air Force and Army officials and obtained key documentation such as updates DOD made to the Defense Federal Acquisition Regulation Supplement to implement section 202 of the Weapon Systems Acquisition Reform Act of 2009. Section 822 of the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009 requires DOD to take two technical-data-related actions in the acquisition process. Table 2 lists these two legislative requirements and DOD’s response to each. The Army Audit Agency and the Air Force Audit Agency recently issued reports that address the acquisition of technical data. Table 3 lists these reports, their recommendations, and the military departments’ responses to the recommendations. Key contributors to this report were Carleen Bennett, Assistant Director; Larry Bridges; Simon Hirschfeld; Amber Keyser; James P. Klein; Katherine Lenane; Richard Powelson; Michael Silver; and Ryan Starks. Federal Contracting: Opportunities Exist to Increase Competition and Assess Reasons When Only One Offer Is Received. GAO-10-833. Washington, D.C.: July 26, 2010. Defense Management: DOD Needs Better Information and Guidance to More Effectively Manage and Reduce Operating and Support Costs of Major Weapon Systems. GAO-10-717. Washington, D.C.: July 20, 2010. Defense Logistics: Improved Analysis and Cost Data Needed to Evaluate the Cost-effectiveness of Performance Based Logistics. GAO-09-41. Washington, D.C.: December 19, 2008. Weapons Acquisition: DOD Should Strengthen Policies for Assessing Technical Data Needs to Support Weapon Systems. GAO-06-839. Washington, D.C.: July 14, 2006. 
Defense Management: Opportunities to Enhance the Implementation of Performance-Based Logistics. GAO-04-715. Washington, D.C.: August 16, 2004. Defense Logistics: Opportunities to Improve the Army’s and the Navy’s Decision-making Process for Weapons Systems Support. GAO-02-306. Washington, D.C.: February 28, 2002.
Some of the Department of Defense's (DOD) weapon systems remain in the inventory for decades. Therefore, decisions that program officials make during the acquisition process to acquire or not acquire rights to technical data, which may cost $1 billion, can have far-reaching implications for DOD's ability to sustain and competitively procure parts and services for those systems. DOD needs access to technical data to control costs, maintain flexibility in acquisition and sustainment, and maintain and operate systems. In response to a congressional request, GAO reviewed the extent to which: (1) DOD has updated its acquisition and procurement policies to reflect a 2007 law and 2006 GAO recommendations; (2) selected acquisition programs adhered to requirements to document technical-data needs; and (3) DOD took actions to improve technical-data decisions by program managers. GAO interviewed DOD officials, reviewed acquisition strategies and acquisition plans from 12 programs, and compared those documents to relevant DOD policies. DOD updated its acquisition and procurement policies to require that acquisition program managers document their long-term technical-data needs in a manner that reflects a 2007 law and GAO's 2006 recommendations. Together these policies require documentation of: (1) an assessment of technical-data requirements, (2) the merits of a "priced-contract option" that enables DOD to obtain additional technical data that it did not acquire in its initial contract, (3) the contractor's responsibility to verify its assertions of limits to DOD's ability to use the technical data, and (4) the potential for changes in the system's sustainment plan. According to DOD officials, these policy updates do not require changes to the way program managers assess technical-data needs. Sampled acquisition programs partially addressed the four updated technical-data-documentation requirements. 
Ten of the 12 programs GAO reviewed addressed at least 1 of the 4 requirements in their acquisition strategies and acquisition plans; however, none of the programs addressed all 4 of the requirements. Specifically, 9 of the 12 strategies documented an assessment of their technical-data requirements. For example, the strategy for a Navy communications system stated that the program planned to obtain technical data and associated rights to sustain the system over its life cycle and allow for competitive procurement of future systems. In contrast, 3 of the 12 strategies documented the contractor's responsibility to verify its assertions of limits to DOD's ability to use the technical data. Each of the three strategies noted that the program planned to include a clause in its contracts that identifies the contractor's responsibilities. DOD has issued guides—that are voluntary for the program managers to use—to improve technical-data decision-making. These guides may help program managers with decisions and documentation on technical data. However, DOD technical-data policies remain unclear. Effective internal controls help organizations implement their directives. GAO found that, because DOD has not issued clarifications to its policy, DOD policies that require documentation of long-term technical-data needs are unclear. As a result, acquisition strategies have not always documented required information on technical data—a point the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics recently emphasized. Because of the ambiguity in the policies, DOD's ability to implement effective internal control over those policies is limited. Moreover, DOD recently added a requirement that program managers conduct a business-case analysis for systems' long-term technical-data needs. However, DOD has not issued policy or other internal controls that describe how to conduct this analysis. 
GAO has previously reported that the military services inconsistently completed similar business-case analyses because DOD had not issued instructions on how to conduct them. Without instructions that describe how to conduct the business-case analysis, senior acquisition decision makers may not receive the information they need to decide whether to approve programs at major milestones in the acquisition process. GAO recommends that DOD (1) update policies to clarify its technical-data documentation requirements and (2) instruct program managers on the elements to include and the information to report for technical-data business-case analyses. DOD concurred with GAO's recommendations.
State child welfare systems consist of a complicated network of policies and programs designed to protect children. Today, these systems must respond to growing numbers of children from families with serious and multiple problems. Many of these families also need intensive and long-term interventions to address these problems. With growing caseloads over the past decade, the systems’ ability to keep pace with the needs of troubled children and their families has been greatly taxed. In addition, the continued growth in caseloads expected over the next few years will give child welfare agencies little relief. When parents or guardians are unable to care for their children, state child welfare agencies face the difficult task of providing temporary placements for children while simultaneously working with a wide array of public and private service providers, as well as the courts, to determine the best long-term placement option. The permanency planning process is guided by federal statute and typically occurs in stages requiring considerable time. Finding an appropriate placement solution is extremely difficult because it often involves numerous steps and many different players. In each case, states must make reasonable efforts to prevent the placement of a child in foster care. If the child must be removed from the home, states are required under the Adoption Assistance and Child Welfare Act to take appropriate steps to make the child’s safe return home possible. Once removed, if reunification with the parents cannot be accomplished quickly, a child will be placed in temporary foster care while state child welfare agencies and community service providers continue to work with the parents in hope of reunification. To be eligible for federal funding, the state must demonstrate to the appropriate court that it has made reasonable efforts to prevent out-of-home placement and to reunify the family. 
Federal law further requires that placement be as close as possible to the parent’s home in the most family-like setting possible. To guide the permanency planning process by which a state is to find permanent placements for foster care children, the act also requires that the state develop a case plan for each child within 60 days of the time the state agency begins providing services to the child. This plan must describe services to be provided to aid the family and must outline actions that will be expected of various agencies and family members to make reunification possible. States are then required to hold reviews every 6 months before a court or administrative panel to evaluate progress made toward reaching a permanency goal. If progress toward reunification cannot be made, state agencies often face the arduous task of either preparing a case for the termination of parental rights or finding a long-term foster care placement. The federal requirement of conducting a permanency hearing within 18 months serves to ensure that child welfare agencies focus on determining a permanent placement, including return to the family or adoption, in a timely manner rather than continuing a child in foster care. For abused and neglected children, living with their parents may be unsafe. Yet foster care is not an optimal situation, especially not as a permanent solution. State child welfare agencies and the courts are confronted with the dilemma of whether to reunite families as quickly as possible or keep the children in foster care with the expectation of future reunification. They must also determine at what point to abandon hope of reunification, terminate the parents’ rights, and initiate a search for an adoptive home or other permanent placement for the child. If children are reunited with their families too quickly, they may return to foster care because the home environment may still be unstable. 
On the other hand, when children remain in foster care too long, it is difficult to reestablish emotional ties with their families. Furthermore, the chances for adoption can be reduced because the child is older than the most desirable adoption age or has developed behavioral problems. Determining an appropriate placement option for children quickly is of twofold importance. First, finding permanent placements for children removed from their families is critical to ensure their overall well-being. Children without permanent homes and stable caregivers may be more likely to develop emotional, intellectual, and behavioral problems. A second reason for placing children more quickly is the financial cost of children remaining in foster care. The federal share of the average monthly maintenance payment for title IV-E was $574 in 1996. While some options for permanent placements, such as providing long-term support to a relative to care for a child, may not realize cost savings, other options, such as adoption, will reduce foster care costs. Title IV-E payments, between fiscal years 1984 and 1996, increased from $435.7 million to an estimated $3.1 billion. The prolonged stays of children in foster care have prompted states to enact laws or policies that shorten the time between entry into foster care and the first permanency hearing, at which permanent placement is considered, to less than the federally allowed 18 months. As shown in figure 1, 23 states have enacted such laws, with a majority of these requiring the hearing to be held within 12 months. In two states, the shorter time frame applies only to younger children. Colorado requires the permanency hearing be held within 6 months for children under 6, and Washington requires the hearing to be held within 12 months for children 10 years old or younger. An additional three states, while not enacting such statutes, have policies requiring permanency hearings earlier than 18 months. 
For a description of the 26 state statutes, policies, and time requirements, see appendix II. The remaining 24 states and the District of Columbia have statutes consistent with the federal requirement of 18 months. The state laws, like federal law, do not require that a final decision be made at the first hearing. Ohio and Minnesota, however, do require that a permanency decision be determined after a limited extension period. Ohio, for example, requires a permanency hearing to be held within 12 months, with a maximum of two 6-month extensions. At the end of that time, a permanent placement decision must be made. According to officials in Ohio’s Office of Child Care and Family Services, this requirement was included in an effort to expedite the permanency planning process and reduce the time children spend in foster care. However, state officials also believed that this requirement may have had the unintended result of increasing the number of children placed in long-term foster care because other placement options could not be developed. State data, in part, confirmed this observation. While long-term foster care placements for children supported with state-only funds dropped from 1,301 in 1990 to 779 in 1995, long-term placements for children supported with federal funds rose from 1,657 to 2,057 for the same period. The reasons for the difference between these two groups are unknown. Although the states we reviewed did not systematically evaluate the impact of their initiatives, they implemented a variety of operational and procedural changes to expedite and improve the permanency process. Other efforts made changes to the operation of the courts and the use of resources available to them for making permanency decisions. 
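The timing rules described above (the federal 18-month default, Colorado's and Washington's age-based limits, and Ohio's capped extensions) can be expressed as simple calculations. The following is an illustrative sketch, not part of the report; the function names are invented for the example, and only the figures cited above are used:

```python
# Illustrative sketch of the permanency-hearing timing rules described in the
# report. Deadlines are expressed in months after a child's entry into
# foster care. Function names are hypothetical, for illustration only.

FEDERAL_LIMIT_MONTHS = 18  # federal requirement for the first permanency hearing


def permanency_hearing_deadline(state: str, child_age_years: int) -> int:
    """Latest allowable first permanency hearing, in months after entry,
    under the state variations cited in the report."""
    if state == "Colorado" and child_age_years < 6:
        return 6   # Colorado: within 6 months for children under 6
    if state == "Washington" and child_age_years <= 10:
        return 12  # Washington: within 12 months for children 10 or younger
    if state == "Ohio":
        return 12  # Ohio: within 12 months (extensions handled below)
    return FEDERAL_LIMIT_MONTHS


def ohio_final_decision_deadline(extensions_granted: int) -> int:
    """Ohio allows at most two 6-month extensions after the 12-month hearing,
    after which a permanent placement decision must be made."""
    return 12 + 6 * min(extensions_granted, 2)


print(permanency_hearing_deadline("Colorado", 4))     # 6
print(permanency_hearing_deadline("Washington", 12))  # 18 (federal default applies)
print(ohio_final_decision_deadline(2))                # 24
```

Under this sketch, Ohio's cap means a permanency decision can be deferred no later than 24 months after entry into care, even though the first hearing must occur within 12 months.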
These states reported that these actions have improved the lives of some children by (1) reuniting them with their families more quickly, (2) expediting the termination of parental rights when reunification efforts were determined to be unfeasible—thus making it possible for child welfare agencies to begin looking for an adoptive home sooner—or (3) reducing the number of different foster care placements in which they lived. States are also addressing changes in the permanency planning process through larger reform efforts of their child welfare systems. However, because these efforts were only recently implemented or were still in the initial implementation stage, no evaluation information on their effect was available. Two states we reviewed implemented low-cost and creative methods for financing and providing services that address specific barriers to reunification. For example, Arizona’s Housing Assistance Program focused on families where children had been removed and placed in state custody and the major barrier to reunification was inadequate housing for the family. In 1989, the state enacted a bill authorizing the use of state foster care funds to provide special housing assistance. According to state reports summarizing the program and statistics provided by Arizona Department of Economic Security officials, between 1991 and 1995, 939 children were reunited with their families as a result of this program, representing almost 12 percent of those children reunified during this period. This program saved the state over $1 million in foster care-related costs between 1991 and 1995. Also, Tennessee’s Wraparound Funding Program allowed caseworkers to use state funds to provide services that removed economic barriers to reunification. These services were not typically associated with traditional reunification services and prior to this program were not allowable foster care expenditures. 
Examples include home or car repairs, utilities or rent payments, and respite care. According to a report summarizing the program, during one 6-month period in 1995, the program provided services to 1,279 children. A state Department of Children’s Services official estimated that had these children remained in care as long as the average child in foster care, the state would have incurred an additional $700,000 in state and federal foster care maintenance payments. Regarding other changes, Arizona and Kentucky placed special emphasis on expediting the process by which parental rights could be terminated. Arizona’s Severance Project focused on cases where termination of parental rights was likely or reunification services were not warranted and for which a backlog of cases had developed. In April 1986, the state enacted a bill providing funds for hiring severance specialists and legal staff to work on termination cases. The following year, in 1987, the state implemented the Arizona State Adoption Project. This project focused on identifying additional adoptive homes, including recruiting adoptive parents for specific children and contracting for adoptive home recruitment activities. State officials reported that the Adoption Project resulted in a 54-percent increase in the number of new homes added to the state registry in late 1987 and 1988. In addition, they noted that the Severance Project contributed to a more than 32-percent reduction in the average length of stay between entry into care and the filing of the termination petition for fiscal years 1991 through 1995. To reduce a backlog of pending cases, Kentucky’s Termination of Parental Rights Project focused on reducing the time required to terminate parental rights once this permanency goal was established. 
This effort included retraining caseworkers, lawyers, and judges on the consequences of long stays in foster care and streamlining and improving the steps caseworkers must follow when collecting and documenting the information required for the termination procedures. A report on this effort indicated that between 1989 and 1991, the state decreased the average time to terminate parental rights by slightly over 1 year. In addition, between 1988 and 1990, the average length of stay for children in foster care decreased from 2.8 years to 2 years, and the number of different foster care placements for each child decreased from four to three. However, as the number of children available for adoption rose, the state was forced to focus its efforts on identifying potential adoptive homes and shifted its emphasis to strategies to better inform the public about the availability of adoptive children. Tennessee’s Concurrent Planning Program allowed caseworkers to work toward achieving family reunification while at the same time developing an alternate permanency plan if reunification efforts did not succeed. The goal was to obtain permanency for the child by either (1) strengthening the family and reducing the risks in the home so that the child can be reunified with his or her family; or (2) verifying that the family cannot protect the child, meet the child’s needs, or reduce the risks to the child in a timely manner and that termination of parental rights should be pursued. By working on the two plans simultaneously, caseworkers reduced the time required to prepare the necessary paperwork to terminate parental rights if reunification efforts failed. Under a concurrent planning approach, caseworkers emphasize to the parents that if they do not adhere to the requirements set forth in their case plan, parental rights can be terminated. 
Since this program was initiated in 1991, state officials report that 70 percent of the children in the program obtained permanency, primarily through reunification, within 12 months of placement in foster care. Without this program the children would have stayed in foster care longer than 12 months. The officials attributed obtaining quicker permanency in part to parents making more concerted efforts to make the changes needed to have their children return home. All decisions regarding both the temporary and final placement of foster care children come through states’ court systems. As a result, some states and counties focused attention on the courts’ involvement in achieving permanency more quickly. Georgia’s Citizen Review Panel Program created local advisory panels of private citizens within the child’s community to assist judges in their review and decisions regarding foster care placements for each child in care. The objective of these panels is (1) to gather additional information regarding the placement options for each foster child—often information that cannot be collected by state agencies because of large caseloads and limited staff resources—and (2) to review compliance with court-ordered case plans to ensure that the state agencies are working toward permanent placements. The program operates in 56 counties and, in 1996, covered over 42 percent of Georgia’s foster care population. The state reported that between 1994 and 1996, the review panels recommended that 5,855 children be placed for adoption, 10,845 children be reunified with their families, and 3,048 children remain in foster care. In Hamilton County, Ohio, juvenile court officials focused attention on the court’s involvement in achieving permanency more quickly by developing new procedures to expedite case processing. 
In 1985, they revised court procedures by (1) designating lawyers specially trained in foster care issues as magistrates to hear cases, (2) assigning one magistrate to each case for the life of that case to achieve continuity and consistent rulings, and (3) agreeing at the end of every hearing—while all participants are present—to the date for the next hearing. According to court officials, the county saved thousands of dollars because it could operate three magistrates’ courtrooms for the cost of one judge’s courtroom. Also, a report on court activities indicated that because of these changes, between 1986 and 1990, the number of children placed in four or more different foster care placements decreased by 11 percent and the percentage of children leaving temporary and long-term foster care in 2 years or less increased from 37 percent to 75 percent. Even where improvements have been made, there can still be problems that are beyond the control of officials. According to reports prepared by court officials, between 1986 and 1989 the number of children in care in Hamilton County decreased 15 percent. However, in 1992, the number returned to the 1986 level of about 1,100 children and continued to increase through the first half of 1996 to about 1,500. According to court officials, a dramatic rise in crack cocaine use in the county contributed to this sharp increase. Child welfare agencies were unable to readily arrange for the increased services that these families needed. Some states are also addressing the need for quicker permanency as part of larger initiatives designed to make major changes in their foster care programs. One state plans to privatize foster care services. Another state has redesigned its foster care operating policies and procedures to improve outcomes for children. Because these efforts are recent, no information on results was available. In 1996, Kansas began privatizing most child welfare services, including foster care. 
Two events contributed to this decision. First, because of rising state costs, the Governor directed all state agencies to consider privatizing services to reduce the size of the state workforce. Second, the state had settled a suit brought by the Kansas chapter of the American Civil Liberties Union citing unacceptable increases in the number of children in foster care and lengthy stays in care. The goal of privatization is to allow children in out-of-home placements to experience a minimal number of placements or to achieve permanency in their lives in the shortest time possible. Kansas contracted with private social services agencies for family preservation services, foster and residential care, and adoption services. State officials continue to be responsible for determining if the original charges of dependency, neglect, or abuse are substantiated and to monitor contractor performance. The contracted service providers are responsible for providing all services to the families. Under the contracts, providers will be paid a per-child rate, with a payment structure that pays contractors for results. For example, in the foster care contract, 25 percent of costs will be paid at the time of referral, 25 percent upon receipt of the first 60-day progress report, and 25 percent upon receipt of the 180-day formal case plan. The final 25 percent will not be paid until reunification or a permanent placement is achieved. If a child reenters care before 12 months have passed, the contractor is responsible for all the foster care maintenance costs for out-of-home placement. Arizona also is pursuing major changes to its child welfare system. Arizona’s Project Redesign was prompted by a number of fatalities of young children in foster homes in a very short time. 
Begun in 1994, this project focused on writing and implementing new child welfare policies and procedures with a goal of increasing caseworker contact with foster families and reducing caseworkers’ caseloads and the length of time children spend in foster care. The major activities of Project Redesign included rewriting policies and licensing rules, preparing a new supervisors’ handbook, creating a mentoring program for new supervisors, developing and implementing a method to more equitably distribute workload among staff, and creating the Uniform Case Practice Record. This record methodically guides caseworkers through all the steps necessary to make a permanent placement decision. This helps ensure that all the needed information is available to the courts, thus preventing delays in the process. Our efforts to assess the overall impact of these initiatives were hampered by the absence of evaluation data. In general, we found that the states did not conduct evaluations of their programs, and outcome information was often limited to state reports and the observations of state officials. While many of these efforts reported improvements, for example, in speeding the termination of parental rights once this permanency goal was established, the lack of comparison groups or quality pre-initiative data made it difficult to reach definitive conclusions about the effectiveness of these initiatives. Several national efforts are under way that may improve the information available on foster children and facilitate states’ design and implementation of systematic evaluations in the future. Nationwide, most states are currently designing or implementing Statewide Automated Child Welfare Information Systems as required under the title IV-E foster care program. These systems are to include case-specific data on all children in foster care and all adopted children placed or provided adoption assistance by the state or its contractors. 
From 1994 to 1996, federal funds have provided up to 75 percent of the costs of planning, design, development, and installation of these state systems. The Personal Responsibility and Work Opportunity Reconciliation Act (P.L. 104-193), enacted in August 1996, continues this enhanced federal match through 1997, at which time the federal match rate will be reduced to 50 percent. In addition, P.L. 104-193 appropriated funds for a national longitudinal study based on random samples of children at risk of abuse or neglect or determined by a state to have been abused or neglected. This study is to include state-level data for selected states. States increased their chances for successfully developing and implementing new initiatives when certain key factors were a part of the process. When contemplating changes, state officials had to take into consideration the intricacies of the foster care process; the inherent difficulty that caseworkers and court officials face when deciding if a child should be returned home; and the need in some cases to change the culture of caseworkers and judges to recognize that, in certain cases, termination of parental rights should be pursued. Some experts believe that current child welfare practices often discourage caseworkers from finding permanent placements other than with the biological parents. Officials in the states we reviewed recognized that addressing these challenges required concerted time and effort, coordination, and resources. These officials identified several critical, often interrelated, factors required to meet these challenges. These included (1) long-term involvement of officials in leadership positions; (2) involvement of key stakeholders in developing consensus and obtaining buy-in concerning the nature of the problem and the solution; and (3) the availability of resources to plan, implement, and sustain the project. The following two examples illustrate these concepts. 
In the mid-1980s, Ohio officials began a multiyear effort that culminated with the state enacting a new child welfare law that became effective in January 1989. Before enacting this law, the legislature created a task force whose members were involved in planning throughout the drafting and passage of legislation. The task force was cochaired by a state senator and a representative. Other members included state and county child welfare agency officials, juvenile court judges, attorneys, and county commissioners. In addition, public hearings were held throughout the state that provided a forum for input from all parties interested in child welfare, including private citizens, service providers, caseworkers, judges, attorneys, and foster care parents. By involving all interested parties and by providing numerous opportunities for input, state officials were able to develop consensus on the problems and solutions and obtain buy-in to the proposed solutions from program staff. For example, there were numerous discussions about whether a specific time frame for remaining in temporary foster care should be stipulated. They ultimately compromised on 12 months plus two 6-month extensions. In 1988, to shorten the termination of parental rights process, the Kentucky Department of Social Services collaborated with seven other agencies to obtain a federal grant to develop new approaches to address this issue. As part of this effort and to ensure buy-in, the Secretary of Human Resources appointed a multidisciplinary advisory committee chaired by a chief Circuit Court judge. Other members of the committee included representatives from social service agencies, court officials, attorneys, the legislature, and child welfare advocacy groups. The committee met quarterly throughout the 2-year project. Committee members recognized they needed to change the way caseworkers and members of the legal system viewed termination of parental rights. 
Many caseworkers had viewed terminating parental rights as a failure on their part because they were not able to reunify the family. As a result, they were reluctant to pursue termination and instead kept the children in foster care. Also, often judges and lawyers were not sufficiently informed of the negative consequences for children who do not have permanent homes. Thus, as part of this project, newsletters and training were provided about the effects on the child of delaying termination of parental rights. After 2 years, many meetings, and retraining caseworkers, state officials reported that they had reduced the time to complete the termination of parental rights process by 1 year. Among the changes they believed contributed to this reduction were (1) simplifying the process caseworkers follow when providing termination of parental rights information to the attorneys that handle these cases and (2) using an absent parent search handbook, which was developed to assist caseworkers in conducting more timely and complete searches. Many of the children in foster care are among the nation’s most vulnerable citizens. The consequences of long spells in foster care and multiple placements, coupled with the effects of poverty, highlight the need for quick resolution of placement questions for these children. With the expected rise in foster care caseloads through the start of the next century further straining state and federal child welfare budgets, increasing pressure will be placed on states to develop strategies to move children into permanent placements more quickly. Many of these initiatives will need to address the difficult issues of deciding under what circumstances to pursue reunification and what time is appropriate before seeking the termination of parental rights. We found promising initiatives for changing parts of the permanency process so that children can be moved out of foster care into permanent placements more quickly. 
Developing and successfully implementing these innovative approaches takes time and often challenges long-standing beliefs. To succeed, these initiatives must look to local leadership involvement, consensus building, and sustained resources. As new initiatives become a part of the complex child welfare system, however, they can also create unintended consequences. For example, if states are identifying appropriate cases for the quicker termination of parental rights and processing them more expeditiously, thereby freeing more children for possible adoption, additional problems can occur if efforts to develop more adoptive homes have not been given equal emphasis. Also, if states require more stringent time frames for holding permanency hearings, they must adjust to this shorter time to avoid placements based on expedience rather than careful deliberation about what is best for the child. We also found that a critical feature of these initiatives was often absent: Many of them lacked evaluations designed to assess the impact of the effort. The availability of evaluation information from these initiatives would not only point to the relative success or failure of an effort but could also help identify unintended outcomes. The absence of program and evaluation data will continue to hinder the ability of program officials and policymakers to fully understand the overall impact of these initiatives. Efforts are under way, however, to improve the availability of information on foster children. In its written comments on a draft of this report, HHS generally concurred with our conclusions. It agreed that efforts to improve the timeliness of permanent placements are important and indicated that they are a priority of the department. HHS also commented that it would be useful to include a definition of permanency planning in the report, and we revised the report in response to this comment. 
Although federal requirements establish some guidelines, variation in state policies and priorities makes the development of a single definition difficult. Finally, the department recognized the benefits of presenting different approaches to speeding the permanency planning process while stressing the need for systemic changes. Because of the complex nature of the child welfare system, we agree that states and localities must consider the entire system when attempting to make reforms. We have incorporated the department's technical comments into our report where appropriate. See appendix III for HHS' comments. We are sending copies of this report to the Secretary of HHS, state child welfare agencies, and other interested parties. Copies also will be made available to others on request. If you or your staff have any questions about this report, please call me at (202) 512-7215. Other major contributors to this report are listed in appendix IV. To identify states that have enacted laws or implemented policies establishing requirements regarding the timing of the first permanency hearing that are more stringent than those under federal law, we reviewed pertinent state legislation and policies of the 50 states and the District of Columbia. We also discussed those laws and state policies with state legal and child welfare officials. Federal law allows the hearing to be held as late as 18 months after the child's entry into foster care, but state laws vary widely in the terms they use for various hearings. In cases where state law did not specifically identify a hearing as a permanency hearing, we asked for further clarification from state officials. If we determined that the state law was consistent with the federal requirement, we treated the required hearing as a permanency hearing. 
To determine what changes states and localities have made to achieve more timely permanent placements and factors that contributed to their success, we first reviewed literature on foster care and permanency planning. In addition, we discussed permanency planning and permanent placement decisions with experts in the field, including child welfare officials in all 50 states and the District of Columbia. In the course of our discussions with state officials and experts, we identified specific state and local initiatives that were attempting to permanently place foster care children in a more timely manner. We selected six states that had implemented initiatives that addressed making more timely permanent placements for children in foster care. The states were Arizona, Georgia, Kansas, Kentucky, Ohio, and Tennessee. Each state selected had at least one initiative that was implemented between 1989 and 1992, ensuring that we would be able to obtain historical information about the planning and implementation of those initiatives and that the initiatives had been in place long enough to have some impact. We included states that had initiatives that addressed different aspects of the permanency process. We also included states with statutory requirements for holding the first permanency hearing that were stricter than the federal requirement as well as states with requirements that were consistent with the federal requirement. We conducted site visits in four of the six states—Georgia, Kansas, Kentucky, and Tennessee—and obtained information from Arizona and Ohio through telephone interviews. We interviewed state and county foster care and adoption officials and juvenile court officials and collected information on the initiatives, including descriptions of program goals and objectives and factors that facilitated change, reports on program results, and other statistical information on the foster care population. We did not verify program data from these states. 
We did our work between January 1996 and January 1997 in accordance with generally accepted government auditing standards.

Ariz. Rev. Stat. Ann., Section 8-515.C. (West Supp. 1996)
Colo. Rev. Stat., Section 19-3-702(1) (Supp. 1996)
Conn. Gen. Stat. Ann., Section 46b-129(d),(e) (West 1995)
Ga. Code Ann., Section 15-11-419(j),(k) (1996)
705 Ill. Comp. Stat. Ann., 405/2-22(5) (West Supp. 1996)
Ind. Code Ann., Section 31-6-4-19(c) (Michie Supp. 1996)
Iowa Code Ann., Section 232.104 (West 1994)
Kan. Stat. Ann., Section 38-1565(b),(c) (1995)
La. Ch. Code Ann., Arts. 702, 710 (West 1995)
Mich. Stat. Ann., Section 27.3178(598.19a) (Law. Co-op. Supp. 1996)
Minn. Stat. Ann., Section 260.191 Subd. 3b (West Supp. 1997)
Miss. Code Ann., Section 43-21-613(3) (1993)
New Hampshire Court Rules Annotated, Abuse and Neglect, Guideline 39 (Permanency Planning Review)
N.Y. Jud. Law, Section 1055(b) (McKinney Supp. 1997)
Ohio Rev. Code Ann., Sections 2151.353(F), 2151.415(A) (Anderson 1994)
42 Pa. Cons. Stat. Ann., Section 6351(e)-(g) (West Supp. 1996)
R.I. Gen. Laws, Section 40-11-12.1 (1990)
S.C. Code Ann., Section 20-7-766 (Law. Co-op. Supp. 1996)
Utah Code Ann., Section 78-3a-312 (1996)
Va. Code Ann., Section 16.1-282 (Michie 1996)
Wash. Rev. Code Ann., Section 13.34.145(3),(4) (West Supp. 1997)
W. Va. Code, Sections 49-6-5, 49-6-8 (1996)
Wis. Stat. Ann., Sections 48.355(4); 48.38; 48.365(5) (West 1987)
Wyo. Stat. Ann., Section 14-6-229(k) (Michie Supp. 1996)

Michigan's time frame to hold the permanency hearing was calculated by adding the days needed to conduct the preliminary hearing, trial, dispositional hearing, and the permanency hearing. Virginia's time frame to hold the permanency hearing was calculated by adding the number of months required to file the petition to hold the permanency hearing plus the number of days within which the court is required to schedule the hearing. In addition to those named above, Diana Eisenstat served as an adviser; David D. Bellis, Octavia V. 
Parks, and Rathi Bose coauthored the report and contributed significantly to all data-gathering and analysis efforts. Also, Julian P. Klazkin provided legal analysis of state statutes.

Child Welfare: States' Progress in Implementing Family Preservation and Support Activities (GAO/HEHS-97-34, Feb. 18, 1997).
Child Welfare: Complex Needs Strain Capacity to Provide Services (GAO/HEHS-95-208, Sept. 26, 1995).
Child Welfare: Opportunities to Further Enhance Family Preservation and Support (GAO/HEHS-95-112, June 15, 1995).
Foster Care: Health Needs of Many Young Children Unknown and Unmet (GAO/HEHS-95-114, May 26, 1995).
Foster Care: Parental Drug Abuse Has Alarming Impact on Young Children (GAO/HEHS-94-89, Apr. 4, 1994).
Residential Care: Some High-Risk Youth Benefit, But More Study Needed (GAO/HEHS-94-56, Jan. 28, 1994).
Foster Care: Services to Prevent Out-of-Home Placements Are Limited by Funding Barriers (GAO/HRD-93-76, June 29, 1993).
Foster Care: State Agencies Other Than Child Welfare Can Access Title IV-E Funds (GAO/HRD-93-6, Feb. 9, 1993).
Foster Care: Children's Experiences Linked to Various Factors; Better Data Needed (GAO/HRD-91-64, Sept. 11, 1991).
Child Welfare: Monitoring Out-of-State Placements (GAO/HRD-91-107BR, Sept. 3, 1991).
Pursuant to a congressional request, GAO reviewed states' efforts to improve the permanency planning process and reduce the time a child spends in foster care, focusing on: (1) what statutory and policy changes states have made to limit the time allowed to determine permanent placements for foster children; (2) what changes states or localities have made in their operations in an attempt to achieve more timely permanent placements and what the impact of those changes has been; and (3) what factors officials believe helped them meet the challenges of achieving more timely permanent placements. GAO noted that: (1) signaling the importance of a permanent placement to the well-being of children, 23 states have enacted laws establishing requirements regarding the timing of the permanency hearing that are more stringent than those under federal law; (2) federal law requires a hearing within 18 months after the child's entry into foster care; (3) an additional three states, while not enacting such statutes, have imposed similar requirements as a matter of policy; (4) statutory or policy changes alone, however, are not sufficient to resolve the final placement of foster children more quickly; (5) the states GAO reviewed have made changes in their operations to facilitate reunifying children with their families, expedite terminating parental rights when reunification efforts have failed, or modify the role and operations of the court both to streamline the process and to make well-informed permanent placement decisions; (6) while these initiatives focus on certain stages of the permanency planning process, such as when a child first enters foster care, two states are implementing major changes to their overall foster care systems; (7) although initiatives are in place, most of these states have not systematically evaluated their impact, and data concerning these efforts were limited; (8) however, most states did report that many of these initiatives contributed to reducing the 
time spent in foster care or decreasing the total number of placement changes while a child is in foster care; (9) state officials identified a number of factors that helped them meet the challenges involved in making changes; (10) in some cases, child welfare officials and staff had to undergo significant culture change, modifying long-held views about the merits of pursuing termination of parental rights versus family reunification; (11) they found that changing the way they approached making decisions about the well-being of children and their families was a lengthy process; (12) to implement these initiatives successfully, program officials believed that it was necessary to have the long-term and active involvement of key officials at all levels, including the governor, legislators, and agency officials as well as caseworkers, service providers, attorneys, and judges; (13) this participation was essential to define the problem and reach consensus; and (14) doing so required considerable coordination efforts and an extended commitment of resources.
As we reported in May 2011, DHS implemented the Electronic System for Travel Authorization (ESTA) to meet a statutory requirement intended to enhance Visa Waiver Program security and took steps to minimize the burden on travelers to the United States added by the new requirement. However, DHS had not fully evaluated security risks related to the small percentage of Visa Waiver Program travelers without verified ESTA approval. DHS developed ESTA to collect passenger data and complete security checks on the data before passengers board a U.S.-bound carrier. DHS requires applicants for Visa Waiver Program travel to submit biographical information and answers to eligibility questions through ESTA prior to travel. Travelers whose ESTA applications are denied must apply for a U.S. visa for travel to the United States. In developing and implementing ESTA, DHS took several steps to minimize the burden associated with ESTA use. For example, ESTA reduced the requirement that passengers provide biographical information to DHS officials from every trip to once every 2 years. In addition, because of ESTA, DHS informs passengers who do not qualify for Visa Waiver Program travel that they need to apply for a visa before they travel to the United States. Moreover, most travel industry officials we interviewed in six Visa Waiver Program countries praised DHS's widespread ESTA outreach efforts, reasonable implementation time frames, and responsiveness to feedback, but expressed dissatisfaction over the fees paid by ESTA applicants. In 2010, airlines complied with the requirement to verify ESTA approval for almost 98 percent of Visa Waiver Program passengers prior to boarding, but the remaining 2 percent (about 364,000 travelers) traveled under the Visa Waiver Program without verified ESTA approval. In addition, about 650 of these passengers traveled to the United States with a denied ESTA. 
As we reported in May 2011, DHS had not yet completed a review of these cases to know to what extent they pose a risk to the program. At the time of our report, DHS officials told us that there was no official agency plan for monitoring and oversight of ESTA. DHS tracked some data on passengers that traveled under the Visa Waiver Program without verified ESTA approval but did not track other data that would help officials know the extent to which noncompliance poses a risk to the program. Without a completed analysis of noncompliance with ESTA requirements, DHS was unable to determine the level of risk that noncompliance poses to Visa Waiver Program security and to identify improvements needed to minimize noncompliance. In addition, without analysis of data on travelers who were admitted to the United States without a visa after being denied by ESTA, DHS could not determine the extent to which ESTA was accurately identifying individuals who should be denied travel under the program. In May 2011, we recommended that DHS establish time frames for the regular review and documentation of cases of Visa Waiver Program passengers traveling to a U.S. port of entry without verified ESTA approval. DHS concurred with our recommendation and has established procedures to review quarterly a sample of noncompliant passengers to evaluate potential security risks associated with the ESTA program. Further, in May 2011 we reported that to meet certain statutory requirements, DHS requires that Visa Waiver Program countries enter into three information-sharing agreements with the United States; however, only half of the countries had fully complied with this requirement and many of the signed agreements have not been implemented. 
The 9/11 Act specifies that each Visa Waiver Program country must enter into agreements with the United States to share information regarding whether citizens and nationals of that country traveling to the United States represent a threat to the security or welfare of the United States and to report lost or stolen passports. DHS, in consultation with other agencies, has determined that Visa Waiver Program countries can satisfy the requirement by entering into the following three bilateral agreements: (1) Homeland Security Presidential Directive (HSPD) 6, (2) Preventing and Combating Serious Crime (PCSC), and (3) Lost and Stolen Passports.

 HSPD-6 agreements establish a procedure between the United States and partner countries to share watchlist information about known or suspected terrorists. As of January 2011, 19 of the 36 Visa Waiver Program countries had signed HSPD-6 agreements, and 13 had begun sharing information according to the signed agreements. Noting that the federal government continues to negotiate HSPD-6 agreements with Visa Waiver Program countries, officials cited concerns regarding privacy and data protection expressed by many Visa Waiver Program countries as reasons for the delayed progress. According to these officials, in some cases, domestic laws of Visa Waiver Program countries limit their ability to commit to sharing some information, thereby complicating and slowing the negotiation process. In November 2011, a senior DHS official reported that 21 of the 36 Visa Waiver Program countries have signed HSPD-6 agreements.

 The PCSC agreements establish the framework for law enforcement cooperation by providing each party automated access to the other's criminal databases that contain biographical, biometric, and criminal history data. 
As of January 2011, 18 of the 36 Visa Waiver Program countries had met the PCSC information-sharing agreement requirement, but the networking modifications and system upgrades required to enable this information sharing to take place had not been completed for any Visa Waiver Program countries. According to officials, DHS is frequently not in a position to influence the speed of PCSC implementation for a number of reasons. For example, according to DHS officials, some Visa Waiver Program countries require parliamentary ratification before implementation can begin. Also, U.S. and partner country officials must develop a common information technology architecture to allow queries between databases. A senior DHS official reported in November 2011 that the number of Visa Waiver Program countries meeting the PCSC requirement had risen to 21.

 The 9/11 Act requires Visa Waiver Program countries to enter into an agreement with the United States to report, or make available to the United States through Interpol or other means as designated by the Secretary of Homeland Security, information about the theft or loss of passports. As of November 2011, all Visa Waiver Program countries were sharing lost and stolen passport information with the United States, and 35 of the 36 Visa Waiver Program countries had entered into Lost and Stolen Passport agreements, according to senior DHS officials. DHS, with the support of interagency partners, established a compliance schedule requiring the last of the Visa Waiver Program countries to finalize these agreements by June 2012. Although termination from the Visa Waiver Program is one potential consequence for countries not complying with the information-sharing agreement requirement, U.S. officials have described it as undesirable. DHS, in coordination with the Department of State and the Department of Justice, developed measures short of termination that could be applied to countries not meeting their compliance date. 
Specifically, DHS helped write a classified strategy document that outlines a contingency plan listing possible measures short of termination from the Visa Waiver Program that may be taken if a country does not meet its specified compliance date for entering into information-sharing agreements. The strategy document provides steps that would need to be taken prior to selecting and implementing one of these measures. According to officials, DHS plans to decide which measures to apply on a case-by-case basis. In addition, as of May 2011, DHS had not completed half of the most recent biennial reports on Visa Waiver Program countries' security risks in a timely manner. In 2002, Congress mandated that, at least once every 2 years, DHS evaluate the effect of each country's continued participation in the program on the security, law enforcement, and immigration interests of the United States. According to officials, DHS assesses, among other things, counterterrorism capabilities and immigration programs. However, DHS had not completed the latest biennial reports for 18 of the 36 Visa Waiver Program countries in a timely manner, and over half of these reports were more than 1 year overdue. Further, in the case of 2 countries, DHS was unable to demonstrate that it had completed reports in the last 4 years. DHS cited a number of reasons for the reporting delays. For example, DHS officials said that they intentionally delayed report completion because they frequently did not receive mandated intelligence assessments in a timely manner and needed to review these before completing Visa Waiver Program country biennial reports. We noted that DHS had not consistently submitted these reports in a timely manner since the legal requirement was made biennial in 2002, and recommended that DHS take steps to address delays in the biennial country review process so that the mandated country reports can be completed on time.
DHS concurred with our recommendation and reported that it would consider process changes to address our concerns with the timeliness of continuing Visa Waiver Program reports. As we reported in April 2011, ICE CTCEU investigates and arrests a small portion of the estimated in-country overstay population due to, among other things, ICE’s competing priorities; however, these efforts could be enhanced by improved planning and performance management. CTCEU, the primary federal entity responsible for taking enforcement action to address in-country overstays, identifies leads for overstay cases; takes steps to verify the accuracy of the leads it identifies by, for example, checking leads against multiple databases; and prioritizes leads to focus on those the unit identifies as being most likely to pose a threat to national security or public safety. CTCEU then requires field offices to initiate investigations on all priority, high-risk leads it identifies. According to CTCEU data, as of October 2010, ICE field offices had closed about 34,700 overstay investigations that CTCEU headquarters assigned to them from fiscal year 2004 through 2010. These cases resulted in approximately 8,100 arrests (about 23 percent of the 34,700 investigations), relative to a total estimated overstay population of 4 million to 5.5 million. About 26,700 of those investigations (or 77 percent) resulted in one of three outcomes. In 9,900 investigations, evidence was uncovered indicating that the suspected overstay had departed the United States. In 8,600 investigations, evidence was uncovered indicating that the subject of the investigation was in-status (e.g., the subject filed a timely application with the United States Citizenship and Immigration Services (USCIS) to change his or her status and/or extend his or her authorized period of admission in the United States). 
Finally, in 8,200 investigations, CTCEU investigators exhausted all investigative leads and could not locate the suspected overstay. ICE officials generally attributed the significant portion of the approximately 34,700 investigations assigned by CTCEU headquarters and closed by ICE field offices from fiscal years 2004 through 2010 that ended in a departure finding, an in-status finding, or exhaustion of all leads to difficulties in locating suspected overstays and to the timeliness and completeness of data in the DHS systems used to identify overstays. Further, ICE reported allocating a small percentage of its resources, in terms of investigative work hours, to overstay investigations since fiscal year 2006, but the agency expressed an intention to augment the resources it dedicates to overstay enforcement efforts moving forward. Specifically, from fiscal years 2006 through 2010, ICE reported devoting from 3.1 to 3.4 percent of its total field office investigative hours to CTCEU overstay investigations. ICE attributed the small percentage of investigative resources it reported allocating to overstay enforcement efforts primarily to competing enforcement priorities. According to the ICE Assistant Secretary, ICE has resources to remove 400,000 aliens per year, or less than 4 percent of the estimated removable alien population in the United States. In June 2010, the Assistant Secretary stated that ICE must prioritize the use of its resources to ensure that its efforts to remove aliens reflect the agency's highest priorities, namely nonimmigrants, including suspected overstays, who are identified as high risk in terms of being most likely to pose a risk to national security or public safety.
As a result, ICE dedicated its limited resources to addressing overstays it identified as most likely to pose a potential threat to national security or public safety and did not generally allocate resources to address suspected overstays that it assessed as noncriminal and low risk. ICE indicated it may allocate more resources to overstay enforcement efforts moving forward, and that it planned to focus primarily on suspected overstays who ICE has identified as high risk or who recently overstayed their authorized periods of admission. ICE was considering assigning some responsibility for noncriminal overstay enforcement to its Enforcement and Removal Operations (ERO) directorate, which has responsibility for apprehending and removing aliens who do not have lawful immigration status from the United States. However, ERO did not plan to assume this responsibility until ICE assessed the funding and resources doing so would require. ICE had not established a time frame for completing this assessment. We reported in April 2011 that by developing such a time frame and utilizing the assessment findings, as appropriate, ICE could strengthen its planning efforts and be better positioned to hold staff accountable for completing the assessment. We recommended that ICE establish a target time frame for assessing the funding and resources ERO would require in order to assume responsibility for civil overstay enforcement and use the results of that assessment, as appropriate. DHS officials agreed with our recommendation and stated that ICE planned to identify resources needed to transition this responsibility to ERO as part of its fiscal year 2013 resource planning process. DHS has not yet implemented a comprehensive biometric system to match available information provided by foreign nationals upon their arrival in and departure from the United States.
In 2002, DHS initiated the United States Visitor and Immigrant Status Indicator Technology Program (US-VISIT) to develop a comprehensive entry and exit system to collect biometric data from aliens traveling through U.S. ports of entry. In 2004, US-VISIT initiated the first step of this program by collecting biometric data on aliens entering the United States. In August 2007, we reported that while US-VISIT biometric entry capabilities were operating at air, sea, and land ports of entry, exit capabilities were not, and that DHS did not have a comprehensive plan or a complete schedule for biometric exit implementation. Moreover, in November 2009, we reported that DHS had not adopted an integrated approach to scheduling, executing, and tracking the work that needed to be accomplished to deliver a comprehensive exit solution as part of the US-VISIT program. We concluded that, without a master schedule that was integrated and derived in accordance with relevant guidance, DHS could not reliably commit to when and how it would deliver a comprehensive exit solution or adequately monitor and manage its progress toward this end. We recommended that DHS ensure that an integrated master schedule be developed and maintained. DHS concurred and reported, as of July 2011, that the documentation of schedule practices and procedures was ongoing, and that an updated schedule standard, management plan, and management process that are compliant with schedule guidelines were under review. In the absence of a comprehensive biometric entry and exit system for identifying and tracking overstays, US-VISIT and CTCEU primarily analyze biographic entry and exit data collected at land, air, and sea ports of entry to identify overstays. In April 2011, we reported that DHS’s efforts to identify and report on visa overstays were hindered by unreliable data. 
Specifically, CBP does not inspect travelers exiting the United States through land ports of entry, including collecting their biometric information, and CBP did not provide a standard mechanism for nonimmigrants departing the United States through land ports of entry to remit their arrival and departure forms. Nonimmigrants departing the United States through land ports of entry turn in their forms on their own initiative. According to CBP officials, at some ports of entry, CBP provides a box for nonimmigrants to drop off their forms, while at other ports of entry departing nonimmigrants may park their cars, enter the port of entry facility, and provide their forms to a CBP officer. These forms contain information, such as arrival and departure dates, used by DHS to identify overstays. If the benefits outweigh the costs, a standard mechanism to provide nonimmigrants with a way to turn in their arrival and departure forms could help DHS obtain more complete and reliable departure data for identifying overstays. We recommended that the Commissioner of CBP analyze the costs and benefits of developing a standard mechanism for collecting these forms at land ports of entry and, to the extent that the benefits outweigh the costs, develop such a mechanism. CBP agreed with our recommendation and in September 2011 stated that it planned to complete a cost-effective independent evaluation of possible solutions and formulate an action plan based on the evaluation for implementation by March 2012. Further, we previously reported on weaknesses in DHS's processes for collecting departure data and how these weaknesses affect the determination of overstay rates. The 9/11 Act required that DHS certify that a system is in place that can verify the departure of not less than 97 percent of foreign nationals who depart through U.S. airports in order for DHS to expand the Visa Waiver Program.
In September 2008, we reported that DHS's methodology for comparing arrivals and departures for the purpose of departure verification would not inform overall or country-specific overstay rates because DHS's methodology did not begin with arrival records to determine if those foreign nationals departed or remained in the United States beyond their authorized periods of admission. Rather, DHS's methodology started with departure records and matched them to arrival records. As a result, DHS's methodology counted overstays who left the country, but did not identify overstays who have not departed the United States and appear to have no intention of leaving. We recommended that DHS explore cost-effective actions necessary to further improve the reliability of overstay data. DHS concurred and reported that it is taking steps to improve the accuracy and reliability of the overstay data, through efforts such as continuing to audit carrier performance and working with airlines to improve the accuracy and completeness of data collection. Moreover, by statute, DHS is required to submit an annual report to Congress providing numerical estimates of the number of aliens from each country in each nonimmigrant classification who overstayed an authorized period of admission that expired during the fiscal year prior to the year for which the report is made. DHS officials stated that the department has not provided Congress annual overstay estimates regularly since 1994 because officials do not have sufficient confidence in the quality of the department's overstay data—which is maintained and generated by US-VISIT. As a result, DHS officials stated that the department cannot reliably report overstay rates in accordance with the statute.
In addition, in April 2011 we reported that DHS took several steps to provide its component entities and other federal agencies with information to identify and take enforcement action on overstays, including creating biometric and biographic lookouts—or electronic alerts—on the records of overstay subjects that are recorded in databases. However, DHS did not create lookouts for the following two categories of overstays: (1) temporary visitors who were admitted to the United States using nonimmigrant business and pleasure visas and subsequently overstayed by 90 days or less; and (2) suspected in-country overstays who CTCEU deems not to be a priority for investigation in terms of being most likely to pose a threat to national security or public safety. Broadening the scope of electronic lookouts in federal information systems could enhance overstay information sharing. In April 2011, we recommended that the Secretary of Homeland Security direct the Commissioner of Customs and Border Protection, the Under Secretary of the National Protection and Programs Directorate, and the Assistant Secretary of Immigration and Customs Enforcement to assess the costs and benefits of creating biometric and biographic lookouts for these two categories of overstays. Agency officials agreed with our recommendation and have actions under way to address it. For example, agency officials stated that they have met to assess the costs and benefits of creating lookouts for those categories of overstays. This concludes my prepared statement. I would be pleased to respond to any questions that members of the Subcommittee may have. For further information regarding this testimony, please contact Richard M. Stana at (202) 512-8777 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
Individuals who made key contributions to this testimony are Rebecca Gambler, Acting Director; Anthony Moran, Assistant Director; Kathryn Bernet, Assistant Director; Jeffrey Baldwin-Bott; Frances Cook; Kevin Copping; and Taylor Matheson.

Visa Waiver Program: DHS Has Implemented the Electronic System for Travel Authorization, but Further Steps Needed to Address Potential Program Risks. GAO-11-335. Washington, D.C.: May 5, 2011.
Overstay Enforcement: Additional Mechanisms for Collecting, Assessing, and Sharing Data Could Strengthen DHS's Efforts but Would Have Costs. GAO-11-411. Washington, D.C.: April 15, 2011.
Visa Waiver Program: Actions Are Needed to Improve Management of the Expansion Process, and to Assess and Mitigate Program Risks. GAO-08-967. Washington, D.C.: September 15, 2008.
Border Security: State Department Should Plan for Potentially Significant Staffing and Facilities Shortfalls Caused by Changes in the Visa Waiver Program. GAO-08-623. Washington, D.C.: May 22, 2008.
Border Security: Stronger Actions Needed to Assess and Mitigate Risks of the Visa Waiver Program. GAO-06-854. Washington, D.C.: July 28, 2006.
Overstay Tracking: A Key Component of Homeland Security and a Layered Defense. GAO-04-82. Washington, D.C.: May 21, 2004.
Border Security: Implications of Eliminating the Visa Waiver Program. GAO-03-38. Washington, D.C.: November 22, 2002.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Homeland Security (DHS) manages the Visa Waiver Program, which allows nationals from 36 member countries to apply for admission to the United States as temporary visitors for business or pleasure without a visa. From fiscal year 2005 through fiscal year 2010, over 98 million visitors were admitted to the United States under the Visa Waiver Program. During that time period, the Department of State issued more than 36 million nonimmigrant visas to other foreign nationals for temporary travel to the United States. DHS is also responsible for investigating overstays--unauthorized immigrants who entered the country legally (with or without visas) on a temporary basis but then overstayed their authorized periods of admission. The Implementing Recommendations of the 9/11 Commission Act of 2007 (9/11 Act) required DHS, in consultation with the Department of State, to take steps to enhance the security of the program. This testimony is based on GAO reports issued in September 2008, April 2011, and May 2011. As requested, it addresses the following issues: (1) challenges in the Visa Waiver Program, and (2) overstay enforcement efforts. GAO has reported on actions that DHS has taken in recent years to improve the security of the Visa Waiver Program; however, additional risks remain. In May 2011, GAO reported that DHS implemented the Electronic System for Travel Authorization (ESTA), required by the 9/11 Act, and took steps to minimize the burden associated with this new program requirement. DHS requires applicants for Visa Waiver Program travel to submit biographical information and answers to eligibility questions through ESTA prior to travel. In developing and implementing ESTA, DHS made efforts to minimize the burden imposed by the new requirement. For example, although travelers formerly filled out a Visa Waiver Program application form for each journey to the United States, ESTA approval is generally valid for 2 years. 
However, GAO reported that DHS had not fully evaluated security risks related to the small percentage of Visa Waiver Program travelers without verified ESTA approval. In 2010, airlines complied with the requirement to verify ESTA approval for almost 98 percent of Visa Waiver Program passengers prior to boarding, but the remaining 2 percent--about 364,000 travelers--traveled under the program without verified ESTA approval. In May 2011, GAO reported that DHS had not yet completed a review of these cases to know to what extent they pose a risk to the program and recommended that it establish timeframes for regular review. DHS concurred and has since established procedures to review a sample of noncompliant passengers on a quarterly basis. Further, to meet 9/11 Act requirements, DHS requires that Visa Waiver Program countries enter into three information-sharing agreements with the United States; however, only 21 of the 36 countries had fully complied with this requirement as of November 2011, and many of the signed agreements have not been implemented. DHS, with the support of interagency partners, has established a compliance schedule requiring the remaining member countries to finalize these agreements by June 2012. Moreover, DHS, in coordination with the Departments of State and Justice, has developed measures short of termination that could be applied on a case-by-case basis to countries not meeting their compliance date. Federal agencies take actions against a small portion of the estimated overstay population, but strengthening planning could improve overstay enforcement. ICE's Counterterrorism and Criminal Exploitation Unit (CTCEU) is the lead agency responsible for overstay enforcement. CTCEU arrests a small portion of the estimated 4 to 5.5 million overstays in the United States because of, among other things, competing priorities, but ICE expressed an intention to augment its overstay enforcement resources. 
From fiscal years 2006 through 2010, ICE reported devoting about 3 percent of its total field office investigative hours to CTCEU overstay investigations. ICE was considering assigning some responsibility for noncriminal overstay enforcement to its Enforcement and Removal Operations (ERO) directorate, which apprehends and removes aliens subject to removal from the United States. In April 2011, GAO reported that by developing a time frame for assessing needed resources and using the assessment findings, as appropriate, ICE could strengthen its planning efforts. DHS concurred and stated that ICE planned to identify resources needed to transition this responsibility to ERO as part of its fiscal year 2013 resource planning process. GAO made recommendations in prior reports for DHS to, among other things, strengthen plans to address certain risks of the Visa Waiver Program and for overstay enforcement efforts. DHS generally concurred with these recommendations and has actions planned or underway to address them.
Without meaningful reform, the long-term financial outlook for Medicare is bleak. Together, Hospital Insurance (HI) and Supplementary Medical Insurance (SMI) expenditures are expected to increase dramatically, rising from about 12 percent of all federal revenues in 1999 to about a quarter by mid-century, even without adding to the benefit package. Over the same time frame, Medicare's expenditures are expected to double as a share of the economy, from 2.5 to 5.3 percent, as shown in figure 1. As with Social Security, health care's progressive absorption of a greater share of the nation's resources partly reflects the rising share of the elderly in the population, but Medicare's growth rates also reflect the escalation of health care costs at rates well exceeding general rates of inflation. Increases in the number and quality of health care services have been fueled by the explosive growth of medical technology. Moreover, the actual costs of health care consumption are not transparent. Third-party payers generally insulate consumers from the cost of health care decisions. In traditional Medicare, for example, the impact of the cost-sharing provisions designed to curb the use of services is muted because about 80 percent of beneficiaries have some form of supplemental health care coverage (such as Medigap insurance) that pays these costs. For these reasons, among others, Medicare represents a much greater and more complex fiscal challenge than even Social Security over the longer term. When viewed from the perspective of the entire budget and the economy, the growth in Medicare spending will become progressively unsustainable over the longer term. Our updated budget simulations show that to move into the future without making changes in the Social Security, Medicare, and Medicaid programs is to envision a very different role for the federal government.
Assuming, for example, that the Congress and the President adhere to the often-stated goal of saving the Social Security surpluses, our long-term model shows a world by 2030 in which Social Security, Medicare, and Medicaid increasingly absorb available revenues within the federal budget. Under this scenario, these programs would require more than three-quarters of total federal revenue. (See fig. 2.) Budgetary flexibility would be drastically constrained and little room would be left for programs for national defense, the young, infrastructure, and law enforcement.

(Notes to fig. 2: The "Eliminate non-Social Security surpluses" simulation can only be run through 2066 due to the elimination of the capital stock. Revenue as a share of GDP during the simulation period is lower than the 1999 level due to unspecified permanent policy actions that reduce revenue and increase spending to eliminate the non-Social Security surpluses. Medicare expenditure projections follow the Trustees' 1999 intermediate assumptions. The projections reflect the current benefit and financing structure.)

When viewed together with Social Security, the financial burden of Medicare on future taxpayers becomes unsustainable, absent reform. As figure 3 shows, the cost of these two programs combined would nearly double as a share of the payroll tax base over the long term. Assuming no other changes, these programs would constitute an unimaginable drain on the earnings of our future workers. While the problems facing the Social Security program are significant, Medicare's challenges are even more daunting. To close Social Security's deficit today would require a 17 percent increase in the payroll tax, whereas the HI payroll tax would have to be raised 50 percent to restore actuarial balance to the HI trust fund. This analysis, moreover, does not incorporate the financing challenges associated with the SMI and Medicaid programs.
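The payroll-tax comparison above can be made concrete with simple arithmetic. This is a back-of-the-envelope sketch; the 12.4 percent Social Security (OASDI) and 2.9 percent Medicare HI combined employer-employee rates are assumptions reflecting the statutory rates in effect at the time, not figures stated in this testimony.

```python
# Illustrative arithmetic for the tax increases cited above.
# Assumed combined employer/employee payroll tax rates (not from
# this testimony): 12.4% OASDI, 2.9% Medicare HI.
OASDI_RATE = 12.4  # Social Security payroll tax, percent of covered wages
HI_RATE = 2.9      # Medicare Hospital Insurance payroll tax, percent

# "a 17 percent increase in the payroll tax" for Social Security:
oasdi_after = OASDI_RATE * 1.17
# "the HI payroll tax would have to be raised 50 percent":
hi_after = HI_RATE * 1.50

print(f"OASDI: {OASDI_RATE}% -> {oasdi_after:.1f}% of payroll")
print(f"HI:    {HI_RATE}% -> {hi_after:.2f}% of payroll")
```

Under these assumed rates, the increases amount to roughly 2.1 and 1.45 additional percentage points of payroll, respectively; the HI increase is proportionally larger even though the HI base rate is smaller.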
The elements of restructuring of Medicare as proposed by the President and Breaux-Frist are best understood in light of Medicare's current structure. From the perspective of the program's benefit package, most beneficiaries have two broad choices: they can receive health care coverage through Medicare's traditional fee-for-service program or through its managed care component, called Medicare+Choice. The latter consists of an array of private health plans whose availability to Medicare beneficiaries varies by county across the nation. The choice between traditional Medicare and a Medicare+Choice plan typically involves certain trade-offs related to selection of providers, services covered, and out-of-pocket costs. Another key difference pertains to program payment methods.

Provider choice. Under traditional Medicare, beneficiaries may obtain covered services from any physician, hospital, or other health care provider that receives Medicare payments. Because most providers accept Medicare payments, beneficiaries have virtually unlimited choice. In contrast, beneficiaries in managed care face a more restricted list of providers. Private plan enrollees can generally use only their plan's network of doctors, hospitals, or other providers for nonemergency care.

Services offered. Although offering less provider choice, Medicare+Choice plans typically cover more services. For example, Medicare+Choice plans often cover routine physicals, outpatient prescription drugs, and dental care—services that traditional Medicare does not cover.

Out-of-pocket costs. Out-of-pocket costs are generally higher for beneficiaries in traditional Medicare than for those in Medicare+Choice. Traditional Medicare, which has a two-part benefit package, does not pay the full costs of most covered services. Part A has no premium and helps pay for hospitalization, skilled nursing facility care, some home health care, and hospice care.
Part B, which is optional in traditional Medicare, requires a monthly premium ($45.50) and helps pay for physician services, clinical laboratory services, hospital outpatient care, and certain other medical services. In addition to the monthly premium, beneficiaries are responsible for an annual $100 deductible and for 20 percent of the Medicare-approved amount for most part B services. To cover these out-of-pocket expenses, many beneficiaries purchase private supplemental insurance, known as Medigap, or may have similar insurance through a former employer. In contrast, beneficiaries covered through a Medicare+Choice plan are required to pay part B premiums but often do not pay the plan a monthly premium, or pay a monthly fee that is less than the cost of an equivalent Medigap policy. Plan enrollees may also pay a copayment for each visit or service.

Program payments. Another key difference between traditional Medicare and Medicare+Choice involves the program's payment methods. In traditional Medicare, hospitals, physicians, and other providers receive a separate payment for each covered medical service or course of treatment provided. In contrast, Medicare+Choice plans receive a fixed monthly amount for each beneficiary they enroll, commonly known as a capitation payment. This amount covers the expected costs of all Medicare part A and part B services. If Medicare's payment is projected to result in a plan's earning above normal profits—that is, above the rate of return earned on its commercial contracts—the plan generally must use the excess to fund additional benefits. If the extra benefits—such as prescription drugs and lower cost-sharing—provided by Medicare+Choice plans resulted exclusively from efficiencies achieved by the plans, there would be no cause for taxpayers to be concerned.
However, evidence shows that, because of flaws in Medicare's methodology for computing payments, payments to plans are too high and plans turn these excess payments into extra benefits to attract beneficiaries. Instead of producing program savings as originally envisioned, Medicare's managed care option has added substantially to program spending. Nevertheless, as we reported last year, program savings and extra benefits for Medicare beneficiaries are not mutually exclusive goals. According to their own data, many plans could make a normal profit and provide enhanced benefit packages, even if Medicare payments were reduced. However, lowering program spending would require a better method of adjusting plan payments for differences in the health status of beneficiaries, a process commonly known as risk adjustment. Medicare's current risk adjustment methodology cannot adequately account for the fact that, on average, beneficiaries in Medicare+Choice are healthier than those in traditional Medicare. Extensive research and development over the past 10 years have led to new prescription drug therapies and improvements over existing therapies. In some instances, new medications have expanded the array of conditions and diseases that can be treated effectively. In other cases, they have replaced alternative health care interventions. For example, new medications for the treatment of ulcers have virtually eliminated the need for some surgical treatments. As a result of these innovations, the importance of prescription drugs as part of health care has grown. However, new drug therapies have also contributed to a significant increase in drug spending as a component of health care costs. The Medicare benefit package, largely designed in 1965, provides virtually no coverage for outpatient prescription drugs. This does not mean, however, that all Medicare beneficiaries lack coverage for prescription drug costs.
In 1996, almost one third of beneficiaries had employer-sponsored health coverage, as retirees, that included drug benefits. More than 10 percent of beneficiaries received coverage through Medicaid or other public programs. To protect against drug costs, the remainder of Medicare beneficiaries can choose to enroll in a Medicare+Choice plan with drug coverage if one is available in their area or purchase a Medigap policy. The burden of prescription drug costs falls most heavily on the Medicare beneficiaries who lack drug coverage or who have substantial health care needs. Drug coverage is less prevalent among beneficiaries with lower incomes. In 1995, 38 percent of beneficiaries with income below $20,000 were without drug coverage, compared to 30 percent of beneficiaries with higher incomes. Additionally, the 1995 data show that drug coverage is slightly higher among those with poorer self-reported health status. At the same time, however, beneficiaries without drug coverage and in poor health had drug expenditures that were $400 lower than the expenditures of beneficiaries with drug coverage and in poor health. This might indicate access problems for this segment of the population. Even for beneficiaries who have drug coverage, the extent of the protection it affords varies, and there are signs that this coverage could be eroding. The value of a beneficiary’s drug benefit is affected by the benefit design, including cost-sharing requirements and benefit limitations. Although reasonable cost sharing serves to make the consumer a more prudent purchaser, copayments, deductibles, and annual coverage limits can reduce the value of drug coverage to the beneficiary. Recent trends of declining employer coverage and more stringent Medicare+Choice benefit limits suggest that the proportion of beneficiaries without effective protection may grow. 
Expanding access to more affordable prescription drugs could involve either subsidizing prescription drug coverage or allowing beneficiaries access to discounted pharmaceutical prices. The design of a drug coverage option, that is, the scope of the benefit, the targeted population, and the mechanisms used to contain costs, as well as its implementation, will determine the option’s effect on beneficiaries, Medicare or federal spending, and the pharmaceutical market. Any option would need to consider how to balance competing concerns about the sustainability of Medicare, federal obligations, and the hardship faced by some beneficiaries. The President’s plan and the Breaux-Frist proposal are similar in three key areas but contain two major differences. To varying degrees, both proposals (1) introduce a competitive premium model, similar in concept to the Federal Employees Health Benefit Program (FEHBP), to achieve cost efficiencies; (2) preserve the traditional fee-for-service Medicare program with enhanced opportunities to adopt prudent purchasing strategies; and (3) modernize Medicare’s benefit package by making coverage available for prescription drug and catastrophic Medicare costs. The proposals differ, however, in the extent to which traditional Medicare could face competitive pressure from private plans. In addition, under the President’s plan, the Health Care Financing Administration (HCFA) would administer the program, whereas under the Breaux-Frist proposal, an independent Medicare board would perform that function. An elaboration of these points helps explain where the two proposals share common ground and where they diverge. Currently, Medicare follows a complex formula to set payment rates for Medicare+Choice plans, and plans compete primarily on the richness of their benefit packages. Efficient plans that reduce costs below the fixed payment amount can use the “savings” to enhance their benefit packages, thus attracting additional members and gaining market share. 
Although competition among Medicare plans may produce advantages for beneficiaries, the government reaps no savings. In contrast, the competitive premium approach included in the Breaux-Frist and President’s proposals offers certain advantages. Under either version, beneficiaries can better see what they and the government are paying for. In addition, plans that can reduce costs can lower premiums and attract more enrollees. As the more efficient plans gain market share, the government’s spending on Medicare will decrease. Fundamentally, this approach is intended to spur price competition. Instead of administratively setting a payment amount and letting plans decide—subject to some minimum requirements—the benefits they will offer, plans would set their own premiums and offer a common Medicare benefit package. Under both proposals, beneficiaries would generally pay a small portion of the premium and the government would pay the rest. Plans that operate at lower cost could reduce premiums, attract beneficiaries, and increase market share. Beneficiaries who joined these plans would enjoy lower out-of-pocket expenses. Taxpayers, however, would also benefit from the competitive forces. As beneficiaries migrated to lower cost plans, the average government payment would fall. (See table 1.) One major difference between the two proposals concerns how the beneficiary premium would be set for those who remained in the traditional fee-for-service program. Under Breaux-Frist, there would be no separate part B premium. All plans—including traditional Medicare— would calculate a total premium expected to cover the cost of providing Medicare-covered services to the average beneficiary. The maximum government contribution would be based on a formula. Beneficiaries would pay no premiums if they chose plans costing 85 percent or less than the national enrollment-weighted average premium. For plans with higher premiums, beneficiaries would pay an increasing portion of the premium. 
The traditional fee-for-service Medicare program would be regarded as one more plan. The monthly amount beneficiaries would pay to enroll in it, therefore, would depend on how expensive it was relative to the private plans. In contrast, under the President’s proposal, the beneficiary premium for traditional Medicare—the part B premium—would continue to be set administratively. As under Breaux-Frist, all other plans would submit competitive premiums. The maximum government contribution to private plans would be set at 96 percent of average per-beneficiary spending in traditional Medicare. Beneficiaries who joined plans that cost less than that amount would pay reduced, or no, part B premiums. Beneficiaries who joined more expensive plans would pay higher part B premiums. Some believe the design of the President’s proposal would tend to insulate the traditional fee-for-service program, and those beneficiaries who remain in it, from market forces. At least in the short run, however, the practical differences between the President’s proposal and the Breaux-Frist proposal may be small. Because the vast majority of beneficiaries are enrolled in the traditional fee-for-service program, the national average premium under Breaux-Frist would, in all likelihood, largely reflect the cost of traditional Medicare. Table 2 presents a hypothetical example to illustrate how similar beneficiary and government contributions would be under Breaux-Frist and the President’s proposal. It assumes private plans could provide Medicare-covered benefits for 90 percent of the cost incurred in the traditional fee-for-service program and that they enroll 17 percent of all beneficiaries (the percentage of beneficiaries currently enrolled in private plans). In this example, beneficiaries in private plans would pay slightly less under the Breaux-Frist proposal compared to their contribution under the President’s proposal. 
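The two contribution rules just described can be made concrete with a small arithmetic sketch. Everything below is illustrative: the dollar figures, the part B premium, and the assumption that beneficiaries pay the full excess above each benchmark are hypothetical simplifications for exposition, not terms of either proposal or the figures in table 2.

```python
# Illustrative comparison of beneficiary premium contributions under the two
# premium-support rules described in the text. All dollar amounts, and the
# linear "pay the excess above the benchmark" schedules, are hypothetical.

FFS_COST = 500.0      # assumed monthly cost of traditional fee-for-service Medicare
PRIVATE_COST = 450.0  # assumed private-plan cost: 90 percent of FFS, as in the text

def breaux_frist_beneficiary_share(plan_premium, weighted_avg_premium):
    """Breaux-Frist rule: no beneficiary premium for plans costing 85 percent
    or less of the national enrollment-weighted average; above that, assume
    the beneficiary pays the excess (a simplified, hypothetical schedule)."""
    threshold = 0.85 * weighted_avg_premium
    return max(0.0, plan_premium - threshold)

def presidents_plan_beneficiary_share(plan_cost, ffs_cost, part_b_premium):
    """President's-plan rule: the maximum government contribution to private
    plans is 96 percent of average per-beneficiary FFS spending; assume plans
    costing less reduce the part B premium dollar for dollar (hypothetical)."""
    benchmark = 0.96 * ffs_cost
    savings = max(0.0, benchmark - plan_cost)
    return max(0.0, part_b_premium - savings)

# With 17 percent of beneficiaries in private plans, the enrollment-weighted
# average premium largely reflects the FFS cost, as the text notes.
weighted_avg = 0.17 * PRIVATE_COST + 0.83 * FFS_COST

print(round(breaux_frist_beneficiary_share(PRIVATE_COST, weighted_avg), 2))
print(round(breaux_frist_beneficiary_share(FFS_COST, weighted_avg), 2))
print(round(presidents_plan_beneficiary_share(PRIVATE_COST, FFS_COST, 50.0), 2))
```

The sketch also shows why the short-run differences are small: with most beneficiaries in traditional Medicare, the Breaux-Frist weighted average sits close to the FFS cost, so both rules benchmark private plans against roughly the same number.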
Beneficiaries in the traditional program would pay slightly more under Breaux-Frist. Over the longer term, larger differences will emerge only if private plans decide to compete aggressively on the basis of price for market share or traditional fee-for-service Medicare becomes significantly less able to control the growth of costs relative to private plans. Although the premium support proposals are intended to slow health care spending through competition, it is not certain that this will occur. Private plans may very well find that their most profitable strategy is to “shadow price” (set prices only slightly under) traditional Medicare and be satisfied with smaller market share. (Paradoxically, serving larger numbers of beneficiaries could lead to higher costs and less profit.) The greater ability of private plans to control cost growth and thereby offer significantly lower premiums is not a foregone conclusion. Medicare’s fee-for-service cost containment record over the longer term has not differed substantially from that of the private sector. In some periods, Medicare’s cost growth has been lower; in others, higher. Indeed, today we are witnessing a resurgence of cost growth in private plans, while Medicare spending projections have flattened. More than 80 percent of Medicare beneficiaries currently receive their health care coverage through the traditional fee-for-service program. Both leading reform proposals recognize the importance of this program to beneficiaries and would ensure its continued availability nationwide. They also recognize that controlling the growth of overall Medicare spending requires a more efficient traditional program. Consequently, both proposals seek to make Medicare a more prudent purchaser of health care by introducing modern cost control strategies. The President’s proposal outlines several new approaches to controlling costs. 
It would, for example, allow the Secretary of Health and Human Services to contract with preferred provider organizations (PPO), negotiate discounted payment rates for specific services, and develop systems to manage the care (in a fee-for-service setting) of certain diseases or beneficiaries. The proposal would also adjust payments to providers and change beneficiary cost sharing requirements. Adopting these changes will entail considerable challenges given the sheer size of the Medicare program, its complexity, and the need for transparent policies in a public program. Moreover, how much the changes would save is uncertain and likely depends on how, and to what extent, these measures are implemented. For example, without supplemental insurance reform, a PPO option may not attract many beneficiaries because a majority have first-dollar coverage through supplemental policies and thus are insensitive to provider charges. Furthermore, cuts in provider payments are certain to meet with fierce opposition. The Breaux-Frist proposal provides a vehicle to reform traditional Medicare, but does not suggest specific cost control devices. The proposal calls for HCFA to prepare an annual business plan, which would outline intended payment and management strategies, describe partnership arrangements with entities to provide prescription drug benefits, and recommend benefit improvements. It would also include any legislative specifications necessary to enact the plan. Until 2008, HCFA would need explicit congressional approval to implement its business plan. After that, the plan would take effect without Congress’ explicit approval. Clearly, the Breaux-Frist proposal could increase HCFA’s options for managing the traditional program and controlling spending. Like the President’s proposal, however, the extent of its success will depend on the specific details and other reform elements that HCFA designs and the Congress allows to be adopted. 
The leading proposals include provisions for two commonly discussed benefit expansions: an outpatient prescription drug benefit and coverage for extraordinary out-of-pocket expenses, known as catastrophic or stop-loss coverage. In this regard, Breaux-Frist and the President’s proposal share many similarities. (See table 3.) Under both proposals the coverage is voluntary, although income-targeted subsidies are provided to encourage the purchase of prescription drug coverage. By making the drug benefit financially attractive, the proposals seek to maximize participation and avoid “adverse selection” problems—that is, having only high-cost beneficiaries purchase coverage and driving up premium costs. Low- income beneficiaries would pay nothing for the drug benefit, while those earning more would pay up to 75 percent of the cost. To further minimize adverse selection problems, the President’s proposal includes, and Breaux-Frist considers, a provision limiting opportunities to select drug coverage. Under Breaux-Frist, all participating health care organizations—including HCFA—would be required to offer a high option plan that provided prescription drug and stop-loss coverage, in addition to coverage for Medicare core benefits. The President’s proposal calls for a new voluntary prescription drug benefit, known as part D, and a new Medigap policy that would feature increased cost-sharing and stop-loss coverage. Under both proposals HCFA would contract with private entities to provide drug coverage for beneficiaries enrolled in its high option plan (Breaux-Frist) or in Medicare part D (President). Entities that managed the drug benefit for HCFA or private plans would be permitted to use cost containment mechanisms, such as formularies. The President’s proposal includes incentives for private employers to retain drug coverage for their retirees. The challenge of implementing Medicare reforms must be respected. 
As we have noted before, to determine the likely impact of a particular policy, details matter. Design choices and implementation policies can affect the success of proposed reforms. In addition, because difficult choices tend to meet with opposition from affected parties, the will to stay the course is equally important for successful reform. Following are just a few of the issues germane to Medicare reform that remind us of the proverb, “The devil is in the details.” For proposals that include elements of premium support, the task of determining the government’s contribution toward each plan’s premium raises several technical issues that have profound policy implications. In general, the government’s share is greater or smaller, depending on whether the plan’s premium is below or above the average of all plan premiums. However, some plans can incur higher-than-average expenses because of local market conditions outside of their control. Unless the government contribution is adjusted for these circumstances, beneficiaries could face higher out-of-pocket costs and plans could be at a competitive disadvantage. The Breaux-Frist proposal allows adjustments for medical price variation only. The President’s proposal allows adjustments for medical price variation and regional differences in medical service use. An adjustment for differences in local medical prices is clearly desirable under a premium support system. Without it, beneficiary premiums in high-price areas will tend to be above the national average. Adjusting the government contribution for input price differences can help ensure fair price competition between local and national plans and avoid having beneficiaries pay a higher premium, or higher share of a premium, simply because they live in a high-price area. In addition, the use of medical services varies dramatically among communities because of differences in local medical practices. 
Under premium support approaches, plan premiums in high-use areas will likely exceed the national average. Whether, or to what extent, to adjust the government contribution for this outcome is a matter of policy choice. On the one hand, without an adjustment, beneficiaries living in high-use areas who join local private plans could face substantial out-of-pocket costs for circumstances outside of their control. Consequently, private plans in these areas might have difficulty competing with a traditional Medicare plan that charged a fixed national premium based on an overall average of medical service use. On the other hand, there have been longstanding concerns about unwarranted variations in medical practice. By not adjusting the government contribution for utilization differences, financial pressures could encourage providers to reduce inappropriate levels of use. Under either leading proposal, Medicare’s administrative functions will include the oversight of plans’ contracts. In today’s Medicare+Choice program, this function is performed by HCFA. Under the President’s plan, HCFA would retain this function; under Breaux-Frist, a quasi-independent board would administer Medicare. Whatever the administrative entity is under Medicare reform, the following are questions that policymakers will want to consider. First, how will this entity’s mission be defined? Will the emphasis be on controlling costs, protecting beneficiaries, maximizing choice, or some combination of these goals? Policy choices would flow from the stated mission. Second, how much independence would be permitted to the administrative entity to carry out its mission? Would it be appropriately shielded from the pressure exerted by special interest groups? Third, how would the administrative entity hold plans accountable for meeting Medicare standards? 
Would it rely chiefly on public accountability, in which the process and procedures for compliance are clearly defined and actively monitored, or on market accountability, by providing comparative information on competing plans and letting beneficiary enrollment choices weed out poor performers? Answers to these questions will determine, to a large extent, whether a restructured Medicare program will be administered effectively. Experiences in the Medicare+Choice program suggest lessons for implementing reforms effectively and provide a blueprint for actions that can be taken right away. In response to challenges faced in administering Medicare+Choice, HCFA has several initiatives underway that have faltered for various reasons—including resistance by special provider interests and insufficient agency capacity and expertise. However, the need to further these initiatives will grow in importance under comprehensive reform. Specifically, (1) improved risk adjustment is needed to ensure that Medicare’s payments are fair both to the taxpayer and to individual plans, (2) better consumer information is needed to help beneficiaries make comparisons across plans, and (3) improved information systems and analysis capability are needed to promptly assess the impact of new payment and coverage policies. Adjusting for differences in beneficiary health status—commonly known as risk adjustment—enables plans to be fairly compensated when they enroll either healthier or sicker-than-average beneficiaries. Our work and that of others show that, partially because of an inadequate risk adjustment methodology, taxpayers have not benefited from the potential for capitated managed care plans to save money. Under the competitive premium approach, the ability to moderate Medicare spending rests in part on how accurately analysts determine the government’s share of a health plan’s premium. 
Today’s Medicare+Choice program is phasing in an interim risk-adjustment methodology based on the limited health status data currently available. The challenge, for Medicare+Choice or any premium-based reform proposal, is to implement an improved method that more accurately adjusts payments, does not impose an undue administrative burden on plans, and cannot be manipulated by plans seeking to inappropriately increase revenues. In an ideal market, informed consumers prod competitors to offer the best value. Our recent review of Medicare+Choice, however, showed that a lack of comparative consumer information dampened the program’s potential to capitalize on market forces to achieve cost and quality improvements. Despite HCFA’s review and approval of health plans’ marketing literature, many health plans distributed materials containing inaccurate or incomplete benefit information. Some plans did not furnish complete information on plan benefits and restrictions until after a beneficiary had enrolled. Others never provided full descriptions of benefits and restrictions. In addition, making comparisons across plans was difficult because, in the absence of common standards, plans chose their own format and terms to describe a plan’s benefit package. If Medicare is restructured to incorporate a competitive premium support approach, the need for beneficiaries to be well informed about their health care options becomes more critical. To guide its efforts to improve consumer information, HCFA should look to FEHBP—the choice-based health insurance program for federal employees. In FEHBP, for example, health plans are required to follow standard formats and use standard terms in their marketing literature. Informing Medicare beneficiaries, however, is likely to involve challenges not encountered in informing current and former federal employees. For one thing, the size of the Medicare program makes any education campaign a daunting task. 
Moreover, many beneficiaries have a poor understanding of the current program and may not understand how the proposed changes would affect their situations. The ability to provide prompt and credible policy analyses of newly introduced changes is key during a period of significant transformation. Recent experience with the bold payment reforms established in the Balanced Budget Act of 1997 (BBA) illustrates this point. BBA was enacted in response to continuing rapid growth in Medicare spending that was neither sustainable nor readily linked to demonstrated changes in beneficiary needs. In essence, BBA changed the financial incentives inherent in payment methods that, prior to BBA, did not reward providers for delivering care efficiently. Not surprisingly, affected provider groups conducted a swift, intense campaign to roll back the BBA changes. In the absence of solid, data-driven analyses, anecdotes were used to support contentions that Medicare payment changes were extreme and threatened providers’ financial viability. In testifying before the Congress in the fall of 1999, we remarked on the need for obtaining information that could identify and distinguish between desirable and undesirable consequences. More recently, we recommended that HCFA establish a process to assess the potential effects of implementing legislated Medicare changes. This process would entail developing baseline information from available claims data. The information from such assessments would be all the more critical during a period of implementing fundamental reforms. Given the aging of our society and the increasing cost of modern medical technology, it is inevitable that the demands on the Medicare program will grow. The President’s proposal reflects the belief that additional revenue will be necessary to meet those demands and ensure that health care coverage is provided to future generations of seniors and disabled Americans. 
Specifically, the President would earmark a portion of the projected non-Social Security surpluses for Medicare. According to the Administration, this action is designed to make Medicare financing a priority. This aspect of the proposal would entail a major change in program financing. While Medicare will inevitably grow, it must not grow out of control. The risk is that federal resources may not be available for other national priorities, such as education for young people and national defense. In response, both Breaux-Frist and the President’s proposals include elements designed to moderate future Medicare spending. Their approaches are untested, however, and it would be imprudent to adopt these—or any other reforms—without a means to monitor their effects. What is needed along with reform is a mechanism that will gauge spending and revenues and will sound an early warning if policy course corrections are warranted. Although both proposals include a warning mechanism, the Breaux-Frist approach would provide a more comprehensive measure of program financing imbalances. Under the current Medicare structure, the program consists of two parts. Medicare’s HI Trust Fund, also known as part A, is financed primarily by payroll taxes paid by workers and employers. Supplementary Medical Insurance (SMI), also known as part B, is financed largely through general revenues. Currently, the financial health of Medicare is gauged by the solvency of the HI trust fund and not the imbalance between total revenues and total spending. The 1999 Trustees’ annual report showed that Medicare’s HI component has been, on a cash basis, in the red since 1992, and in fiscal year 1998, earmarked payroll taxes covered only 89 percent of HI spending. 
Although the Office of Management and Budget has recently reported a $12 billion cash surplus for the HI program in fiscal year 1999 due to lower than expected program outlays, the Trustees’ report issued in March 1999 projected continued cash deficits for the HI trust fund. (See fig. 4.) When the program has a cash deficit, as it did from 1992 through 1998, Medicare is a net claimant on the Treasury—a threshold that Social Security is not currently expected to reach until 2014. To finance these cash deficits, Medicare drew on its special issue Treasury securities acquired during the years when the program generated a cash surplus. In essence, for Medicare to “redeem” its securities, the government must raise taxes, cut spending for other programs, or reduce the projected surplus. When outlays outstrip revenues in the HI fund, it is tempting to shift some expenditures to SMI. Such cost-shifting extends the solvency of the HI Trust Fund, but does nothing to address the fundamental financial health of the program. Worse, it masks the problem and may cause fiscal imbalances to go unnoticed. For example, in 1997 BBA reallocated a portion of home health spending from the HI Trust Fund to SMI. This reallocation extended HI Trust Fund solvency but at the same time increased the draw on general revenues in SMI while generating little net savings. The President’s plan preserves the program’s divided financing structure and continues to rely on projections of HI Trust Fund solvency to warn of fiscal imbalances. By devoting a portion of the non-Social Security surpluses to Medicare, the President’s plan would extend the HI Trust Fund’s solvency. This proposed infusion of general revenues represents a major departure in the financing of the HI program. Established as a payroll tax funded program, HI would now receive an explicit grant of funds from general revenues not supported by underlying payroll tax receipts. 
In effect, this grant would constitute a new claim on the general fund that would limit the ability to set budgetary priorities in the future. It would also further weaken the incomplete signaling mechanism of Medicare’s future fiscal imbalances provided by the HI Trust Fund solvency measure. Under an approach that would combine the two trust funds, a continued need would exist for measures of program sustainability that would signal potential future fiscal imbalance. Such measures might include the percentage of program funding provided by general revenues, the percentage of total federal revenues or gross domestic product devoted to Medicare, or program spending per enrollee. As such measures were developed, questions would need to be asked about the appropriate level of general revenue funding as well as the actions to be taken if projections showed that program expenditures would exceed the chosen level. The Breaux-Frist proposal would unify the currently separate HI and SMI trust funds, and, in so doing, would eliminate the ability to shift costs between two funding sources. The Breaux-Frist early warning mechanism consists of defining program insolvency as a year in which general revenue contributions exceed 40 percent of total Medicare expenditures. At that time, the Congress would have several choices. It could raise the limit on general revenue contributions, raise payroll taxes, raise beneficiary premiums, reduce benefits, cut provider payments, or introduce efficiencies to moderate spending. Supporters of the Breaux-Frist proposal have suggested that a more comprehensive measure of program financing would be more useful to policymakers. Current spending projections show that, absent reform that addresses total program cost, this limit would be reached in less than 10 years. (See fig. 5.) 
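The 40-percent trigger just described reduces to a simple ratio test on each year's financing. The sketch below encodes that test; the dollar projections are invented for illustration and are not drawn from the Trustees' reports or figure 5.

```python
# Sketch of the Breaux-Frist "insolvency" test: the program is flagged in any
# year in which general revenue contributions exceed 40 percent of total
# Medicare expenditures. All dollar figures below are hypothetical.

GENERAL_REVENUE_LIMIT = 0.40

def breaux_frist_insolvent(general_revenue, total_expenditures):
    """Return True if general revenue exceeds 40 percent of total spending."""
    return general_revenue > GENERAL_REVENUE_LIMIT * total_expenditures

# Hypothetical projections, in billions of dollars:
# year -> (general revenue contribution, total Medicare expenditures)
projections = {
    2005: (90.0, 260.0),
    2009: (130.0, 310.0),
}
for year, (gr, total) in sorted(projections.items()):
    print(year, breaux_frist_insolvent(gr, total))
```

Once the test returns True, the Congress would face the choices listed above: raise the limit, raise revenues or premiums, or reduce spending.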
These data underscore the need for reform to include appropriate measures of fiscal sustainability as well as a credible process to give policymakers timely warning when fiscal targets are in danger of being overshot. In determining how to reform the Medicare program, much is at stake— not only the future of Medicare itself but also assuring the nation’s future fiscal flexibility to pursue other important national goals and programs. Mr. Chairman, I feel that the greatest risk lies in doing nothing to improve the program’s long-term sustainability or, worse, in adopting changes that may aggravate the long-term financial outlook for the program and the budget. It is my hope that we will think about the unprecedented challenge facing future generations in our aging society. Relieving them of some of the burden of today’s financing commitments would help fulfill this generation’s fiduciary responsibility. It would also preserve some capacity to make their own choices by strengthening both the budget and the economy they inherit. While not ignoring today’s needs and demands, we should remember that surpluses can be used as an occasion to promote the transition to a more sustainable future for our children and grandchildren. I am under no illusions about how difficult Medicare reform will be. The President’s and Breaux-Frist proposals address the principal elements of reform, but many of the details need to be worked out. Those details will determine whether reforms will be both effective and acceptable—that is, seen as helping guarantee the sustainability and preservation of the Medicare entitlement, a key goal on which there appears to be consensus. Experience shows that forecasts can be far off the mark. Benefit expansions are often permanent, while the more belt-tightening payment reforms—vulnerable to erosion—could be discarded altogether. The bottom line is that surpluses represent both an opportunity and an obligation. 
We have an opportunity to use our unprecedented economic wealth and fiscal good fortune to address today’s needs but an obligation to do so in a way that improves the prospects for future generations. This generation has a stewardship responsibility to future generations to reduce the debt burden they will inherit, to provide a strong foundation for future economic growth, and to ensure that future commitments are both adequate and affordable. Prudence requires making the tough choices today while the economy is healthy and the workforce is relatively large. National saving pays future dividends over the long term but only if meaningful reform begins soon. Entitlement reform is best done with considerable lead time to phase in changes and before the changes that are needed become dramatic and disruptive. The prudent use of the nation’s current and projected budget surpluses combined with meaningful Medicare and Social Security program reforms can help achieve both of these goals. Mr. Chairman and Members of the Committee, this concludes my prepared statement. I will be happy to answer any questions you may have.
Pursuant to a congressional request, GAO discussed two leading proposals on Medicare reform: (1) the President's Plan to Modernize and Strengthen Medicare for the 21st Century; and (2) S. 1895, entitled the Medicare Preservation and Improvement Act of 1999, which is commonly referred to as the Breaux-Frist proposal. GAO noted that: (1) the elements of restructuring of Medicare as proposed by the President and Breaux-Frist are best understood in light of Medicare's current structure; (2) from the perspective of the program's benefit package, most beneficiaries have two broad choices: they can receive health care coverage through Medicare's traditional fee-for-service program or through its managed care component, called Medicare+Choice; (3) the choice between traditional Medicare and a Medicare+Choice plan typically involves certain trade-offs related to selection of providers, services covered, and out-of-pocket costs; (4) the President's plan and the Breaux-Frist proposal are similar in three key areas but contain two major differences; (5) to varying degrees, both proposals: (a) introduce a competitive premium model, similar in concept to the Federal Employees Health Benefit Program, to achieve cost efficiencies; (b) preserve the traditional fee-for-service Medicare program with enhanced opportunities to adopt prudent purchasing strategies; and (c) modernize Medicare's benefit package by making coverage available for prescription drug and catastrophic Medicare costs; (6) the proposals differ, however, in the extent to which traditional Medicare could face competitive pressure from private plans; and (7) under the President's plan, the Health Care Financing Administration would administer the program, whereas under the Breaux-Frist proposal, an independent Medicare board would perform that function.
The Park Service is the caretaker of many of the nation’s most precious natural and cultural resources. Today, more than 100 years after the first national park was created, the national park system has grown to include 376 units, most of which were created individually through legislation or presidential proclamation. The national park system covers roughly 83 million acres of land and includes an increasingly diverse mix of sites. In fact, there are now 20 different categories of park units. The most common categories include (1) national parks, such as Grand Canyon in Arizona, (2) national historical parks, such as Independence in Pennsylvania, (3) national battlefields, such as Antietam in Maryland, (4) national historic sites, such as Ford’s Theatre in Washington, D.C., (5) national monuments, such as the Statue of Liberty in New York, (6) national preserves, such as Yukon-Charley Rivers in Alaska, and (7) national recreation areas, such as Lake Mead in Arizona and Nevada. Figure 1 depicts the geographic dispersion and diversity of the units in the National Park System. The Park Service also operates servicewide programs from headquarters. For example, the Associate Director for Cultural Resources, Stewardship, and Partnerships operates the Cultural Resources Preservation Program, which provides funds for archeological, ethnographic, and historical research; the preparation of management studies, object cataloging, historic structure reports, and cultural landscape reports; and other research, planning, and data collection activities. The line of authority in the Park Service runs from the director and the deputy director to seven regional office directors. The function of the regional offices is to provide oversight and support of park operations. Regional directors directly supervise the performance of park superintendents in their regions. Regional offices also run programs for the parks in their regions. 
For example, regional offices run the Cyclic Maintenance Program, which provides funds for regularly scheduled preventive maintenance and preservation projects in the parks. In addition to the regional offices, the Park Service operates the Denver Service Center, which provides planning, design, and construction services for major park programs and complex projects for parks and the Harpers Ferry Interpretive Design Center, which provides the planning, design, and production of interpretive media for the parks. A variety of field resource centers, such as the North Atlantic Historic Preservation Center and the Southeast Archeological Center, also provide centralized technical services to parks. Below the regional office level, parks in the same geographic area are organized into one or more groups called clusters. Each park in the cluster is represented by its superintendent. Cluster members meet periodically to set priorities for park funding requests and to provide mutual operating support. For example, a manager of a park with a short-term resource need, such as for a specific technical skill or piece of equipment, could raise this issue at a cluster meeting and obtain temporary assistance from another park in the cluster. As table 1 shows, the account structure of the Park Service’s budget is organized around functional activities, such as operations, construction, and land acquisition. As figure 2 shows, most of the Park Service’s budget is for operations. For fiscal year 1997, the Park Service obligated about $1.7 billion. Of this, about $1.2 billion covered the cost of operating the national park system—including parks, headquarters, regional offices, programs, and certain service centers. Figure 3 breaks down the nearly 69 percent of the budget that goes to operations. About 70 percent of the operating funds go directly to the parks to cover the costs of their day-to-day operations. 
This operating budget is the primary funding source for any park and it is generally referred to as the park’s base budget. Parks compete within their regions or at the servicewide level for another 13 percent of the operating funds, which typically pay for one-time projects, such as those involving cyclic maintenance, ethnography, preservation of natural resources, or removal of hazardous waste. There are over 25 different sources of these project funds, each with its own criteria and application process. The Park Service’s detailed budget justification presents the budget in a variety of ways. The dominant presentation is by budget account, program activity, and subprogram activity, as shown in table 1. Each budget account has its own section justifying proposed changes in funding. In addition, the operating budget includes separate sections for each major function—resource stewardship, visitor services, maintenance, and park support. The account-level sections include other presentations for requested funds, such as by object class. Finally, the budget is presented according to the specific operating, construction, or other funds being requested for individual parks and programs. These park-by-park presentations, rather than the functional, object class, or performance presentations, generally serve as the basis for allocating appropriated funds to parks and programs and for controlling spending. Park entrance and recreation fees represent another source of funds for park units. Until recently, fee revenues collected offset rather than supplemented a park’s income. Recent legislation changed this treatment. The 1996 Recreation Fee Demonstration Program enabled 100 demonstration parks to raise park entrance or recreation fees and retain 80 percent of total fees as added budget authority. The remaining 20 percent of the fees would be allocated for areas, sites, or projects selected at the discretion of the agency head. 
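The fee-retention rule described above can be illustrated with a short sketch. This is a hypothetical calculation based only on the split stated in the report (80 percent retained as added budget authority, 20 percent allocated at the agency head's discretion); the function and field names are illustrative and not drawn from any Park Service system.

```python
# Sketch of the 1996 Recreation Fee Demonstration Program split described
# above: a demonstration park retains 80 percent of the fees it collects as
# added budget authority; the remaining 20 percent goes to a pool allocated
# at the discretion of the agency head. Names are illustrative only.

def split_fee_revenue(collected: float) -> dict:
    """Divide collected fee revenue per the demonstration program rules."""
    retained = collected * 0.80   # stays with the collecting park
    pooled = collected * 0.20     # discretionary pool for other sites/projects
    return {"park_retained": retained, "agency_pool": pooled}

# A park collecting $250,000 in entrance fees would retain $200,000.
print(split_fee_revenue(250_000))
```

For example, the call above shows that a park collecting $250,000 in fees keeps $200,000, with $50,000 flowing to the discretionary pool.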
Under the Results Act, the Park Service for the first time has introduced servicewide goals to be achieved by park managers. Both the Park Service and individual parks have prepared 5-year strategic plans and annual performance plans. Budget formulation has changed slightly by requiring parks to assign a servicewide goal to each request for increased funds. Park staff we interviewed said it was too soon to tell whether the strategic planning process will influence how budgets are executed—i.e., allocated and spent—at the park level. Although the Park Service has made some progress in aligning its budget structure and processes with its strategic plan, its efforts to link its plans and budgets have been hampered by the incompatibility between its activity-oriented budget and accounting systems and its goal-oriented strategic plan. Because park managers do not have systems to track spending according to goals, either dual systems or crosswalks will be required to communicate the relationship between resources and results. At the level of the individual parks, long-range and annual planning are not new. For nearly 20 years, each unit in the park system has been required to have a general management plan to guide the preservation and use of each unit over a 10- to 15-year period. The planning process includes consultation with the public to clearly define a park’s purpose and significance, set goals and objectives, identify desired future conditions, and evaluate alternatives. Parks are also required to have a resource management plan, which defines park objectives concerning both natural and cultural resources, documents the status of those resources, and outlines actions to ensure their well-being; it is the blueprint for comprehensive management of a park’s resources. 
A park’s general management and resource management plans are intended to identify the basic facilities, staff, interpretive materials, and equipment needed to run the park in a manner consistent with both its enabling legislation and the purposes, goals, and objectives identified during the planning process. Therefore, the plans serve as the basis for funding requests, such as for new construction, land acquisition, base operating increases, or project funds. At the park level, annual planning closely follows the budget cycle. About 18 months prior to a fiscal year, staff in each park analyze their needs and develop requests for increases to the prior fiscal year’s budget for base operations. Park officials generally justify such increases as needed to carry out new or higher levels of ongoing operations, e.g., additional staff needed to run a new visitor center or to restore operations that were previously curtailed because past budgets did not keep pace with rising costs. Parks compete against one another for limited funds through the cluster, regional, and headquarters hierarchy. During the summer prior to the start of the fiscal year, staff in each park begin to plan how expected budget authority for base operations will be allocated to various park divisions, such as resources management, maintenance, or interpretation. The final base operating budget allocated to each park is the lump sum listed for each park in the Park Service’s budget justification, plus or minus any changes that were made during the appropriations process. The Park Service imposes no additional controls on how the base funds are spent. For example, the financial plans developed by park managers showing projected spending by object classes are for informational purposes only. Park managers have flexibility to obligate and spend the funds as needed for ongoing operations. 
However, as we previously reported, many park budgets are dominated by the pay and benefits costs associated with permanent staff. According to many Park Service officials we spoke with, this reduces the flexibility of park managers to reallocate resources in the short term. The fee demonstration program has resulted in a new budgetary process for the 97 projects selected to participate. Under the demonstration program, park proposals to spend added fee revenues have not been subject to competition or scrutiny during the budget formulation process. Rather, parks must submit proposed spending plans for estimated fee revenues to headquarters just prior to the budget year and may only spend fee funds on projects that have been approved. As we noted in our report on the way the Park Service sets and budgets for operational priorities, previous long-range and annual planning have not identified outcome-oriented park goals, results to be achieved, or the resources necessary to achieve those results. As a result, accountability in the Park Service has lacked a focus on the outcomes of park operations. Accountability for park outcomes is especially important for an agency like the Park Service, which has traditionally set priorities and developed budgets at the park level. Under this decentralized management structure, individual park managers can make decisions about park operations that may or may not be consistent with the agency’s mission, priorities, or goals. The nature of the Park Service’s mission and decentralized structure necessitated a phased, field-oriented implementation of the Results Act. 
We previously reported that the Park Service’s mission has dual objectives: to provide for the public’s enjoyment of the resources that have been entrusted to its care and to protect its natural and cultural resources so that they will be unimpaired for the enjoyment of future generations. Balancing these often competing objectives has long shaped the debate about how best to manage the national park system. The competing missions and decentralized management culture of the Park Service provided a challenging environment in which to introduce common, servicewide missions and goals as called for by the Results Act. To overcome these challenges, both headquarters and field staff were engaged in drafting and exchanging comments on early draft strategic plans. Park Service staff have characterized this approach as “diagonal” rather than “top-down” or “bottom-up”; all described this approach as both difficult and frustrating. However, the same staff commented that this approach was probably the only way to develop a servicewide plan that balanced the needs of both headquarters and field staff and therefore achieved a degree of acceptance and ownership by both groups. A key element of the Park Service’s approach to implementing the Results Act has been extensive field testing. During the summer of 1995, the Park Service undertook “prototype” exercises in strategic planning and performance measurement at six parks and three programs. The experiences of those prototype parks and programs along with help from planners at the Denver Service Center shaped the Park Service’s strategic planning process and led to the development of some initial performance measures. During fiscal year 1996, staff from a park or program from each cluster, known as lead parks, worked to refine the goals in the servicewide plan and the implementation process. 
The experience of the prototype and lead parks also led to the development of written guidance used to train staff at the rest of the parks during the first part of fiscal year 1997. A more detailed description of the Park Service’s approach to implementing the Results Act is contained in appendix II. Under the Results Act, the Park Service for the first time has developed a servicewide strategic plan stating the Park Service’s mission, mission goals, and outcome-oriented long-term goals that describe in measurable terms a desired future condition. The servicewide plan serves as the umbrella for plans developed by parks, programs, and offices. The long-term goals in the plan are to be achieved over a 5-year period from fiscal year 1998 to fiscal year 2002. The top portion of figure 4 depicts the structure of the Park Service’s strategic plan. Appendix III contains a more detailed description of the Park Service’s mission, mission goals, and long-term goals. Figure 4 traces a single goal through that structure: the servicewide long-term goal that, by 2002, 25 percent of the 1997 identified park populations of federally listed and threatened species with critical habitat on park lands or requiring NPS recovery actions have an improved status cascades down to a park-level annual goal to restore, in fiscal year 1998, 5 percent of discrete populations of cutthroat trout at locations identified in the recovery plan by removing exotic species and replacing them with cutthroat fingerlings. In the figure, “park” refers to any park unit, program, or office. If a servicewide long-term goal is applicable to a park, the goal must be included in the park’s plan, although the percentage target may be changed. Parks may also develop their own additional long-term goals that reflect the priorities of their particular mission and mission goals. 
According to Park Service officials, by September 30, 1997, each park had prepared a long-term strategic plan that mirrored the goals in the servicewide plans. Parks whose activities contributed to any of the servicewide long-term goals were required to include those long-term goals in their strategic plans. Parks could also include long-term goals that were not in the servicewide plan, but still fit within the Park Service’s broad categories of goals. Park-level strategic plans cover the same 5-year period as the servicewide strategic plan. To link the strategic plans with park operations, parks also prepared annual performance plans that detail the specific performance goals for fiscal year 1998 and the activities, operating funds, and full-time equivalent (FTE) staff needed to achieve the annual goals. Park Service officials told us that the first servicewide annual performance plan was created by summing across all parks the annual performance targets, such as the number of acres to be restored, for each long-term goal, along with the operating funds and FTEs associated with the annual performance target. The Park Service calculated servicewide percentage performance targets based on the park-level input. Figure 4 shows this process with an example of a goal—and the actions and resources needed to achieve that goal—taken from the strategic and annual performance plans of one of the parks we visited. The Results Act is based on the premise that budget decisions should be more clearly informed by expectations about program performance. The Park Service has taken initial steps to align its planning and budget processes. Under the Results Act, the Park Service’s process for formulating requests for base funding increases has changed slightly. Parks requesting a funding increase are now required to specify which long-term goal will be addressed by such an increase. 
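The roll-up just described can be sketched in a few lines: summing each goal's annual targets, operating funds, and FTEs across parks, then expressing the servicewide target as a percentage of the combined baseline. The data layout, goal name, and numbers below are assumptions for illustration only; they do not come from PMDS or any actual park plan.

```python
# Illustrative sketch of building the first servicewide annual performance
# plan, per the report: park-level annual targets, funds, and FTEs are summed
# per long-term goal, and a servicewide percentage target is computed from
# the park-level input. All records here are hypothetical.

parks = [
    {"goal": "restore_disturbed_lands", "target_acres": 120,
     "baseline_acres": 1_000, "funds": 80_000, "ftes": 1.5},
    {"goal": "restore_disturbed_lands", "target_acres": 60,
     "baseline_acres": 500, "funds": 40_000, "ftes": 0.5},
]

def roll_up(records, goal):
    """Aggregate park records for one long-term goal into servicewide totals."""
    rows = [r for r in records if r["goal"] == goal]
    target = sum(r["target_acres"] for r in rows)
    baseline = sum(r["baseline_acres"] for r in rows)
    return {
        "target_acres": target,
        "funds": sum(r["funds"] for r in rows),
        "ftes": sum(r["ftes"] for r in rows),
        "pct_of_baseline": round(100 * target / baseline, 1),
    }

print(roll_up(parks, "restore_disturbed_lands"))
# servicewide: 180 acres, $120,000, 2.0 FTEs, 12.0% of baseline
```

Here two hypothetical parks together plan to restore 180 of 1,500 baseline acres, so the servicewide target becomes 12 percent, backed by $120,000 and 2.0 FTEs.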
Park Service guidance also suggests that parks justify their requests for funding increases by describing how the increase would enable the park to meet its goals. Therefore, as such requests are evaluated at the cluster, regional, and headquarters levels, reviewers will have additional information about the specific Park Service goal to which the request will contribute. At the time of our review, however, parks were not required to demonstrate the level of performance that could be achieved with the requested funds. As a first step toward bringing the agency’s budget presentations in line with its performance goals, the Park Service has developed an information system, called the Performance Management Data System (PMDS), which enables parks to enter their annual performance goals, along with the estimated funds and FTEs needed to achieve the goals, into a servicewide information system. PMDS is an internal Web site for the Park Service that provides users with information on the Results Act, including technical guidance, a list of the Park Service’s goals, data entry screens for performance and budget information, and the ability to generate various reports, such as performance and resource information by long-term goal, region, or park. For each goal, there is a data entry page on which parks enter long-term goal, baseline, and annual target information, along with the funds and FTEs needed to achieve the targets for fiscal years 1998, 1999, and 2000. During the initial transition period, parks entered data for fiscal year 1998 only. According to Park Service officials, out-year data was to be entered by staff in each park during March 1998 for the next budget cycle. Park Service officials said that it was too soon to tell whether the strategic planning process will influence how budgets are formulated and executed. 
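The PMDS data entry page described above captures, per goal and fiscal year, the baseline, the annual target, and the funds and FTEs needed to meet it. A minimal sketch of such a record follows; the field names and values are assumptions, since the report does not describe PMDS's actual schema.

```python
# Minimal sketch of the kind of record a PMDS data entry page captures, as
# described above: one row per goal per fiscal year, holding baseline,
# annual target, and the funds and FTEs needed. Field names are assumed.

from dataclasses import dataclass

@dataclass
class GoalEntry:
    park: str
    long_term_goal: str
    fiscal_year: int      # 1998, 1999, or 2000 during the transition
    baseline: float
    annual_target: float
    funds: float
    ftes: float

# Hypothetical fiscal year 1998 entry for one park and one goal.
entry = GoalEntry("Example NP", "improve T&E species status", 1998,
                  baseline=20.0, annual_target=5.0, funds=65_000, ftes=1.0)
print(entry.fiscal_year, entry.annual_target)
```

During the transition period described in the report, only the fiscal year 1998 rows would have been populated, with out-year rows added in March 1998.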
Although the Park Service used PMDS data to produce its fiscal year 1999 annual performance plan, which was included in its fiscal year 1999 budget justification to Congress, the justifications remained focused on traditional functional and organizational budget presentations. Most park managers also said that it was too soon to evaluate the influence of strategic planning on resource decisions. For example, despite park managers’ relative freedom to spend base operating funds as needed within total allocations, some Park Service officials cited the relative inflexibility associated with budgets dominated by increasing personnel costs as a barrier to changing resource allocations to achieve performance goals. One park official said that the strategic and annual performance planning process will not likely lead to changes in resource decisions until parks begin to track performance against goals. To develop park-level and servicewide annual performance plans, the budget office instructed parks to assign only base operating funds to their annual goals. This decision was made because base operating funds are considered the only stable source of funding for purposes of planning over the 5-year period of the strategic plan. However, park-level officials commented that base operating funds are only one of a variety of sources of funding used to accomplish a park’s goals. Many parks also rely heavily on other sources of funding, including project funds, volunteer time, and concession payments to accomplish their goals. For example, the budget division chief observed that the servicewide strategic plan calls for parks to increase visitor satisfaction. A park manager may decide that fixing bathrooms will contribute to this goal, but may use project rather than base operating funds to pay for the repairs. Because project funds cannot be included in the park’s annual performance plan, this activity would be excluded from the park’s plan. 
According to the budget division chief, the problem with parks assigning project funds to their goals is that parks must compete for those funds annually. Since they cannot count on having those funds from one year to the next, they cannot confidently project 5 years forward how those funds will help them accomplish their long-term goals. Since headquarters program directors or regional directors control the allocation of project funds, these managers were responsible for assigning the funds and FTEs for each project to the servicewide goals on the basis of past uses of the funds at the park level. The budget division chief said that resolving this issue in a way that allows park managers to incorporate project funds into their plans may force the Park Service to require parks to submit prioritized requests for project funds at the same time they request increases to base funding (i.e., following the timetable for budget formulation). This would be a change from current practice, in which overall levels of project funding are formulated and requested in the President’s budget, but specific projects are not subsequently selected and placed in priority order until just prior to the budget year. Park Service officials told us that creating a direct link between planned and actual spending and the Park Service’s annual goals is complicated by the fact that the existing budget and accounting structures are activity-oriented and do not mesh well with the goal-orientation of the Park Service’s strategic plan. The activity orientation of the account structure can be seen in table 1. The current budget structure is also the foundation for the Park Service’s accounting system. 
As park and program managers execute their budgets, they set up accounts associated with maintenance, interpretation services, park support, etc., and use established codes to record the nature of the expense, e.g., trails and walks (maintenance), special interpretive programs (interpretation and educational programs), and administration (park support). Thus, park and program managers can use the budget and accounting systems to plan how much they intend to spend on an activity and then track actual spending against the plan. In the Park Service, these planning and budgeting systems are for park-level management purposes; for financial control purposes, the Park Service monitors spending only against total allocations to individual parks and projects. In contrast to the activity orientation of the budget and accounting systems, the Park Service’s goals describe desired future conditions—or outcomes—in the areas of resource protection, visitor services, preservation of resources through partnerships, and organizational effectiveness. PMDS captures estimated park and program spending by goal. It was not, however, designed to be an accounting system. Currently, park managers have no way to track or report how actual spending compared to planned spending by goal at the end of the fiscal year. Park Service staff told us that until the agency develops a system for linking its goals to its budget and accounting systems, parks will continue to produce two sets of books: one for planning purposes using data from PMDS and another for financial accountability and budget execution purposes using data from separate budget and accounting systems. As a potential fix for this duplication, the Park Service is in the process of modifying its park-based financial management system to allow parks to create a crosswalk between their budget projections and actual cost data and their strategic goals. 
According to a planning document prepared by the Park Service’s Accounting Operations Center, the crosswalk would allocate all of a park’s accounts in the budget execution system to the park’s goals. Park managers would determine the percentage of the budgeted and actual funds in a particular account that would be automatically allocated to each park goal. For example, a park could associate all funds in an existing road maintenance account to a single goal, such as enhancing the visitor experience, or it could split the funds among a variety of goals using set percentages, such as 50 percent for enhancing the visitor experience and 50 percent for preserving park resources. Park Service officials commented that, carried to its logical end, outcome-oriented management of the Park Service would require dramatic changes to the existing budget structure. The traditional activity-oriented budget structures would need to change to reflect the results or outcomes of the Park Service. However Park Service officials were concerned about such a change because of the associated disruption, costs, and loss of historical trend data. Park Service officials we spoke to emphasized that any change in the budget structure would need to involve extensive discussions with Congress to determine if any proposed changes would meet congressional needs for oversight and control. Some preliminary discussions have already occurred, but no changes were proposed in the fiscal year 1999 budget. 
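The crosswalk mechanism in the Accounting Operations Center planning document can be sketched directly: park managers set fixed percentages that spread each budget-execution account across one or more goals, and those percentages are applied automatically to budgeted and actual amounts. The 50/50 road maintenance split below mirrors the report's own example; account and goal identifiers are otherwise illustrative.

```python
# Sketch of the proposed account-to-goal crosswalk: manager-set percentages
# allocate each budget-execution account to park goals. The road maintenance
# 50/50 split follows the example in the report; other names are assumed.

crosswalk = {
    # account            -> {goal: share of the account's funds}
    "road_maintenance":   {"enhance_visitor_experience": 0.50,
                           "preserve_park_resources": 0.50},
    "interpretive_media": {"enhance_visitor_experience": 1.00},
}

def allocate(account_totals: dict) -> dict:
    """Spread each account's dollars across goals per the crosswalk."""
    by_goal = {}
    for account, amount in account_totals.items():
        for goal, share in crosswalk[account].items():
            by_goal[goal] = by_goal.get(goal, 0.0) + amount * share
    return by_goal

actuals = {"road_maintenance": 300_000, "interpretive_media": 90_000}
print(allocate(actuals))
# {'enhance_visitor_experience': 240000.0, 'preserve_park_resources': 150000.0}
```

Running the same allocation over both budgeted and actual account totals is what would let a park compare planned against actual spending by goal, the comparison the report notes parks currently cannot make.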
In its report on the fiscal year 1998 Department of the Interior and Related Agencies Appropriations Bill, the House Committee on Appropriations suggested that “agencies examine their program activities in light of their strategic goals to determine whether any changes or realignments would facilitate a more accurate and informed presentation of budgetary information.” In response to this suggestion, the Park Service has developed a preliminary proposal to change its budget structure to achieve a better alignment with the goals in its strategic plan. This proposal is currently being reviewed internally by the Department of the Interior. The proposal would keep current budget accounts intact, but would replace existing program activities or subactivities within those accounts with new program activities or subactivities to reflect the Park Service’s major goals in the areas of resource protection, the visitor experience, and partnerships to conserve resources and provide recreation. Park Service officials identified several key aids to implementing results-oriented management in the Park Service. These included using a field-oriented approach to training and development of strategic plans, providing top management support, and introducing budget constraints. These officials also identified challenges, such as the difficulty of holding managers accountable for achieving park goals, developing appropriate measures for achieving goals, and linking planning systems to budget and accounting systems. The officials told us that, despite these challenges, the strategic planning process produced benefits, such as increased communication and resource sharing within parks and information about how resources were being allocated among goals. There was less agreement that the process had resulted in any major operational changes to date. 
Some park staff said the process confirmed that they were generally doing the right things; others said the process had led them to make changes to meet a goal they had identified or to question assumptions about park operations. Several park managers were hopeful that their strategic and annual performance plans would provide a more effective way to justify their budgets. The Park Service’s field-oriented approach, in which park managers participated in the development of the planning process, the servicewide plan, and, ultimately, the development of park-level plans, was essential to make these plans and processes meaningful to park staff and obtain their support. For example, the Park Service chose to pilot test the strategic planning process at 25 parks. Staff at one of those parks said they could see a lot of their work reflected in the final servicewide strategic plan. At a non-test park, staff said they benefited from the pilot approach by avoiding previously tested approaches that proved unworkable. Training developed by the Park Service was also cited as a key aid to implementing the Results Act at the park level. Staff commented that the guidance and hands-on form of training provided a good basis for staff to develop their own strategic plans. For example, one park manager said that, in addition to presentations at the annual superintendent’s conference and cluster meetings, the park’s management staff received 4 hours of training with the director of the Office of Strategic Planning (OSP) and a day of hands-on training with the regional office’s Results Act coordinator. The training sessions provided examples of mission statements and goals from other parks and definitions of terms. Among the handouts provided were copies of templates for strategic and annual work plans. The manager said that all this material was helpful in getting started. Another park manager commented that his staff could not have developed their strategic plan without the training. 
In that region, nearby parks formed groups and the staff from the region’s lead parks and the regional director’s office led 2-day workshops for each group. Staff said they made a few wrong turns in the beginning, but the training and feedback helped them get back on track. In addition to hands-on training, park managers reported that the Park Service training and guidance aided implementation by stressing that they should develop their park-level strategic plans in a way most useful for their particular operations within the overall framework of the servicewide plan. This approach increased the credibility and acceptance of the strategic planning process in individual parks. One park manager observed that most parks had a great deal of control over how the Results Act was applied in their parks and could focus on the elements of the servicewide plan that were most useful to them according to their unique values and needs. Top management support for implementing the Results Act by both regional directors and high-level headquarters managers aided implementation by emphasizing the importance of the effort. One park manager told us that, at a recent superintendent’s conference, the deputy director of the Park Service made it very clear during his presentation that the Results Act was a priority. Such a show of support made a positive difference in the attitudes of field staff toward the strategic planning process. Another park manager commented that his regional director issued strongly worded guidance to park superintendents supporting implementation of the Results Act. The regional director said she wanted park managers to be personally involved in making presentations to her about their strategic plans. This led the park manager to take the effort more seriously and to work intensively with division chiefs as a team to develop their plan. 
Conversely, some headquarters managers have had less involvement with Results Act implementation than park or regional officials. According to one headquarters official, mid-level managers at headquarters have not had the same degree of training or direct experience developing strategic plans as park managers since the Park Service’s implementation of the Results Act has been primarily field-oriented to date. As a result, there is more cynicism concerning the Results Act among mid-level managers at headquarters than in the field. This official said that if they had to do this again, headquarters staff would have received the same orientation and training that park and regional staff received. In addition to visible top management support, centralized support and guidance provided by OSP and regional office coordinators were important aids to developing park-level plans. For example, several park managers said that the training manual developed by OSP, entitled Field Guide to the Government Performance and Results Act (GPRA) and Performance Management, was timely and excellent, although elements of the guide quickly became outdated. Park managers also found OSP’s brief summary guide, entitled GPRA on the Go, to be a concise and useful synopsis of how to implement the Results Act. Several park managers noted that as they were rushing to complete their plans, central guidance changed frequently. However, managers also commended OSP’s use of information technology, such as a computer bulletin board for posting guidance, questions, and answers. One park official said that he had received over 100 electronic mail messages from the regional office and headquarters providing directions, examples, and suggestions on implementing the Results Act. Finally, park officials suggested that a team approach to developing strategic and annual performance plans resulted in better plans with more buy-in from participants. 
In the parks we visited, we generally found that key management staff, such as the superintendent, assistant superintendent, division chiefs, and financial managers were personally involved in developing the strategic and performance plans, although the extent of the involvement varied. A park manager also noted that the use of trained facilitators from parks with more strategic planning experience to assist staff at less experienced parks had been very helpful. According to headquarters and regional office officials, the requirement that parks estimate the budgetary resources associated with each goal aided strategic planning by making the exercise more concrete to park staff. Headquarters and regional office officials saw unambiguous benefits from linking park resources to their plans. For example, one headquarters official commented that making a connection between Park Service goals and resources is essential to make the Results Act work. Without this connection, park staff would view the effort as a paperwork exercise. A regional office manager added that the resource assessment phase of the strategic planning process was critical as a reality check on goals. Parks had to answer the question: Can these goals be realistically achieved with existing resources? If not, goals were adjusted. Another headquarters official thought that the park strategic plans would minimize arguments about budget priorities in the parks—the goals in each park’s plan now represent the park’s priorities for the next 5 years. Park officials also mostly agreed with the importance of the budgetary link. For example, the requirement to assume constant, inflation-adjusted resources over the 5-year period of the strategic plan led a number of parks to develop more realistic goals after considering budgetary constraints. 
For example, officials at one park proposed a boundary study to address the protection of historical resources currently located outside the park’s boundaries, which had been arbitrarily established. However, staff decided to scale back their initial plans because a sufficient level of funding would not be available without a budgetary increase. This contrasts with an experience managers described from before the Results Act was implemented: a long-term management plan they had developed was not realistic because there was no way to achieve its goals without additional funding.

While generally agreeing about the importance of a budgetary link for internal planning purposes, several park officials had concerns about how such data would be used by the Park Service or by Congress to make budgetary decisions. Some questioned whether estimated spending for each goal was sufficiently precise to be used to challenge a park’s budget. Park officials had fewer concerns about using budget data at the park level to aid internal decision-making. Such views are not unique in the executive branch. In our recent report on the use of performance information in the budget process, executive branch officials we interviewed said that the principal value of the Results Act was internal and management oriented, stemming from its ability to clarify missions and performance expectations. They also said that current budgetary pressures and apprehension about the use of Results Act information could increase levels of defensiveness among agency staff.

The Park Service’s intention to hold upper-level park managers accountable for achieving the goals in their strategic and annual performance plans lent greater importance to the strategic planning effort, but it raised concerns for some park managers. Headquarters officials generally agreed that holding managers accountable for achieving their goals was important for successful implementation of the Results Act.
For example, one official said that holding managers accountable for results would have a particularly strong influence if managers believed that the results would affect their performance evaluations and budgets. However, at the park level, officials from half of the eight parks we interviewed agreed that holding managers accountable for achieving the goals in their plans reinforced the importance of strategic planning. Park staff expressed the following concerns:

The focus on accountability is not really new in the Park Service because park managers have always been held accountable for the results of their actions. The difference under the Results Act is that now managers will be held accountable for the measurable outcomes of their operations.

Park managers were also concerned about being accountable for achieving servicewide goals where outcomes cannot be directly controlled by park managers. For example, the Park Service has a goal to improve air quality in certain parks. However, this will be difficult for park managers to do because of the many external environmental factors that affect air quality that are beyond their control.

The operating environment of a park can change rapidly and park managers respond by moving resources to where they are needed most. Holding managers accountable for achieving the goals in their plans will reduce park managers’ flexibility in the short-term to move resources where they are needed most, especially when emergencies occur. In their view, such reduced flexibility to address emergency needs could have negative consequences for resource protection and the visitor experience.

One of the most prominent challenges identified by Park Service officials was developing meaningful outcome-oriented goals that could be measured and for which managers could be held accountable. In particular, comments focused on the difficulty of measuring outcomes for natural resource protection and customer satisfaction.
Park Service officials commented on the difficulty of selecting appropriate outcome measures for natural resources, such as water quality, endangered species, or disturbed lands. One official gave the following examples.

A single goal may be difficult to apply uniformly across park units. For example, there are hundreds of water quality measures that are specific to the unique characteristics of individual parks. The water quality standards needed to support plants and animals can differ from one species to the next or may not yet be defined scientifically. In contrast, the water quality standard for safe recreational swimming can be determined and is frequently defined at the state level. The Park Service adopted the goal of reducing the number of days park recreational waters fail to meet state water quality standards for swimming since it could be clearly defined, was measurable, and applied more broadly within the park system than other water quality goals that had been identified.

The need to develop goals narrow enough to be aggregated meaningfully at the servicewide level may exclude closely related goals developed by parks. Budget data taken from the Performance Management Data System (PMDS) revealed that spending on the 31 long-term goals in the servicewide strategic plan represented 44 percent of the operating budget. The remainder of the budget was linked to other mission-oriented goals developed by individual parks. A Park Service official suggested that this may indicate that many park goals did not fit the specific definition of the goals in the servicewide plan. For example, the servicewide goal for restoring disturbed lands focuses only on disturbances caused by development or invasions of exotic species. However, many parks set related goals, such as restoring lands disturbed by flooding or past forest fire control practices. Such goals would appear in the performance management system as park-specific goals, although they were closely related to the servicewide goal.
To address this problem, the Park Service will modify PMDS to allow park officials to indicate when a park goal is closely related to a servicewide goal. This change might offer a truer picture of the portion of the budget that is allocated to servicewide priorities.

It can be difficult to measure progress toward a final outcome because the final outcome itself is not easily measured. For example, the Park Service has a goal to return land disturbed by development or exotic species to its natural state. A particular park may not have any scientific criteria for determining exactly when the natural state has been recovered. However, a first step toward recovering the land is to remove the exotic species. This action can be measured in terms of acres and used as a proxy for the final outcome.

Even when a measure can be developed, the units to be measured can vary greatly, making it difficult to interpret aggregate data. For example, both Independence National Historical Park in Philadelphia, Pennsylvania, and the Pierce-Klingle Mansion in Washington, D.C., are listed on the National Register of Historic Places. Therefore, the servicewide goal to maintain historic structures in good condition would apply to both structures. However, a headquarters official suggested that Independence Park has greater historical significance than Klingle Mansion. It is necessary to look behind aggregate figures to determine the relative weight and importance of individual performance goals.

There is also the issue of whether park managers can control all the outcomes in their parks. For example, according to a headquarters official, parks exist in a broader geographic context and do not control all the variables that affect the quality of park resources. Some goals related to reducing pollution were abandoned because the Park Service did not own the land that generated the pollution and therefore could not directly prevent it.
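The effect of the planned PMDS modification can be illustrated with a small sketch. The goal records, spending figures, and field names below are hypothetical (the report does not describe the PMDS schema); the point is that tagging a park-specific goal as closely related to a servicewide goal changes how its spending is attributed when budget shares are aggregated servicewide.

```python
# Illustrative sketch only: hypothetical goal records, not the actual
# PMDS schema. Each park goal carries its estimated spending, the
# servicewide goal it maps to directly (if any), and, after the planned
# modification, an optional tag naming a closely related servicewide goal.
park_goals = [
    {"goal": "Restore lands disturbed by development", "spend": 440,
     "servicewide": "Ia1", "related_to": None},
    {"goal": "Restore lands disturbed by flooding", "spend": 300,
     "servicewide": None, "related_to": "Ia1"},
    {"goal": "Maintain picnic areas", "spend": 260,
     "servicewide": None, "related_to": None},
]

def servicewide_share(goals, count_related=False):
    """Share of estimated spending attributable to servicewide goals.

    count_related=True reflects the planned PMDS change, which also
    credits park goals tagged as closely related to a servicewide goal.
    """
    total = sum(g["spend"] for g in goals)
    linked = sum(
        g["spend"]
        for g in goals
        if g["servicewide"] or (count_related and g["related_to"])
    )
    return linked / total

print(f"{servicewide_share(park_goals):.0%}")                      # before the change
print(f"{servicewide_share(park_goals, count_related=True):.0%}")  # after the change
```

With these invented figures, the servicewide share rises from 44 percent to 74 percent once related goals are credited, which is the kind of "truer picture" the modification is meant to provide.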
Most Park Service officials agreed that parks lack key baseline data to begin measuring performance against goals. Two large natural resource parks said that the lack of baseline data to measure progress in preserving natural resources was a problem because they did not have extra funding in their base budgets to develop the data. Therefore, they will have to reduce spending on other park operations to free the necessary funds. To address the lack of baseline data, an OSP official said that parks were allowed to establish “threshold” goals aimed at establishing baseline performance data. After establishing baseline performance levels, park officials could then focus on achieving the percentage improvement goals contained in the servicewide strategic plan. Several plans we reviewed included threshold goals aimed at developing baseline performance data during fiscal year 1998.

In the area of customer satisfaction, many parks were concerned that they were going to be held accountable for achieving improvements in customer satisfaction but did not yet have an instrument for measuring it. A headquarters official told us that the Park Service was in the process of addressing this issue by developing a servicewide Visitor Survey Card, which will be distributed to all parks this summer. The card will contain questions pertaining to Park Service goals for visitor satisfaction and understanding. Randomly selected visitors would have the option to fill the card out. The Park Service will use a contractor to process the responses and report on the results.

Another major challenge cited by park managers was the lack of systems needed to track actual spending by long-term goal. A park manager who commented on PMDS—the information system the Park Service has developed for parks to report on planned performance, spending, and FTEs—said the system worked well.
However, in general, park managers expressed concern that PMDS only provides estimates of funds to be spent to accomplish annual goals, not actual funds spent. In addition to preparing data for PMDS, park managers must also prepare traditional financial plans and account for spending according to the activities in the budget and accounting systems. Park managers were hopeful that some integration of these systems could be achieved. In addition to the systems issue, a regional office official pointed out that some parks lack the computer hardware needed to input the data. One headquarters official acknowledged that computer hardware and systems were areas in which the Park Service has traditionally made minimal investments. However, he also said that information technology and systems have become easier to develop and use and steps will be taken in the near future to provide parks with the computers and Internet access they need to be linked servicewide.

According to Park Service officials, the strategic planning process has led to increased communication and resource sharing within parks and provided useful information about how resources were being allocated. There was less agreement that the process had resulted in any major operational changes. Some park managers said the process confirmed that they were generally doing the right things. Others said the process led them to make changes to meet a goal they had identified or to question assumptions about park operations. Several park managers were hopeful that their strategic and annual performance plans would provide a more effective way to justify their budgets.

Within some parks, the strategic planning process fostered increased communication across park divisions and facilitated resource sharing to accomplish common goals. For example, the strategic planning process helped break down barriers between park divisions, such as resource protection, interpretation, and maintenance at one park we visited.
Through the process of identifying park goals, staff at this park found that each division contributed to common goals. For example, prior to the Results Act, mowing the lawn was viewed as a maintenance activity that was done for its own sake. The strategic planning process led maintenance staff to view this activity as contributing to multiple park goals that cut across division lines, including (1) protecting park land from erosion, (2) aiding visitation, and (3) aiding the interpretation of the park’s history by revealing the vistas seen by Civil War battle participants.

The process of linking annual performance plans to park budgets provided useful information about how resources were being allocated to long-term goals. For example, staff from a large natural resources park said that aligning budget information with park goals provided factual confirmation that resource preservation goals received substantially fewer resources than visitor services goals. According to park staff, out of a budget of $8 million, a small share—about 6 percent—was being allocated to preserving natural resources in the park. They said having this information may lead them to reconsider the small share of spending on resource preservation. At the headquarters level, an official we spoke to hoped that parks’ annual performance plans—by requiring parks to link their goals to their operations—would provide headquarters with information not previously available about the choices, in terms of resource allocations, park managers are making among competing Park Service goals.

There was less agreement on whether strategic planning had led to changes in park operations or how resources were allocated. Typically, staff said they went through the strategic planning process with open minds, but found that the process confirmed that current activities were consistent with the park’s legislation and contributed to the goals of the Park Service. A few park managers cited changes.
For example, developing strategic and annual performance plans led one park to request funding for cataloging its archives because they recognized that this was a major, but unmet, goal. The park signed a memorandum of understanding with a university to do this work if funding becomes available.

Some park staff said that the strategic planning process, by focusing on outcomes to be achieved, had led them to question assumptions about their operations and may help identify more effective and efficient ways to accomplish Park Service missions and make resource allocation decisions. For example, staff from a large natural resources park commented that half of the park’s budget is spent on maintenance activities, such as maintaining picnic tables for visitors. However, they reasoned that there may be a more effective way to allocate resources to achieve the goal of enhancing the visitor experience. In addition, staff at a service center said that, to meet one of their organizational efficiency goals, they will be designing an Internet site to provide information about the center’s services, thus reducing staff time needed to answer questions on the telephone.

Finally, by showing what can be accomplished with existing base funds and staff, some park managers hoped that their annual performance plans would provide a more effective way to justify and communicate the need for increased resources. This view was not shared at one of the larger parks. These officials did not feel their strategic plan was an effective tool for communicating their resource needs. Their main concern was that their strategic plan could not be used to answer traditional questions about spending on activities, such as how much will be spent to reduce their backlog of maintenance tasks.
We concluded in a previous report that by implementing the Results Act the Park Service can promote a better understanding by Congress and other stakeholders of (1) the agency’s and each park’s priorities, (2) the links between the agency’s and each park’s priorities, (3) the results achieved with the funds provided, and (4) the shortfalls in performance.

As it sought to implement the Results Act, the Park Service faced difficult circumstances, including multiple missions that are often competing and resistant to direct measurement, and extraordinarily decentralized operations, in which many parks possess distinct legislative mandates. The Park Service’s initial progress in implementing the Results Act has laid a foundation for future performance management improvements and provides valuable insights to other federal agencies or programs also characterized by complex missions, which are carried out by decentralized and largely autonomous operating units.

In such an environment, strategic planning that is exclusively top-down will likely lead to goals that are irrelevant to and/or ignored by operating managers. To be effective, the process must reflect a partnership among key participants and include flexibility for managers within the parameters of the organization’s strategic goals. The active involvement of park managers in developing servicewide and park-level goals, coupled with the Park Service’s phased implementation approach, has led to greater ownership of the goals by field staff who are ultimately responsible for achieving results. By setting broad servicewide goals and simultaneously giving line managers the authority to tailor local goals and performance measures to their unique operating needs, the Park Service has greater assurance that a balance can be achieved between park-level attention to servicewide goals and the operating realities of each park.
The strategic planning process called for by the Results Act and discussed in many of our recent testimonies and reports starts with an agency’s mission and the long-term goals it wishes to achieve and uses this information to shape the formulation and execution of agency budgets. For many years, the reverse has been the case—the budget and the budget process often shaped an agency’s plans, and costs were assigned by activity or item of expense rather than performance goal.

The Park Service has taken an important step by asking managers to estimate the cost of achieving their goals and aggregating this information at the servicewide level. However, two questions remain: (1) whether the Park Service will be able to use strategic and annual performance planning to direct agency and park-level resources to affect the accomplishment of servicewide goals and (2) whether the new focus on accomplishing servicewide goals can be achieved without sacrificing the ability of park managers to respond to their unique operating environments.

In the Park Service, as will likely be the case in many federal agencies, budget and accounting systems are more typically structured by organization, project, or activity than by goals. Thus, agencies will have to consider how best to achieve a linkage, whether by retaining existing budget structures and creating crosswalks to the goals in their plans or by reorganizing the structure of their program activities to better mirror goals in their strategic plans. The latter approach will require extensive dialog with congressional appropriations and oversight committees, parent departments, and OMB, and continued development of cost allocation systems.

Although the challenges experienced by Park Service officials are hardly unique, their responses reflect long-term management commitment and an understanding that achieving results-oriented management will be neither easy nor quick.
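The crosswalk option described above (retaining existing budget structures and mapping them to strategic-plan goals) can be sketched in miniature. The activity codes, goal labels, dollar amounts, and allocation fractions below are invented for illustration; they are not actual Park Service data, and the report does not prescribe any particular allocation method.

```python
# Hypothetical crosswalk from existing budget-activity structure to
# strategic-plan goals. Amounts and codes are invented for illustration.
activity_spending = {
    "maintenance": 500,
    "resource_protection": 320,
    "interpretation": 180,
}

# One activity may support several goals, so the crosswalk assigns each
# activity a list of (goal, fraction) weights; fractions sum to 1.0.
crosswalk = {
    "maintenance": [("visitor satisfaction", 0.7), ("disturbed lands", 0.3)],
    "resource_protection": [("threatened species", 1.0)],
    "interpretation": [("visitor understanding", 1.0)],
}

def spending_by_goal(spending, xwalk):
    """Re-express activity-structured spending in goal terms."""
    by_goal = {}
    for activity, amount in spending.items():
        for goal, fraction in xwalk[activity]:
            by_goal[goal] = by_goal.get(goal, 0.0) + amount * fraction
    return by_goal

print(spending_by_goal(activity_spending, crosswalk))
```

Because the fractions for each activity sum to one, total spending is preserved; the crosswalk merely redistributes it from the activity structure into goal terms, which is the linkage the report says agencies must achieve one way or another.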
Park Service officials readily admit that much work remains to be done and that many of the successes they have achieved have occurred only after identifying and resolving problems in initial approaches or techniques. Overall, the experiences of the Park Service demonstrate that implementing the Results Act should be viewed not as a series of events, but as an evolving process.

In commenting on a draft of this report, the Assistant Secretary for Fish and Wildlife and Parks substantially agreed with our portrayal of Results Act implementation in the Park Service. Interior provided information on several improvements to its planning and budgeting processes including: (1) providing parks greater flexibility to report on park-specific goals that are associated with, but not identical to, the servicewide goals, (2) modifying the Performance Management Data System so that parks can report on funds other than base funds used to accomplish their goals, and (3) entering fiscal year 1999 and 2000 budget and performance data at the park level to supplement the existing budget formulation process. Other comments are incorporated in the report as appropriate. Interior’s comments are reprinted in appendix I.

We are sending copies of this report to the Ranking Minority Member of your Committee; the Chairmen of the House and Senate Committees on Appropriations; the Director, OMB; and other interested parties. Copies will also be made available to others upon request. Please contact me at (202) 512-9573 if you or your office have any questions concerning this report. Major contributors to this report are listed in appendix IV.

The following are GAO’s comments on the Department of the Interior’s April 10, 1998, letter.

1. See “Agency Comments and Our Evaluation” section of the report.

2. Report text was revised.

3. As indicated in the report, the Park Service is currently reconsidering its process for formulating requests for project funds.
The Park Service’s approach to implementing the Results Act was phased and iterative and involved both top management and field staff. The four basic principles the Park Service followed while formulating and implementing a strategic plan involved (1) creating a useful management tool for the National Park Service at all levels of the organization, (2) achieving a field-orientation, (3) integrating all aspects of performance management into a single comprehensive system, and (4) complying with the requirements of the Results Act and associated mandates.

Because both top management and field staff were engaged in drafting and exchanging comments on early draft strategic plans, Park Service staff have characterized the approach they used as “diagonal.” Both field staff and management were approaching the Results Act from their ends and meeting in the middle to work out needed changes. Both park staff and headquarters officials described this approach to planning as difficult and frustrating. However, the same staff commented that this approach was probably the only way to develop a servicewide plan that balanced the needs of both headquarters and field staff and therefore achieved a degree of acceptance and ownership by both groups. According to the Deputy Director, although the process of developing the servicewide strategic plan and the Park Service’s approach to performance management was “arduous,” in hindsight, the process worked well and he would not have done anything differently.

At the time of our review, the Park Service’s Office of Strategic Planning (OSP) had a Director and two program analysts all located in Denver, Colorado, and one program analyst located at headquarters. The OSP director reports to the Deputy Director of the Park Service.
OSP has been responsible for coordinating the development of the servicewide strategic plan based on input from headquarters and field staff and consultations with Congress, the Office of Management and Budget, the public, and other key stakeholders. OSP has also led the effort to implement strategic planning and performance management at the park level. This involved developing guidance for field staff on how to implement the Results Act, conducting training, and providing continuing guidance and responding to questions through OSP’s computer bulletin board.

The Park Service held its first meeting on implementing the Results Act in December 1994. In May 1995, the Park Service established the Government Performance and Results Act (GPRA) Task Force to oversee and coordinate the development of a servicewide performance management system, including national strategic planning and budgeting, park- and program-level planning and goal setting, resource allocations, performance measurement, and servicewide evaluations and reporting. The Task Force includes representatives from each region, from the key headquarters offices, and from park and partnership programs. The Task Force reports its findings to the National Leadership Council (NLC), which consists of the Director, the Deputy Director, the seven regional directors, and the five associate directors for programs.

The Task Force has been the Park Service’s mechanism for working out Results Act implementation problems. For example, OSP received 116 substantive comments from the public on the final draft of the servicewide strategic plan issued for comment in October 1996. The Task Force broke into groups, worked through the comments, and incorporated the comments into the final servicewide strategic plan, which was issued September 30, 1997. The Task Force has met three times a year to discuss Results Act implementation issues, vet policy recommendations, and make recommendations to NLC.
NLC acts on every Task Force recommendation and, to date, has approved all Task Force recommendations. For example, NLC members each signed off on the final Park Service strategic plan.

A key element of the Park Service’s approach to implementing the Results Act has been extensive field testing. During the summer of 1995, the Park Service undertook prototype exercises in strategic planning and performance measurement at six parks and three programs. The experiences of those prototype parks and programs, along with help from planners at the Denver Service Center, helped shape the Park Service’s “Eight Step Process” (described below) and develop some initial performance measures. During fiscal year 1996, a park or program from each cluster, known as a “lead” park or program, worked to refine the goals in the servicewide plan and the implementation process. The experience of the prototype and lead parks also led to the development of written guidance that could be used to train staff at other parks. The Park Service plans to continue learning from the field. For example, the Deputy Director has asked OSP to survey all the park-level strategic and annual performance plans for best practices. These best practices can then be transmitted to the field as guidance for the next iteration of park-level plans.

The early experimentation of the prototype parks led to the development of the Park Service’s Eight Step Process for performance management. The Eight Step Process was designed to help parks, programs, and offices go from the Park Service’s mission goals, to their daily work, to evaluation of results. Steps one to five are the required elements of the strategic plans. Steps six and seven produce and implement the annual performance plan. Step eight produces the annual performance report that compares accomplishments to goals. The eight steps were developed within the framework of three basic questions—why, what, and how. A description of the Park Service’s Eight Step Process follows.
(1) Review the Park Service’s enabling legislation and legislative history, the servicewide strategic plan, any other legislation affecting your park, and any other planning documents already in place.

(2) Establish the mission of the specific park or program by its purpose and significance. Purpose is the specific reason the park or program was established. Significance is the distinctive features that make the park or program different from any other. Together they lead to a concise statement—the mission of the park or program.

(3) Develop the park’s or program’s mission goals. Mission goals are broad conceptual goals based on ideal future conditions. They should focus on results (outcomes) not efforts. Park and program mission goals should reflect both the servicewide mission goals as well as the mission of the park or program.

(4) Determine the park or program’s 5-year, long-term goals (range of 3 to 20 years). Long-term goals tier off mission goals, describe results to be achieved, and are stated as desired future conditions.

(5) Establish the availability of human and fiscal resources, the condition of park natural, cultural, and recreational resources, and the condition of visitor experiences.

(6) Develop the annual performance plan. The annual performance plan links outcome-related performance goals to specific inputs and outputs for a single year. The annual performance plan consists of two major parts: annual goals and annual work plans. Annual goals are the incremental outcomes needed to meet the long-term goals. Annual work plans identify the inputs and outputs needed to achieve the annual goals. Inputs are the fiscal and human resources required to produce the outputs.

(7) Implement the annual performance plan. Park and program officials receive budget allocations and update annual goals to reflect available funding and staffing and use these resources to implement their plans.

(8) Develop annual performance reports.
Park and program officials monitor performance toward annual goals, evaluate results by comparing accomplishments with goals, and provide feedback and adjust subsequent annual goals, work plans, and long-term goals, if necessary.

The Park Service developed its Field Guide to GPRA and Performance Management to provide park staff with a tool for improving their ability to accomplish the mission of the Park Service using performance management techniques. The Field Guide includes an overview of the Results Act and performance management, a detailed discussion of the Eight Step Process, a discussion of how the budget will be linked to annual performance plans, other key linkages, examples of strategic plans and annual performance plans already developed by certain parks, and exercises to be completed during training sessions.

Following initial training and testing of performance management in prototype and lead parks, the Park Service held “train the trainer” sessions in which 100 regional- and field-level staff received training in performance management using the Field Guide. These sessions were held in late 1996 in four locations throughout the country. During the late winter and spring of 1997, the trainers trained an additional 2,000 park staff in performance management techniques. A representative from each regional office also received the initial training and served as the Results Act coordinator for that region. After the training was completed, the trainers continued to serve as consultants to the parks within their region. For example, in the southeast region, the trainers were involved in the first review of all the strategic plans for that region.

In addition to the Field Guide, OSP developed a quick reference pamphlet to strategic planning at the Park Service entitled GPRA on the Go: Government Performance and Results Act (GPRA) & Performance Management.
Staff from two parks we interviewed cited this pamphlet as an excellent summary of everything a manager needed to know to prepare a strategic plan. The pamphlet contains brief descriptions of (1) performance management and the Results Act, (2) the Park Service’s approach to performance management, (3) Park Service performance management terminology, (4) strategic plan requirements, (5) the Eight Step Process, and (6) the Park Service’s 31 long-term goals. It also provides helpful hints for making performance management happen and lists key OSP and regional office staff who can be contacted for further information.

Figure 4 in the body of the report portrays the relationship between the servicewide and park-level plans. Within its four servicewide goal categories, the Park Service has defined 9 mission goals and 31 long-term goals. Appendix III contains a complete description of the goals listed in the Park Service’s strategic plan.

Mission goals were intended to reflect the Park Service’s preservation mission, which has a longer and indefinite time frame for goals than anticipated by the Results Act. Mission goals are not time-bound or quantified, but are intended to be comprehensive and inclusive of all Park Service activities. For example, mission goal Ia states that “natural and cultural resources and associated values are protected, restored, and maintained in good condition and managed within their broader ecosystem and cultural context.”

Long-term goals typically span 5 years, are focused on specific Park Service activities, and provide specific measurable goals to be achieved within the time frame set. For example, long-term goal Ia5 states that “by September 30, 2002, 50% of the historic structures on the 1998 List of Classified Structures are in good condition.” Each park was also expected to develop its own strategic plan.
Park strategic plans were to bring together both servicewide and park-specific missions so that every strategic plan had both national and local elements. If a servicewide long-term goal was applicable, the park was expected to incorporate the goal into its plan, although measurable performance targets could vary from the servicewide targets. It was expected, for example, that some parks could easily achieve performance greater than national goals, while others might fall short; allowing performance targets to vary from park to park promoted park-level relevance while ensuring that performance could be aggregated on a servicewide basis. The Park Service also gave park managers discretion to incorporate long-term goals that were unique to the missions of their individual parks but still fit within the broad mission goals of the servicewide strategic plan.

The National Park Service preserves unimpaired the natural and cultural resources and values of the national park system for the enjoyment, education, and inspiration of this and future generations. The Park Service cooperates with partners to extend the benefits of natural and cultural resource conservation and outdoor recreation throughout this country and the world.

Mission Goal Ia: Natural and cultural resources and associated values are protected, restored, and maintained in good condition and managed within their broader ecosystem and cultural context.

Long-term Goals to be Achieved by September 30, 2002:
Ia1. Disturbed Lands / Exotic Species — 5% of targeted disturbed park lands, as of 1997, are restored, and 5% of priority targeted disturbances are contained.
Ia2. Threatened and Endangered Species — 25% of the 1997 identified park populations of federally listed threatened and endangered species with critical habitat on park lands or requiring NPS recovery actions have an improved status, and an additional 25% have stable populations.
Ia3. Air Quality — Air quality in at least 50% of class I park areas improves or does not degrade from 1997 baseline conditions.
Ia4. Water Quality — Reduce by 10%, from 1997 levels, the number of days park recreational waters fail to meet state water quality standards for swimming.
Ia5. Historic Structures — 50% of the historic structures on the 1998 List of Classified Structures are in good condition.
Ia6. Museum Collections — 68% of preservation and protection conditions in park museum collections meet professional standards.
Ia7. Cultural Landscapes — 50% of the cultural landscapes on the Cultural Landscapes Inventory are in good condition.
Ia8. Archeological Sites — 50% of the recorded archeological sites are in good condition.

Mission Goal Ib: The National Park Service contributes to knowledge about natural and cultural resources and associated values; management decisions about resources and visitors are based on adequate scholarly and scientific information.

Long-term Goals to be Achieved by September 30, 2002:
Ib1. Natural Resource Inventories — Acquire or develop 434 of the 2,287 outstanding data sets identified in 1997 of basic natural resource inventories for all parks.
Ib2. Cultural Resource Baselines — The 1997 baseline inventory and evaluation of each category of cultural resources is increased by a minimum of 5%.

Mission Goal IIa: Visitors safely enjoy and are satisfied with the availability, accessibility, diversity, and quality of park facilities, services, and appropriate recreational opportunities.

Long-term Goals to be Achieved by September 30, 2002:
IIa1. Visitor Satisfaction — 80% of park visitors are satisfied with appropriate park facilities, services, and recreational opportunities.
IIa2. Visitor Safety — Reduce the visitor safety incident rate by 10% from the NPS five-year (1992-96) average.
Mission Goal IIb: Park visitors and the general public understand and appreciate the preservation of parks and their resources for this and future generations.

Long-term Goals to be Achieved by September 30, 2002:
IIb1. Visitor Understanding and Appreciation — 60% of park visitors understand and appreciate the significance of the park they are visiting.

Mission Goal IIIa: Natural and cultural resources are conserved through formal partnership programs.

Long-term Goals to be Achieved by September 30, 2002:
IIIa1. Properties Designated — Increase by 15%, over 1997 levels, the number of significant historic and archeological properties protected through federal programs or official designation at local, state, tribal, or national levels.
IIIa2. Properties Protected — Increase by 20%, over 1997 levels, the number of significant historic and archeological properties protected nationwide through federal, state, local, or tribal statutory or regulatory means, or through financial incentives, or by the private sector.
IIIa3. User Satisfaction — Achieve a 10% increase in user satisfaction, over 1997 levels, with the usefulness of technical assistance provided for the protection of historic and archeological properties.

Mission Goal IIIb: Through partnerships with other federal, state, and local agencies and nonprofit organizations, a nationwide system of parks, open space, rivers, and trails provides educational, recreational, and conservation benefits for the American people.

Long-term Goals to be Achieved by September 30, 2002:
IIIb1. Conservation Assistance — 1,100 additional miles of trails, 1,200 additional miles of protected river corridors, and 35,000 additional acres of parks and open space, from 1997 totals, are conserved with NPS partnership assistance.
IIIb2. Community Satisfaction — 80% of communities served are satisfied with NPS partnership assistance in providing recreational and conservation benefits on lands and waters.
Mission Goal IIIc: Assisted through federal funds and programs, the protection of recreational opportunities is achieved through formal mechanisms to ensure continued access for public recreational use.

Long-term Goals to be Achieved by September 30, 2002:
IIIc1. Recreational Properties — The 40,000 recreational properties, as of 1997, assisted by the Land and Water Conservation Fund, the Urban Park and Recreation Recovery Program, and the Federal Lands to Parks Program are protected and remain available for public recreation.

Mission Goal IVa: The National Park Service uses current management practices, systems, and technologies to accomplish its mission.

Long-term Goals to be Achieved by September 30, 2002:
IVa1. Data Systems — 50% of the major NPS data systems are integrated/interfaced.
IVa2. Employee Competencies — 100% of employees within the 16 key occupational groups have essential competency needs identified for their positions.
IVa3. Employee Performance — 100% of employee performance standards are linked to appropriate strategic and annual performance goals.
IVa4. Workforce Diversity — Increase by 25%, over 1998 levels, the representation of under-represented groups in each of the targeted occupational series in the NPS permanent workforce.
IVa5. Employee Housing — 35% of employee housing units classified as being in poor or fair condition in 1997 have been removed, replaced, or upgraded to good condition.
IVa6. Employee Safety — Reduce by 50%, from the NPS five-year (1992-96) average, the NPS employee lost time injury rate, and reduce the cost of new workers' compensation cases (COP) by 50% based on the NPS five-year (1992-96) average.
IVa7. Construction Projects — 100% of NPS park construction projects identified and funded by September 30, 1998, meet 90% of cost, schedule, and project goals of each approved project agreement.
IVa8. Land Acquisition — The time between the appropriation for land acquisition and when the offer is made is reduced by 5%.
Mission Goal IVb: The National Park Service increases its managerial capabilities through initiatives and support from other agencies, organizations, and individuals.

Long-term Goals to be Achieved by September 30, 2002:
IVb1. Volunteer Hours — Increase by 10%, over the 1997 level, the number of volunteer hours.
IVb2. Donations and Grants — Increase by 10%, over 1997 levels, the dollar amount of donations and grants.
IVb3. Concession Returns — Increase the average return for park concession contracts to at least 10% of gross concessioner revenue.
IVb4. Fee Receipts — Increase by 20%, over the 1997 level, the amount of receipts from park entrance, recreation, and other fees.

Michael J. Curro, Assistant Director, (202) 512-9969
Elizabeth H. Curda, Evaluator-in-Charge
Claudia J. Dickey, Senior Evaluator
Pursuant to a congressional request, GAO reviewed: (1) how the Government Performance and Results Act has influenced planning and budgeting at the National Park Service (NPS); (2) the extent to which strategic and annual planning and budgeting processes have become linked and the challenges in achieving such a linkage; and (3) any insight that NPS's experiences with results-oriented management and budgeting suggest for other agencies implementing the Results Act. GAO noted that: (1) NPS implemented the Results Act by instituting a results-oriented planning process that has introduced for the first time servicewide goals to be achieved by park managers; (2) at the same time, NPS addressed the diversity and decentralized nature of the park system by requiring parks to develop strategic plans to address both applicable servicewide goals as well as goals specific to their unique legislative and operating environments; (3) both NPS and individual parks and programs have prepared strategic and annual performance plans with measurable outcome-oriented goals; (4) to link these plans to their budget, NPS designed an information system to report park estimates of spending according to goals; (5) although NPS has made some progress in connecting performance plans with budgets, significant issues remain to be resolved; (6) NPS's efforts to track actual spending according to performance goals have been hampered by the incompatibility between its activity-oriented budget and accounting systems and its goal-oriented strategic plan; (7) the most frequently cited challenges involved performance measurement and information systems; (8) performance measurement is complicated by the difficulty of defining outcome-oriented performance measures; (9) park staff also identified as a challenge the absence of information systems that link spending information to goals; (10) despite limited experience with managing for results, parks reported some benefits from their initial efforts; (11) 
benefits included better information about how park resources were being spent on desired park outcomes and increased communication and resource sharing across division lines; (12) although NPS is still in the early stages of implementing the Results Act, the progress it has made and the challenges that remain provide valuable insights that could prove useful to other agencies as they implement the act; (13) NPS has demonstrated how to develop an agencywide strategic planning process in a decentralized operating environment; (14) NPS officials recognized that strong field-level involvement in developing the servicewide plan and the field-oriented approach to implementing the Results Act resulted in greater ownership by the field staff charged with achieving the results; and (15) however, changes to these systems will require extensive consultations and consensus among the agency, the Office of Management and Budget, and Congress.
Travelers to the United States are generally required to present documentation verifying their identity and nationality and, for non-U.S. citizens, their eligibility to enter the United States. Acceptable travel documents for entry into the United States include, among others, passports, visas, and U.S. military identity cards. In 2004, Congress, in an effort to further secure U.S. borders, mandated the development and implementation of a plan that requires U.S. citizens to have a passport or other document that demonstrates their identity and citizenship when entering the United States. State and DHS implemented this requirement for air ports of entry on January 23, 2007, and are to implement the requirement for land and sea ports before June 1, 2009. State's Bureau of Consular Affairs is responsible for the design and issuance of passports for U.S. citizens and of visas for all aliens requiring a visa for entry into the United States. CBP is responsible for inspecting these documents and permitting entry to travelers at designated air, land, and sea U.S. ports of entry. In addition, State's Bureau of Diplomatic Security, in collaboration with State's Office of Inspector General, DHS and other U.S. agencies, and foreign law enforcement entities, is responsible for investigating suspected passport and visa fraud. The security of passports and visas and the ability to prevent and detect their fraudulent use depend upon a combination of well-designed security features, solid issuance procedures for the production of the document and review of the application, and solid inspection procedures that utilize available security features. Figure 1 below presents the key elements of a secure travel document. A well-designed document has limited utility if it is not well-produced or if inspectors do not utilize the security features to verify the authenticity of the document and its bearer.
In fiscal year 2006, about 12 million passports and almost 6 million visas were issued, according to State. As of April 2007, there are 74 million valid passports and almost 34 million visas, including 9 million BCCs, in circulation. A passport is not only a travel document required of U.S. citizens for international travel and re-entry into the United States by air, but also an official verification of the bearer’s origin, identity, and nationality. Under U.S. law, the Secretary of State has the authority to issue passports, which may be valid for up to 10 years. Only U.S. nationals may obtain a U.S. passport, and evidence of citizenship or nationality is required with every passport application. Federal regulations list those who do not qualify for a passport, including those who are subjects of a federal felony arrest warrant. See appendix II for additional information on the types of U.S. passports. In addition, State is currently developing a passport card that will serve as an alternative travel document for re-entry into the United States by U.S. citizens at land and sea ports of entry. A visa is a travel document for people seeking to travel to the United States for a specific purpose, including to immigrate, study, visit, or conduct business; the document allows a person to travel to a United States port of entry and ask for permission to enter the country. While consular officers within State are responsible for determining a person’s eligibility to enter the United States for a specific purpose, CBP officers have the ultimate authority to permit entry into the United States. State issues two types of visas: (1) a visa foil attached to the visa pages of a foreign passport, for nonimmigrant or immigrant travel to the United States; and (2) the BCC for limited travel by Mexican citizens within the United States’ southern border. Visas can be issued for a validity period of up to 10 years. 
Threats to the security of travel documents include counterfeiting a complete travel document, construction of a fraudulent document, photo substitution, deletion or alteration of text, removal and substitution of pages, theft of genuine blank documents, and assumed identity by imposters. Features of travel documents are assessed by their capacity to secure a travel document against the following:

counterfeiting: unauthorized construction or reproduction of a travel document.
forgery: fraudulent alteration of a travel document.
imposters: use of a legitimate travel document by people falsely representing themselves as legitimate document holders.

Most reported passport and visa fraud is imposter-related fraud. In fiscal year 2006, CBP detected 21,292 fraudulent U.S. passports, visas, and BCCs presented by travelers attempting to enter the United States through a U.S. port of entry. (See table 1.) Nearly 80 percent of these documents were genuine documents presented by imposters. The most frequent fraudulent attempts were by imposters attempting to use a legitimate BCC, while the fraudulent use of passports and visas more often involved attempts to counterfeit or alter the document. The following cases illustrate attempts to fraudulently use U.S. travel documents to enter the United States:

In November 2005, CBP officers intercepted a Ghanaian citizen with an altered U.S. visa. The visa photo was manually retouched to bear closer resemblance to the photo substituted into the biographical page of the passport.
In June 2006, a Chinese citizen was found in possession of a counterfeit U.S. passport. Printing and other errors on the biographic page and another page alerted authorities that the passport was counterfeit.
In January 2007, a Brazilian citizen, using a genuine U.S. visa, attempted to enter the United States as an imposter. CBP officers confirmed the traveler was an imposter and was attempting to enter the United States to seek employment.
Applicants commit passport application fraud through various means, including submitting false claims of lost, stolen, or mutilated passports; child substitution; and counterfeit citizenship documents. According to State’s Bureau of Diplomatic Security investigators, imposters’ use of assumed identities, supported by genuine but fraudulently obtained identification documents, is a common and successful way to fraudulently obtain a passport. This method accounted for about 65 percent of 3,703 total confirmed passport fraud cases investigated by the bureau in fiscal year 2006, according to Diplomatic Security documentation. To combat document fraud, security features are used in a wide variety of documents, including currency, identification documents, and bank checks. Security features are used to prevent or deter the fraudulent alteration or counterfeiting of such documents. In some cases, an altered or counterfeit document can be detected because it does not have the look and feel of a genuine document. For instance, detailed designs and figures are often used on documents with specific fonts and colors. While such aspects are not specifically designed to prevent the use of altered or counterfeit documents, inspectors can often use them to identify nongenuine documents. In some cases, security features can be observed with the naked eye. But for others, tools may be necessary to verify the existence of a security feature. For instance, to read microprinting, it may be necessary to have a magnifying glass or a loupe. To see features on pages printed with ultraviolet fluorescent ink, it is necessary to have an ultraviolet light source. In particular, electronic equipment is required to read electronic features such as biometrics or digital signatures from the travel document. 
While security features can be assessed by their individual ability to help prevent the fraudulent use of the document, it is more useful to consider the entire document design and how all of the security features help to accomplish this task. Layered security features tend to provide better document security by minimizing the risk that the compromise of any individual feature of the document will allow for the unfettered fraudulent use of the document. Individual document security features are known to different levels of people. For instance, some security features are known only by forensic examiners, while other features are more widely known by specialized law enforcement personnel. GPO produces and delivers blank passports to the domestic passport-issuing offices. State operates 17 domestic passport-issuing offices, where most passports are issued each year. In addition, in the spring of 2007, State opened a new passport production facility for the personalization of passport books. The majority of passport applications are submitted by mail or in person at one of 8,500 passport application acceptance facilities nationwide, which include post offices; federal, state and probate courts; public libraries; and county and municipal offices. The passport acceptance agents at these facilities are responsible for, among other things, verifying whether an applicant's identification document, such as a driver's license, actually matches the applicant. Then, at the domestic passport-issuing offices, passport examiners determine—through a process called adjudication—whether they should issue each applicant a passport. See appendix IV for an overview of the passport issuance process. State manages the visa process, as well as the consular officer corps and its functions at 219 visa-issuing posts overseas.
The process for determining who will be issued or refused a visa contains several steps, including documentation reviews, in-person interviews, collection of biometrics (facial image and fingerprints), and cross-referencing an applicant's name against the Consular Lookout and Support System (CLASS)—State's name-check database that posts use to access critical information for visa adjudication. In some cases, a consular officer may determine the need for a Security Advisory Opinion, which is information provided from Washington to the post regarding whether to issue a visa to the applicant. See appendix IV for an overview of the visa issuance process. In general, at ports of entry, travelers seeking admission to the United States must present themselves and a valid travel document, such as a passport or a U.S. visa, for inspection to a CBP officer. The immigration-related portion of the inspections process requires the officer to determine—by questioning the individual and inspecting the travel documents—if the traveler is a U.S. citizen or alien. If the traveler is an alien, CBP officers must determine the purpose of the individual's travel and whether the alien is entitled to enter the United States. During the inspections process, CBP officers must confirm the identity and nationality of travelers and determine the validity of their passports and visas by using a variety of inspection techniques and technology. At the first part of the inspection process—primary inspection—CBP officers inspect travelers and their travel documents to determine if they may be admitted or should be referred for further questioning and document examination. If additional review is necessary, the traveler is referred to secondary inspection—an area away from the primary inspection area—where another officer makes a final determination to admit the traveler or deny admission for reasons such as the presentation of a fraudulent or counterfeit passport or visa.
See appendix V for an overview of the inspection process at U.S. ports of entry. State has made enhancements to strengthen new generations of passports and visas, which contain a variety of security features that, in combination, are intended to deter attempts to alter or counterfeit the documents; however, prior generations of these documents have been fraudulently used and remain more vulnerable to fraudulent attempts for the duration of their life span. While the process for designing a new document takes several years to complete, State does not periodically reassess the security features of the travel documents it currently issues to identify their effectiveness against evolving counterfeit and alteration threats and to plan for new generations of travel documents. In addition, State shares information on the security features of passports and visas with domestic and international entities. Passports and visas contain a variety of security features that, in combination, are intended to deter attempts to alter or counterfeit the documents. The design of passports—currently there are three generations valid for travel—contain a range of security features to protect against their fraudulent use. Visa foils—currently there are two generations valid for travel—and the BCC also contain a range of security features to protect against their fraudulent use. Enhancements have been made to strengthen new generations of these documents, but prior generations remain more vulnerable to fraudulent attempts during their life span. Although none of the passports and visas that are currently valid have had all of their security features compromised, some methods of alteration or counterfeiting have been found to be successful enough to fool an initial inspection. In these cases of sophisticated attempts to defeat specific security features, only a more detailed examination of the document can determine that the document is not authentic. 
According to State, over 74 million passports are currently in circulation, as of April 2007. Currently, there are three valid generations of the passport—the 1994 passport, the 1998 photo-digitized passport, and the 2006 electronic passport (e-passport). See table 2 for validity periods for travel and numbers in circulation of current passports. Each generation of the passport has a range of security features to provide protection against the threat of fraudulent use. As each generation of passports is developed, some security features are enhanced, others added, and others dropped from the documents’ design to protect against counterfeit and alteration threats. For example, photo substitution, particularly with the 1994 passport, is one technique that has been used to alter passports. State has enhanced subsequent generations to combat this threat. In the 1998 passport, State enhanced the laminate of the passport and introduced a photo-digitized passport that prints scanned photographs on the biographic page of the passport to eliminate the possibility of individuals cutting out and replacing the laminated photos. While the vulnerability to photo substitutions has been reduced in the 1998 passport, it has not been fully eliminated. For the e-passport, although State continues to print the photos in the same way as the prior generation, additional enhancements have been made to the security of the laminate and a proximity radio frequency identification (RFID) chip has been added that provides for electronic storage of biographical and biometric data. The information stored on the chip is protected by a digital signature. This enhancement, which allows for a comparison of the photo in the passport with the photo in the chip, can provide greater assurance that the photo, as well as the biographic data, has not been altered or counterfeited. 
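The integrity check enabled by the chip's digital signature can be illustrated with a minimal sketch. This is a hypothetical illustration, not State's or ICAO's actual implementation: the data-group names are invented, and an HMAC over the hashed data groups stands in for the public-key signature an issuing authority would use.

```python
import hashlib
import hmac

# Hypothetical stand-in for the issuer's signing credential; a real
# e-passport uses a PKI document-signer certificate, not a shared key.
ISSUER_KEY = b"demo-issuing-authority-key"

def sign_security_object(data_groups):
    """Hash each data group (e.g. photo, biographic data) and sign the
    combined digest, mimicking a security object written to the chip."""
    combined = hashlib.sha256()
    for name in sorted(data_groups):
        combined.update(name.encode())
        combined.update(hashlib.sha256(data_groups[name]).digest())
    security_object = combined.digest()
    signature = hmac.new(ISSUER_KEY, security_object, hashlib.sha256).digest()
    return security_object, signature

def verify_chip(data_groups, security_object, signature):
    """Recompute the hashes from the presented data, then check that the
    security object is intact and was signed by the issuer."""
    recomputed, _ = sign_security_object(data_groups)
    if recomputed != security_object:
        return False  # a data group (e.g. the photo) was altered
    expected = hmac.new(ISSUER_KEY, security_object, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)
```

Because every data group is hashed into the signed security object, substituting the stored photo (or any biographic field) causes verification to fail, which is how the chip can give inspectors greater assurance that the printed data match what the issuer originally signed.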
In cases where these enhancements may fail to work correctly, it is important to plan for the potential failure of equipment or incidents where the verification system does not correctly match individuals. In addition, the proposed passport card is expected to include laser engraving, tactile features in the photo area, and an optically variable device to address photo substitution techniques. Additional information on the security features in passports and visas issued by the State Department is sensitive in nature and has not been provided in this report. We will be reporting on the security features in these documents in a separate report. According to State, over 34 million visas are in circulation as of April 2007. Currently, there are two valid generations of the visa foil—the Teslin and the Lincoln—which are attached inside a foreign passport using an adhesive. The only currently valid generation of the BCC—issued to Mexican citizens—is the laser visa, a polycarbonate card with an optical stripe to electronically store information about the BCC holder. See table 3 for data on validity periods for travel and numbers in circulation of current visas. As with the passport, when the Lincoln visa was developed, some security features were improved over those in the Teslin visa, others added, and others dropped. Enhancements to the Lincoln visa include more detailed printing features and features such as security fibers and biometric information (digital photograph and fingerprints). The biometric information is collected overseas under State's Biometric Visa Program, to be used by CBP inspectors at ports of entry to verify that the original visa applicant is the person entering the United States. For the BCC, State stored the traveler's biographical and biometric information electronically on the optical media of the card.
State's process for the development and testing of new travel documents, to enhance their security and reduce vulnerability to sophisticated fraud attempts, varied by document and required several years to complete. While State has made adjustments in the design of passports and visas, its approach has been largely reactive. Despite the length of time required for a document redesign, State does not have a structured process to periodically reassess the security features in its documents and to plan for new generations. The increasing pace of technological change and use of electronics makes State's current approach less viable than it might have been in the past, and best practices in currency design, for example, suggest that periodic evaluation of designs and introduction of new security features are more viable approaches in the management of counterfeit and alteration threats. The process for developing the new e-passport design took almost 3 years. State initiated the redesign of the passport in 2003 in response to new international specifications for electronic travel documents, to meet standards set for nations that participate in the United States' Visa Waiver Program, and to address sophisticated attempts to compromise the document with additional layers and enhancements of security features. In February 2005, State presented the proposed design for the new passport, which was intended to comply with ICAO standards. From 2005 to 2006, State, together with GPO, utilized government expertise at FDL and NIST to test the durability of the book and certain security features of the e-passport and emergency passport. In response to security and privacy concerns regarding the inclusion of RFID chip technology, NIST was also requested to evaluate the passport's skimming vulnerability. Based on the results of NIST's tests, material was added to the front cover and spine of the book to mitigate the threat of skimming.
Separately, durability testing conducted at NIST also revealed that some of the security features were adversely affected by humidity. State and GPO reviewed the results of NIST's tests and determined that the overall integrity of the passport remained sufficient and did not make any immediate changes to the design, according to State and GPO officials. A State official did note that these results would be considered in the future. In January 2006, State and DHS conducted pilot tests for the new passport, using diplomatic versions of the e-passport. See figure 2 for a timeline of the development process for the e-passport. Appendix III provides additional information on the testing that was conducted in the development of the e-passport design. The development of the Lincoln visa design took about 4 years. The visa was developed in response to advanced attempts to counterfeit and alter the Teslin visa, according to State. To quickly address the sophisticated alteration attempts to the Teslin visa while the Lincoln visa was under development, State developed a new version of the Teslin visa—the MRV-2000—as a short-term solution for addressing the counterfeit threat. The MRV-2000 was tested in 1999 and issued in 2000. State was able to make minor changes on short notice, such as additional coding, to distinguish the MRV-2000 for inspection purposes and provide a short-term solution during the several years it took for the redesign of the visa to be completed. From 1998 to 1999, State requested industry experts to help in the development of the design and conducted studies of available security papers and ink jet printers. Various paper suppliers and NIST conducted vulnerability tests, demonstrating the durability of features on available papers. According to State officials, after identifying currently advantageous security features, State then moved into the selection of paper, glue, release liner, process offset, and fluorescent inks.
FDL provided forensic testing, such as chemical sensitivity testing, for the selected design. See figure 3 for the timeline of development for the Lincoln visa. The development of the new passport card is expected to take a little more than 2 years, according to current State and DHS plans. State, in consultation with DHS, has been developing a new passport card since early 2006. In January 2006, State and DHS announced the development of the passport card. On May 25, 2007, the request for proposals for the passport card was released. According to State and DHS plans, from July until December 2007, the proposals will be reviewed and testing will be conducted, including durability testing. State expects to begin issuing the new cards in 2008. State updates or changes the security features of its passports and visas in response, in part, (1) to detected attempts to counterfeit or alter these documents and (2) to recommended international standards for secure travel documents. State also made improvements to the passport to match requirements for enhanced security features in the passports of countries in the Visa Waiver Program. State obtains information on detected attempts to counterfeit or alter passports and visas from a variety of sources in the United States and other nations, according to State officials. For example, State occasionally receives information gathered by DHS regarding seizures of fraudulent passports and visas. Specifically, FDL provides forensic analysis of identified alterations and counterfeit attempts, and CBP’s Fraudulent Document Analysis Unit provides trend analysis on the types of fraudulent attempts intercepted at the border. The unit forwards these seized documents—primarily passports—to State, according to Fraudulent Document Analysis Unit and State officials.
However, the information that State receives on passport and visa fraud is not centrally collected or analyzed by State for purposes of planning or reassessing document security. According to State, information received on the fraudulent use of passports and visas is reviewed largely on a case-specific basis. Moreover, these data are not collected or analyzed by State in order to identify counterfeit or alteration trends. U.S. currency faces threats similar to those facing passports and visas. According to the National Academies, life-cycle planning can be an effective way to reassess document security and plan for new documents by providing a structured process for re-evaluating the features of a document against evolving counterfeit and alteration threats. The National Academies found that, for bank notes, advances in reprographic technology have made securing currency more challenging, necessitating regular assessments of technologies and threats. According to the National Academies, by continuously evaluating currency designs and introducing new security features, the government does an effective job of staying ahead of counterfeiting threats. In addition, the U.S. Department of the Treasury’s Bureau of Engraving and Printing has reported that protecting U.S. currency is an ongoing process. According to the bureau, it plans to introduce new currency designs every 7 to 10 years. Although State has recently enhanced some security features and introduced new security features to the passport, State does not have a policy for reassessing the resistance of the documents it issues to evolving counterfeit and alteration threats or for planning new generations of travel documents. For example, although the BCC has been in circulation for almost 10 years, until recently State had no formal plans to reassess the current document or develop a new BCC.
In responding to a draft copy of this report, State noted that it is currently redesigning the next generation of the BCC for deployment in 2008, when the current BCCs begin to expire. A structured process for periodically reassessing the security features in documents and planning for new generations should include a policy for reassessing the ability of the document design to resist compromise and fraudulent attempts. For example, to meet acceptable standards for the use of driver’s licenses and identification cards for official purposes, DHS has proposed establishing a policy for annual review of the card design of such documents. This proposed review would address the cards’ ability to resist counterfeit and alteration attempts in several areas, including photo substitution, modification of data, duplication, and reproduction, among others. Such a review of the security features in the passport design, such as long-term vulnerability testing of the chip technology and print durability, could identify potential vulnerabilities in these features before they could be exploited. State shares information on the security features and fraud attempts of passports and visas with U.S. entities, including CBP, FDL, and state and local law enforcement, as well as with State’s overseas counterparts, according to agency documents and officials. Specifically, State’s Fraud Prevention Program distributes newsletters identifying detected attempts to counterfeit, alter, or fraudulently obtain visas and related fraud to DHS entities and U.S. missions overseas. State also bilaterally shares information on the security features of passports and visas to deter the fraudulent use of these travel documents overseas. For example, State conducts fraud prevention training for host government law enforcement and immigration authorities and also works with host governments on U.S. passport and visa fraud investigations and prosecutions.
In addition, State participates in the multilateral organization ICAO to promote travel document security and global interoperability. While State has shared information on the security features and activities related to the travel documents it issues, including the e-passport, State is only beginning to share information that is necessary to verify the authenticity of the electronic data stored in the chip of the e-passport. The international community, through ICAO, has established a directory for international validation of digital signatures of e-passport chips. The United States is currently taking steps to join the directory and share its public key. State and GPO have enacted several measures to ensure the security and physical quality of passports and are working to address weaknesses identified in the passport issuance process; however, additional measures are needed to strengthen the process and minimize vulnerabilities. Specifically, State’s lack of an oversight program for about 8,500 passport acceptance facilities nationwide continues to present a significant fraud vulnerability. State has made recent improvements to the visa issuance process and is working to address identified weaknesses. GPO has established measures to safeguard the physical security and integrity of the passport book and materials and continues to review and strengthen these measures. In the manufacturing process, GPO has identified measures to secure production materials and blank passport books and to ensure the quality of the books. Specifically, GPO has identified control measures in place for the materials used in the production of passports, including the paper, ink, design, binding, and chip and for the blank books. A 2004 GPO Inspector General security review found vulnerabilities in the physical controls of the blank passport, including the delivery of blank books to passport agencies. 
GPO has taken steps to improve its internal controls for passport production as a result of this review and other recent GPO Inspector General reviews, according to GPO Inspector General officials. In addition, GPO has established quality assurance measures for the production of the 1998 passport and e-passport to ensure the books are manufactured to proper specifications. For example, GPO staff inspects the quality of the product at stages throughout the manufacturing process, including inspections of the supply materials. GPO has also established procedures to inspect, analyze, and document metrics associated with the quality of the passport. In addition, GPO is instituting an independent inspection entity that will be responsible for conducting unannounced and random inspections at points in the manufacturing process to verify that quality standards are met. For the e-passport, GPO has identified procedures for inspecting the quality of the chip at several steps along the manufacturing process, while additional measures to further ensure the quality of electronic technology in the e-passport book are under development. According to GPO officials, the established automated system for inspecting the quality of the chip is satisfactory, but the physical quality assurance process is still being developed. Specifically, GPO officials said they are studying international technology standards and lessons learned from international counterparts to develop additional quality assurance procedures for the e-passport manufacturing process. State has taken several steps to ensure the integrity of the passport throughout the issuance process—including establishing internal control standards, conducting periodic audits and other internal reviews, and establishing quality assurance measures for passport processing. For example, State has identified control measures at its passport offices to safeguard passport applications, passport books, and other production supplies.
Specifically, State’s internal controls handbook for domestic passport offices provides guidance for ensuring the integrity of passport operations, including guidance for (1) employee integrity and conduct, (2) applications receipt, (3) counter applications, (4) cashiering, (5) adjudication, (6) blank book control, (7) duty officer program, and (8) protection of the premises and information. According to State officials, the handbook is currently being updated to further strengthen controls and address identified weaknesses. The handbook identifies and provides procedures for areas of identified vulnerability, including the accountability of passport books, money, and adjudicative decision making, but it does not include internal controls for the passport-related functions performed at the acceptance facilities. To ensure compliance with these measures, State conducts periodic management assessments and internal control reviews for each domestic passport office as well as periodic audits and other internal reviews of its passport issuance process. These reviews cover general management, use of facilities, adjudication, customer service, fraud prevention, passport book processing, and internal controls, as well as provide recommendations for the improvement of operations. In addition, State also conducts periodic audits and other internal reviews of its passport issuance process. First, the management at the domestic passport offices conducts weekly and biweekly audits of adjudicated applications to review compliance with adjudication guidelines. Second, State’s passport service management in Washington, D.C., has recently taken steps to conduct periodic validation studies, which are large-scale audits of passport applications, at all passport offices. According to State officials, a pilot validation study was conducted in 2006, reviewing over 20,000 adjudicated applications. 
These officials indicated that they are currently developing the methodology and implementation plan for future validation studies. Third, another management-led effort took place in the summer of 2006, when State’s Passport Office convened a number of working groups in Washington, D.C., to improve passport operations and address recommendations raised by prior GAO and State Inspector General reports. Specifically, these working groups focused on areas such as the national fraud prevention program; internal controls; and fraud metrics, statistics, and trend analysis. As a result of the recommendations of these working groups, State’s passport management plans to implement several initiatives in the next year to improve overall operations. State also has measures in place to ensure the quality and accuracy of passports issued to applicants. For example, passports are inspected after the applicant’s information has been added to the blank passport book to verify the information has been correctly printed and, for e-passports, stored onto the chip. In addition, each e-passport is tested at the issuing passport agency to ensure that the personalized chip can be read by an e-passport reader. In addition to the efforts described above, State occasionally modifies the regulations governing passport operations. For example, State revised its regulations in 2001 to require that both parents consent to the issuance of a passport for children under age 14 and, in 2004, further amended the regulation to require that children under age 14 also appear personally when applying for a passport. These changes were made, in part, to improve State’s ability to combat international parental child abduction, but the measures have also helped prevent or deter identity theft-related fraud in passport applications, according to State officials.
In commenting on a draft copy of this report, State indicated that these changes were also made to comply with related statutory requirements. We previously reported that the acceptance agent program was a significant fraud vulnerability. State has addressed some weaknesses identified in the training of acceptance agents; the agents play a critical role in establishing identity, which is essential to preventing the issuance of genuine passports to criminals or terrorists under false identities as a result of the receipt of a fraudulent application. However, we found that many of the problems with the oversight of passport acceptance facilities we identified in 2005 persist. Specifically, State lacks an internal control plan for its acceptance facilities to ensure that effective controls are established and monitored regularly. An internal control plan should identify the roles and responsibilities of all individuals whose work affects internal control; lay out specific control areas; cover risk assessment and mitigation planning; and include monitoring and remediation procedures. Moreover, ICAO guidance for the issuance of travel documents recommends several procedures to combat fraudulent applications, including (1) regular training for individuals who accept applications to increase their awareness of potential fraud risks and (2) processes to ensure random access between the acceptance agent and applicant. Numerous passport officials and Diplomatic Security investigators told us that the acceptance agent program remains a significant fraud vulnerability. State passport officials told us there have been fraud investigations associated with passport acceptance facilities or the individuals working there; however, they did not provide us with additional information on these cases.
Examples of acceptance agent errors that were brought to our attention include important information missing from documentation, such as evidence of birth certificates and parents’ affidavits concerning permission for children to travel, as well as photos that were not properly attached to the application. One passport specialist also cited a case where the photo submitted with the application did not match the identity of the applicant. In another case, a passport official told us of an acceptance facility that had accepted a passport application for an individual without the person being present and, therefore, did not verify the applicant’s identity. In addition, managers at two passport offices said their offices often see the same mistakes multiple times from the same acceptance facility. These problems are of particular concern given the persistent attempts to fraudulently obtain legitimate passports using stolen identity documents. Although resources and other tools are available to passport examiners at domestic passport offices to verify citizenship evidence and potentially detect false claims of identity, a number of indicators observed during the in-person inspection of applicants enhance the ability to detect efforts to use a false identity to obtain a genuine passport. Moreover, passport examiners and other officials at passport offices told us it is easier to detect application fraud when interviewing applicants directly at the passport counter. However, the majority of passport applications that passport examiners adjudicate are accepted by individuals at passport acceptance facilities. State has taken action to address some weaknesses we previously identified with the acceptance facility program. These actions include the following: In mid-2006, State began to develop a system to track the names and signatures of authorized acceptance agents, the training status of these agents, and the level of proficiency achieved in the training.
According to State officials, this system is expected to be fully implemented by the end of 2007. In May 2007, State implemented an online training program for use by nonpostal passport acceptance agents. This program was adapted from a computer-based training program previously developed by State and the U.S. Postal Service to train passport acceptance agents at postal service facilities. In the spring of 2007, State began to discuss a system for tracking accepted passport applications by acceptance facility. In addition, State’s Bureau of Consular Affairs is proposing to update and amend some of its passport regulations. These amendments would, among other things, codify the requirement that passport acceptance agents be U.S. citizens, permanent employees, and 18 years or older, and have successfully completed the training as detailed by guidance provided by State. While these requirements are already State policy, the proposed changes would make them formal regulations. In addition, another change would require passport acceptance facilities within the United States to maintain a current listing of all passport acceptance agents. If enforced, these regulations would strengthen the application acceptance process. State officials attribute problems with applications received through acceptance agents to the limited oversight of acceptance agents. For example, accountability for the number of passport agents authorized to accept passport applications remains unclear. Officials at two passport offices confirmed that their passport offices now maintain records of the names of individuals accepting passport applications at designated acceptance facilities in their region. However, they expressed reservations about relying too heavily on the accuracy of this information given the absence of a program to audit or verify the performance of acceptance agents. In addition, State makes a limited number of oversight visits to acceptance facilities. 
Primarily due to the large number of acceptance facilities in each passport office region, these offices concentrate their training and oversight visits on acceptance facilities geographically close to the passport office or those acceptance facilities identified to have problems. In the absence of a formal mechanism for monitoring the performance of acceptance agents, officials at two of the passport offices we visited had developed individual systems for tracking the passport acceptance facility or agent associated with an application detected to be fraudulent by passport examiners. GPO and State have measures for ensuring the quality of visas, including BCCs. In recent years, State has taken a number of steps to strengthen the visa issuance process, as well as a more recent measure to secure BCCs. GPO and State have identified measures to ensure the physical security and quality of visas. GPO has measures in place to secure the production of visa foils manufactured by a vendor. GPO approves the vendor’s security control plan and conducts, with State, an on-site inspection of the vendor’s facility prior to the award of the contract and the sharing of a detailed description of the security features in the visa design, according to GPO. State receives the blank visa foils directly from the vendor using a secured carrier. For the production of BCCs, DHS’s U.S. Citizenship and Immigration Services (USCIS) has established a number of automated checks within the production system to ensure that the cards are produced within specifications. One check is a quality assurance examination of the card to ensure that the photo is clear and the fingerprint image is complete and clear. USCIS has inventory control checks to account for all BCCs and to ensure the information printed onto the BCCs corresponds to the data provided by State. For example, the check would ensure that a male’s photograph is matched with the correct gender identification. Personalized cards are delivered to the U.S.
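Automated checks like those described for BCC production amount to simple field-level validation: each personalized card is compared against the source data State provided before it leaves the facility. The sketch below illustrates the idea only; the field names, record layout, and rules are invented for illustration and are not USCIS’s actual system.

```python
# Hypothetical illustration of automated record-consistency checks like
# those described for BCC production; field names and rules are invented.

def validate_card_record(record: dict) -> list[str]:
    """Return a list of problems found in a personalized-card record."""
    problems = []
    # The printed data must match the data State provided for the applicant.
    for field in ("name", "date_of_birth", "sex"):
        if record["printed"].get(field) != record["source"].get(field):
            problems.append(f"printed {field} does not match source data")
    # Images must be present and flagged as passing quality review.
    if not record.get("photo_quality_ok"):
        problems.append("photo failed quality review")
    if not record.get("fingerprint_complete"):
        problems.append("fingerprint image incomplete")
    return problems

record = {
    "printed": {"name": "DOE, JOHN", "date_of_birth": "1970-01-01", "sex": "M"},
    "source": {"name": "DOE, JOHN", "date_of_birth": "1970-01-01", "sex": "F"},
    "photo_quality_ok": True,
    "fingerprint_complete": True,
}
print(validate_card_record(record))  # -> ['printed sex does not match source data']
```

A card failing any check would be pulled before delivery, which is the role the report attributes to USCIS’s inventory and quality assurance examinations.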
consulates in Mexico, where the cards are checked for accuracy and quality before being delivered to the applicants. State, along with Congress and the Department of Homeland Security, has initiated new policies and procedures since the September 11 attacks to strengthen the security of the visa process, particularly as an antiterrorism tool. Such changes include the following: Beginning in fiscal year 2002, State began a 3-year transition to remove visa adjudication functions from consular associates. All nonimmigrant visas must now be adjudicated by commissioned consular officers. Personal interviews are now required for most foreign nationals seeking nonimmigrant visas. As of October 2004, consular officers are required to scan visa applicants’ right and left index fingers through the DHS Automated Biometric Identification System before an applicant can receive a visa. In 2005 we reported that consular officers are receiving clear guidance on the importance of addressing national security concerns through the visa process. We also reported that State has established clear procedures on visa operations worldwide, as well as management controls to ensure that visas are adjudicated in a consistent manner at each post. State has also increased hiring of consular officers; increased hiring of foreign language proficient Foreign Service officers; revamped consular training with a focus on counterterrorism; strengthened fraud prevention efforts worldwide; and improved consular facilities. In addition, consular officers now have access to more information from intelligence and law enforcement databases when conducting name checks on visa applicants. Further, in a separate report in 2005, we found that while State’s Bureau of Consular Affairs has a set of internal controls to prevent visa malfeasance, and has taken actions to improve them, these internal controls were not being fully and consistently implemented.
State has a program of internal controls for visa issuance detailed in the Foreign Affairs Manual and supplemented by standard operating procedures. Examples include controls to ensure random access between applicants and adjudicators, to minimize the risk of malfeasance; controls for its accountable items; and daily supervisory review of all visa refusals and a sample of visa issuances. As we reported, State has taken a number of steps to strengthen its efforts to protect against malfeasance in the issuance process. For example, to prevent the issuances of nonimmigrant visas to unqualified applicants, Consular Affairs has strengthened its efforts to limit employee access to automated systems that issue visas and has taken steps to ensure that visa applicants cannot predict which officers will interview them. It has also strengthened its criteria for applicants referred by post employees for favorable consideration in obtaining a visa and expedited processing by consular officers. Further, Consular Affairs has increased its emphasis on both headquarters and post supervisory oversight—particularly by ambassadors, deputy chiefs of mission, and principal officers—including by providing training and other tools. It also requires posts to certify in writing annually their compliance with key internal controls. Consular Affairs has issued guidelines on reporting suspicious behavior that may involve malfeasance. It has also enhanced its malfeasance prevention efforts. However, we found some of these controls were not always being followed at the posts we visited in 2005. State officials told us they continue to emphasize the importance of full compliance with internal controls. In addition, State recently took action to secure BCCs in response to the high number of BCCs reported by Mexican citizens as lost or stolen. State officials felt that the ability to obtain another BCC to replace a reportedly lost or stolen BCC was facilitating imposter fraud. 
In January 2007, State implemented a policy requiring BCC holders who report their BCC stolen to be issued a subsequent BCC in the form of a visa foil inserted in their passport. As of April 2007, about 131,000 BCCs have been issued in the form of a visa foil, according to State. The visa category is marked on the visa foil, indicating the traveler is traveling to the United States under the restrictions of the BCC. State officials told us that the reports of lost or stolen BCCs dropped significantly following the implementation of this initiative. The inspection of U.S. passports and visas at ports of entry is a key element in ensuring the security of these documents. Officers are often faced with limited time to process travelers and rely on both the inspection of select features and their interview of the traveler to detect fraudulent use of passports and visas. Limitations in available technology tools at some ports and a lack of timely and continual information on the security features in these documents affect the inspection officers’ ability to fully utilize the security features in passports and visas. Specifically, primary inspection officers are unable to utilize the chip technology in the e-passport to verify document authenticity because e-passport readers are not available at 83 air ports of entry and are not designated for U.S. citizen inspections at 33 other airports of entry. Further, primary officers are not able to utilize the available fingerprint records of BCC holders to verify the authenticity of the documents and travelers at southern land ports of entry, and they also do not routinely refer BCC holders to secondary inspection, where they do have the capability to utilize the fingerprint records. In addition, limited training materials and training opportunities also impede officers’ ability to learn of the security features and fraudulent trends associated with new and older generations of passports and visas. 
For example, in advance of State’s issuance of the e-passport and the emergency passport, State did not provide a sufficient quantity of exemplars and CBP did not update its training for all inspection officers to include information on the security features of these new travel documents. Primary officers are often faced with limited time to process travelers— especially at ports that have a continuous high volume of traffic—and rely on both the observation of travelers’ behavior and the examination of travel documents to detect fraudulent use of passports and visas. Specifically, southern land borders face the largest constraints on inspection time due to the high volume of traffic. Many officers at most ports we visited told us they have detected imposters by observing travelers’ demeanor, questioning them regarding their travel, and visually comparing the travelers’ identities with the biographic data and photo on the travel document. These officers told us they make limited use of the security features in travel documents because of time constraints and often rely on behavioral and other indicators to detect fraudulent use of travel documents. For the inspection of the travel document itself, many officers at most ports we visited told us they generally rely on a few security features, such as watermarks and intaglio printing, and will look for signs of alteration; compare the photo and traveler; examine data on the biographic page, such as the expiration date; and examine the look and feel of the document itself to determine whether the passport or visa is valid and is not fraudulently used by an imposter. Primary officers also utilize a variety of tools and technology to assist in their inspection of security features in passports and visas. 
These include visual inspection tools, such as ultraviolet viewing equipment and handheld magnifying devices, to assist primary officers in identifying signs of alteration and counterfeiting that would not otherwise be detected. Primary officers can also query records of travelers by using the Treasury Enforcement Communications System (TECS)—an interagency database containing lookout information relating to the fraudulent use of travel documents, such as records of U.S. passports reported lost or stolen—or by using DHS’s U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) system to compare visa holders’ biometric records at entry with their records collected at issuance or prior entry. US-VISIT is currently available at primary inspection at 116 air and 15 sea ports of entry, and in the secondary inspection areas at 154 land ports. Many primary officers who have visual inspection tools available told us they utilize these tools to check additional security features when their inspection of the two or three features they typically rely on was not satisfactory. In addition, officers who use the databases to inspect State-issued travel documents told us that access to information on visas issued by State has greatly improved their ability to reliably confirm the validity of visas and detect their fraudulent use. CBP officers at primary inspection are not fully able to exploit the security features in U.S. passports and visas due to the limited availability or use of tools and technology considered critical to ensuring the integrity of travel documents (see fig. 4). As a result, they do not always conduct checks against available records before admitting travelers to the United States. If the tools are not available and the CBP officer determines additional scrutiny of travelers and documents is necessary, travelers will be referred to the secondary inspection area.
At secondary inspection, CBP officers have more time and greater access to inspection-related technologies and equipment and, thus, are more capable of confirming the fraudulent use of U.S. passports and visas identified at primary inspection. Though officers have various tools and technology available to them, the availability and use of equipment to conduct records and identity checks of travelers during primary inspection differ based on whether travelers arrive at air, sea, or land ports. In addition, these and other critical tools and technology are not consistently used at air, sea, and land ports of entry. For example, although CBP guidance states that visual inspection tools should be used and are extremely valuable for detecting counterfeit or altered passports and visas, CBP has provided the tools to many, but not all, primary inspection workstations at air, sea, and land ports of entry. Moreover, use of these tools is a matter of port policy. At the air, sea, and land ports we visited, some officers told us they used these tools consistently, while other officers said they rarely used them. Due in part to the large volume of travelers, primary officers at southern land ports only machine read—access a database that queries travelers’ records—travel documents or manually enter travelers’ biographic data to query records in TECS when deemed appropriate for the inspection situation, given the local traffic flow and traveler wait times. For example, at the southern land border ports we visited, CBP officers stated that currently only about 40 percent of travel documents that are machine readable are actually machine read during primary inspection, although this percentage has been rising in the last several years. CBP supervisors and inspection officers told us that officers are not restricted in their inspection of travel documents and are able to machine read a document should they deem it necessary.
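Machine reading a passport or visa starts with the machine-readable zone (MRZ) defined by ICAO Doc 9303, whose fields carry check digits that let a reader detect misread or altered data before any database query is made. A minimal sketch of the standard check-digit computation follows; the sample field value is illustrative, not taken from a real document.

```python
# ICAO Doc 9303 check-digit computation for machine-readable zone (MRZ)
# fields. Digits map to their values, letters A-Z map to 10-35, and the
# filler character "<" maps to 0; the weights 7, 3, 1 repeat across the field.

def mrz_check_digit(field: str) -> int:
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)
        elif ch.isalpha():
            value = ord(ch.upper()) - ord("A") + 10
        elif ch == "<":
            value = 0
        else:
            raise ValueError(f"invalid MRZ character: {ch!r}")
        total += value * weights[i % 3]
    return total % 10

# A date-of-birth field in YYMMDD form (illustrative value).
print(mrz_check_digit("520727"))  # -> 3
```

A mismatch between a computed and printed check digit flags a misread or tampered field, which is one reason machine reading adds value over purely visual inspection.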
In addition, according to CBP, CBP policy requires that all non-BCC visa holders be referred to secondary inspection at land ports of entry. In contrast, CBP told us that officers on the northern border are required to read all machine-readable documents, and at air ports, officers consistently query travelers’ records to identify lookout information on U.S. passports and visas. Most travelers presenting BCCs at southern land ports are not subject to US-VISIT requirements, although primary inspection officers can refer BCC holders to secondary inspection for US-VISIT screening. However, only a small percentage of travelers with BCCs are referred for US-VISIT screening (see fig. 5)—in particular, only if a primary officer determines that travelers are traveling beyond their geographic limits or exceeding the number of travel days allowed, or if there are concerns about the traveler. Without the use of US-VISIT systems, officers observe and interview travelers and compare the photo and data in the BCC with the bearer of the document, but do not have the benefit of looking for discrepancies between the information provided by travelers and the fingerprint data in the system. CBP officials stated there are no current plans to expand the use of biometric checks on travelers presenting BCCs due, in part, to concerns about extending inspection processing time at primary inspection and space constraints at land ports. However, CBP acknowledges that the use of biometric checks of travelers presenting BCCs, when available, provides additional verification that travel documents are valid and belong to the travelers presenting them, helping to address imposter fraud—the most significant type of fraud associated with BCCs. CBP officers intercepted nearly 16,000 BCCs used by imposters in fiscal year 2006. In addition, DHS is not fully exploiting at primary inspection a key security feature of the new U.S. e-passport—the chip. 
Specifically, because DHS has not fully deployed e-passport readers at all primary inspection areas, officers cannot routinely read and authenticate the chip in e-passports, which would better enable officers to detect many forms of passport fraud, including photo substitution and imposters. Without an e-passport reader, inspection officers do not have the benefit of comparing the traveler with the photograph and biographic information stored in the RFID chip of the e-passport. In response to a legislative requirement, DHS deployed a total of 212 of these readers for use on foreign e-passports at 33 of 116 air ports of entry. These 33 air ports were chosen because they process the largest volume of travelers—about 97 percent—from Visa Waiver Program countries. The remaining readers are used for training purposes. While the same e-passport readers may also be used to read the chip in U.S. e-passports, U.S. citizens are primarily processed through specific lanes at these air ports that are not equipped with e-passport readers. CBP has no schedule to install e-passport readers in primary inspection lanes for U.S. citizens at air ports or at sea and land ports of entry. CBP has also not defined the specific conditions that should be in place to expand the deployment of e-passport readers to additional ports. CBP officials indicated they intend to install e-passport readers at additional ports in the future. These officials cited several reasons why they have not installed additional readers or developed a deployment schedule, including the need for additional funding and for advancements in the readers’ software technology. CBP officials stated that further funding would have to be allocated to expand the deployment of e-passport readers at air, land, and sea ports of entry and that a request has been made to DHS to include additional funding in the agency’s fiscal year 2009 budget. 
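Reading and authenticating the chip follows ICAO's passive authentication scheme: the reader hashes each data group stored on the chip (DG1 holds the MRZ data, DG2 the facial image) and compares the results against hashes recorded in the chip's Document Security Object, whose digital signature is in turn verified against the issuing authority's certificate. The sketch below illustrates only the hash-comparison step (signature verification requires a cryptographic library), and all chip contents are toy placeholders:

```python
import hashlib

def verify_data_groups(data_groups: dict, sod_hashes: dict) -> bool:
    """Compare the SHA-256 hash of each data group read from the chip
    against the hash recorded in the (already signature-verified)
    Document Security Object. Any mismatch means the data were altered."""
    for dg_number, expected in sod_hashes.items():
        actual = hashlib.sha256(data_groups.get(dg_number, b"")).digest()
        if actual != expected:
            return False  # data group altered or missing
    return True

# Toy chip: DG1 holds MRZ data, DG2 a facial image (placeholder bytes here).
chip = {1: b"<placeholder MRZ data>", 2: b"<placeholder facial image>"}
sod = {n: hashlib.sha256(v).digest() for n, v in chip.items()}

assert verify_data_groups(chip, sod)  # untampered chip passes
assert not verify_data_groups({**chip, 2: b"substituted photo"}, sod)
```

Because the substituted photo no longer matches the signed hash, this check defeats photo substitution even when the visual alteration is flawless, which is why the report treats unread chips as an unexploited security feature.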
As of June 2007, CBP has been unable to provide additional information on the details of this budget request. CBP officials also said that due to the current software, the new e-passport readers are slower than current inspection machines and could possibly extend the inspection-processing time for U.S. citizens—negatively affecting land ports already experiencing extensive wait times for inspections. In addition, the e-passport reader software is currently not programmed to validate e-passports’ digital signatures, which ensure the data stored on the RFID chip are authored by an issuing authority—the State Department in the case of U.S. e-passports—and have not been altered. Although DHS officials stated the current reading of the chip in foreign and U.S. e-passports does not fully verify the digital signatures, State and DHS are drafting a memorandum of understanding to govern interagency use of a validation service—DHS’s e-passport Validation Service and Repository— to verify the integrity and validity of electronically stored data on e-passports received at ports of entry. Once a CBP officer at primary inspection intercepts passports or visas that are suspected of being fraudulently used or counterfeit, secondary inspection officers have more time to question travelers, review the validity and authenticity of travel documents, and conduct database checks to confirm the travelers’ identities. In addition, officers conducting secondary inspection are more experienced and trained to use support not available at primary inspection, such as tools and equipment for forensic examination of suspected fraudulent U.S. passports and visas, and additional forensic support and intelligence information from outside sources, such as DHS’s Forensic Document Laboratory (FDL) (see fig. 6). 
Officers at secondary inspection have access to more tools and equipment and more time with which to examine the security features of suspected fraudulent travel documents than do officers at primary inspection. For example, secondary inspection areas generally have a variety of magnifying devices and microscopes to detect data alterations, photo and page substitutions in passports, and altered or counterfeit visas. In addition, secondary officers generally have access to high-intensity light devices, which allow for the inspection of certain paper disturbances often caused by erasures, for example. Recently, some ports have received a laboratory workstation that secondary officers can use to examine questionable travel documents under different types of lighting and at various magnifications. CBP officers in secondary inspection also have access to additional databases to confirm travelers’ identities and verify the authenticity of U.S. passports and visas. For example, secondary officers have access to databases containing lookout information and travelers’ biometric data, including photographs and fingerprints. In addition, secondary officers have access to State’s databases to confirm data on nonimmigrant visa and passport issuance; State’s Consular Consolidated Database (CCD), which stores information about visa applications, issuances, and refusals; and State’s Passport Information Electronic Retrieval System (PIERS), which provides similar data on passport issuance to confirm the identity and authenticity of U.S. passports. During secondary inspections, officers can seek support outside the port to assist in confirming travel document fraud. For example, DHS’s FDL provides forensic document analysis and law enforcement support services to secondary officers in real time, 7 days a week. Some ports are equipped with photophones to transmit images of documents to FDL experts for verification of altered and counterfeit U.S. 
passports and visas, and secondary officers can forward suspected fraudulent U.S. passports and visas to FDL experts for a thorough forensic examination. In addition, to inform officers of fraudulent trends concerning travel documents, secondary inspection areas maintain archived intelligence information from a number of sources, including FDL, State, and intelligence officers at the port, that details how U.S. passports and visas have been fraudulently used in the past. Although CBP officers are responsible for reviewing alerts on the fraudulent use of U.S. passports and visas they receive by e-mail and daily briefings, secondary inspection areas maintain hard copies of intelligence information on file for officers to review, as needed. In addition, archived alerts are also available through electronic databases, such as DHS’s intranet, which officers can choose to access in the secondary inspection area. CBP did not update its training program for all officers to include information on the security features in the e-passport before State began issuing this travel document. Between the summer of 2006 and March 2007, State provided exemplars—genuine documents for training purposes—of the e-passport to a variety of entities, including its U.S. missions overseas, foreign governments, FDL, and DHS, according to State documentation. We found that CBP was not provided with exemplars prior to issuance of the new e-passport. Although State began issuing e-passports as early as December 2005, CBP was not provided with e-passport exemplars until March 2007, according to State documentation. According to CBP officials, training on the features of the new e-passport was not provided to officers at basic training until April 2007. 
In addition, CBP has not provided formal training utilizing e-passport exemplars to officers at all ports of entry, although training with e-passport exemplars was provided to officers at the 33 airports where e-passport readers were installed, according to CBP officials. State officials explained that preparing exemplars is a time-consuming process and that meeting production demands limited the supply of document exemplars. Therefore, according to State, exemplars were provided only to FDL and foreign embassies located in Washington, D.C., prior to the issuance of the e-passport. To provide information on the features of the new documents, FDL prepared an alert for CBP and other law enforcement entities outlining the details of the security features in the e-passport and new emergency passport. Without an official exemplar, a CBP training officer at one port we visited used his own e-passport to provide officers training on the security features of the e-passport. This training officer stated that while he used the FDL alert to train officers, use of the alert alone does not provide officers with an understanding of the look and feel of the actual document. In addition, CBP officers at several ports we visited stated they had inspected e-passports but were not aware of how the security features of the e-passport differed from previous generations and how changes to the security features addressed the types of fraudulent attacks commonly committed against older generations of passports. Given evolving fraud trends and the quality of attempts to alter passports and visas, ensuring officers are properly trained to recognize the fraudulent use of these travel documents is essential. 
Training officers at most of the ports we visited identified the importance of continual training on the security features and evolving fraudulent trends related to all generations of valid passports and visas; however, the extent to which mandatory training is supplemented by refresher training in the subject varied among these ports. For example, two ports we visited provide continual training on fraudulent document detection to all officers yearly, while other ports provided refresher training less frequently. While CBP requires officers to complete courses that include segments in fraudulent document detection relating to passports and visas, CBP officials stated there is currently no program in place to ensure officers receive such training continually. Some senior officers at some of the ports we visited stated they had not been retrained on the security features of passports and visas and fraudulent document detection since basic training. CBP officials explained that the need to balance officers’ inspections responsibilities with training limits training opportunities. At most of the ports we visited, port officials explained there is not enough time to provide all officers with additional training on the security features of valid generations of passports and visas due to inspection priorities and limited staffing at ports. To provide greater opportunities for continual document examination training relating to passports and visas, many ports we visited undertook their own training initiatives. For example, four of the ports added segments on fraudulent document detection to mandated courses that did not already include such information. In addition, based on training developed in the field, CBP developed a Web-based course on the security features of visas. 
Officials at ports undertaking these initiatives said they realized that without continual training, officers often felt less prepared to understand and recognize security features and fraudulent trends. They stated that because passports and visas could remain valid for 10 years, fraudulent attacks committed against older generations of these travel documents often recur, and officers should be reminded of these fraud trends through continuing training. In addition, to identify and adopt best practices on fraudulent document detection training, CBP held a forum in January 2005 that led to the development of a nationwide training effort requiring supplemental training on fraudulent document detection at all ports. However, CBP does not have any plans to hold such forums in the near future, and while CBP encourages ports to adopt initiatives to improve the delivery of refresher training on the examination of passports and visas, it is often not possible to mandate initiatives that are appropriate for all ports because ports differ in the types of fraudulent travel documents they encounter. Ensuring the integrity of passports and visas is an essential part of border security requiring continual vigilance and new initiatives to stay ahead of those seeking to enter the United States illegally. Preventing the fraudulent use of travel documents requires a combination of enhanced document features, solid issuance measures, and an inspection process that utilizes the security features of these documents. A well-designed document has limited utility if it is not well-produced or the inspection does not utilize the available security features to detect attempts to falsely enter the United States. State has added technical features and security techniques to the design and production of these documents that make it much harder to counterfeit or alter new generations of passports and visas. Nonetheless, older documents have been fraudulently used. 
Further, counterfeit and alteration threats to the security of these documents are always changing, requiring regular reassessments of the security features in the documents’ design. In addition, because it takes several years to address a vulnerability that has been identified in a document’s design, a structured process for reassessing the features and planning for new generations of these documents is critical. State has also strengthened the issuance process for visas and passports. Despite some improvements, however, the passport issuance process remains vulnerable, especially at the application acceptance stage, where oversight of the thousands of acceptance facilities—responsible for verifying the identity of applicants— remains weak. Finally, many CBP inspectors at U.S. ports of entry face time constraints in processing large volumes of people and therefore rely on a few visual and tactile security features of passports and visas, in addition to their interviews, to identify fraudulent use of these documents. Moreover, CBP officers are unable to take full advantage of the improved technical and security features in passports and visas because of insufficient training and uneven access to equipment. While it would not be possible to remove all risks inherent in issuing and inspecting travel documents, or to foresee all evolving counterfeit and alteration threats, we believe that more systematic testing, planning, oversight, and data analysis practices could enhance border security. We are recommending that the Secretary of State take the following two actions to improve the integrity of its travel documents. Develop a process and schedule for periodically reassessing the security features and planning the redesign of its travel documents. Establish a comprehensive oversight program of passport acceptance facilities. 
In doing so, State should consider conducting performance audits of acceptance facilities, agents, and accepted applications and establishing an appropriate system of internal controls over the acceptance facilities. We are also recommending that the Secretary of Homeland Security take the following two actions to more fully utilize the security features of passports and visas. Develop a deployment schedule for providing sufficient e-passport readers to U.S. ports of entry, which would enable inspection officials to better utilize the security features in the new U.S. e-passport. Develop a strategy for better utilizing the biometric features of BCCs in the inspection process to reduce the risk of imposter fraud. Finally, we are recommending that the Secretaries of State and Homeland Security collaborate to provide CBP inspection officers with better training for the inspection of travel documents issued by the State Department, so that officers can better utilize these documents’ security features. This training should include materials that reflect changes to State-issued travel documents, including exemplars of new versions of these documents, provided in advance of State’s issuance. We provided draft copies of this report to the Secretaries of State and Homeland Security and to the U.S. Public Printer at the Government Printing Office for review and comment. We also provided a draft copy to the Department of Commerce’s National Institute of Standards and Technology. We received written comments from State and DHS, which are reprinted in appendixes VI and VII, respectively. State, DHS, GPO, and NIST provided technical comments, which we have incorporated in the report, as appropriate. State and DHS concurred with the findings and recommendations of the report. State agreed with our recommendations and described the actions it is taking and plans to take to implement them. 
State also provided additional information on the Consular Consolidated Database (CCD), recent visa fraud cases, and the ways in which State identifies fraudulent passports and visas. DHS concurred with our recommendations and described the actions it is taking and plans to take to implement them. DHS believes it has already implemented our recommendation that it develop a strategy for better utilizing the biometric features of BCCs in the inspection process. We agree that DHS’s US-VISIT capability enables primary inspectors at air and some sea ports of entry to use fingerprint biometrics to compare and authenticate the document and holder of visas and BCCs. However, at land border ports this capability is not available in primary inspection. Furthermore, travelers with BCCs at southern land border ports—the ports where BCC imposter fraud is most significant—are not routinely referred to secondary inspection, where officers do have the capability to utilize the fingerprint records for comparison, and not all BCCs are machine-read for access to the biographic data during inspection at these ports of entry. As a result, inspectors are not making full use of the biometric information available for BCCs. To more fully utilize the available fingerprint biometric in the BCC and mitigate imposter fraud, we are suggesting that DHS develop a strategy to make better use of the BCC’s fingerprint biometric and to increase card reads of the BCC in primary inspection at southern land border ports of entry. We are sending copies of this report to the Secretaries of State and Homeland Security, to the U.S. Public Printer at the Government Printing Office, and to the Director of the National Institute of Standards and Technology. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. 
If you or your staff have any questions about this report, please contact me at (202) 512-4268 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix VIII. To examine the features in passports and visas, we reviewed relevant documentation, including materials on the security features, available counterfeit deterrence and durability studies, fraud bulletins and alerts, and regulations. We also interviewed officials at Department of State’s Consular Affairs Bureau, Department of Homeland Security’s (DHS) Forensic Document Laboratory (FDL), the Department of Commerce’s National Institute of Standards and Technology (NIST), and the Government Printing Office (GPO). To identify required and recommended standards for international travel documents, we reviewed documentation from the International Civil Aviation Organization and attended the organization’s machine-readable travel document symposium in Montreal, Canada. To identify the process for addressing potential risks, we reviewed documentation and interviewed officials at State’s Consular Affairs Bureau, FDL, and NIST. To identify how State obtains, analyzes, and shares information on the features and fraudulent use of these documents, we reviewed relevant documentation, including fraud bulletins and alerts, and met with State officials from the Diplomatic Security and Consular Affairs Bureaus, including the fraud prevention units of passport and visa services, as well as with DHS officials from CBP and FDL. To examine the integrity of the issuance process for these documents, we reviewed relevant documentation, including reports and audits of internal controls and production and issuance procedures, and interviewed officials at State’s Consular Affairs Bureau. 
We also conducted site visits and interviewed officials at seven domestic passport offices and two U.S. consulates in Mexico. To examine how passport fraud is committed during the issuance process, we reviewed State Department Bureau of Diplomatic Security and Bureau of Consular Affairs statistics on passport fraud. We also met with officials at State’s Diplomatic Security Headquarters Criminal Division and at Diplomatic Security’s field offices in Los Angeles, Seattle, Miami, Chicago, Boston, and Portsmouth, New Hampshire. We visited State’s passport-issuing offices in Los Angeles, Seattle, Miami, Chicago, Boston, Portsmouth, and Washington, D.C. We chose the Portsmouth office because it is one of the two passport “megacenters” responsible for adjudicating applications from other regions. We chose these locations to gain an appropriate mix of geographic coverage, workload, and levels and types of passport fraud; we did not select them to be generalizable to all passport offices. We analyzed fraud referral statistics from State’s Office of Passport Services and Diplomatic Security for fiscal years 2002 through 2006. Together with Passport Services officials, we identified the methods used to capture and compile the data and determined that the data were sufficiently reliable and generally usable for the purposes of our study. At five of the seven offices we visited, we conducted interviews with various officials and interviewed passport examiners chosen by office management, although we provided input into the selection of examiners and interviewed these individuals without the presence of management. We also met with Diplomatic Security agents attached to field offices responsible for investigating fraud suspected at the offices we visited. In addition, we interviewed relevant State officials at Passport Services, Diplomatic Security, and the Office of the Inspector General. 
To examine the measures taken to ensure the integrity of blank passports, we visited the GPO production facilities in Washington, D.C., and observed the production of blank passports; interviewed relevant GPO and GPO Inspector General officials about the measures taken throughout the production and delivery processes; and reviewed GPO Inspector General reports on audits of the security aspects of blank passport production and transportation. To examine the measures that have been taken to strengthen the issuance process for visas, we reviewed past GAO reports and interviewed State officials in the Visa Office. To identify the measures taken to ensure the integrity of blank visa foils prior to delivery to State custody, we interviewed GPO officials and reviewed relevant GPO documents. To examine the measures taken to ensure the integrity of the border crossing card (BCC), we visited two production facilities in Vermont and Nebraska where BCCs are produced. We interviewed production and management staff at both of these facilities. We identified and reviewed past GAO and Inspector General reports on the internal controls and audits in place for the visa process. For BCCs, we identified the internal controls and measures that differ from the normal visa process, but did not assess compliance with these controls. To examine the inspection measures and processes for travel documents issued by the State Department at U.S. ports of entry, we reviewed relevant documentation and interviewed officials at DHS’s U.S. Customs and Border Protection (CBP), FDL, and U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program, and conducted site visits to ports of entry. We reviewed CBP inspections program policies, procedures, and related memorandums and relevant laws and regulations. At headquarters, we met with CBP officials responsible for field operations, information technology, training and development, and intelligence. 
We also interviewed officials from the Federal Law Enforcement Training Center and FDL about issues relating to document examination training, and we discussed with FDL officials the types of forensic document analysis and operational support services provided to CBP. In addition, we interviewed US-VISIT officials and reviewed relevant documentation on the deployment and use of inspections-related technologies. To observe inspections processes and measures, we conducted site visits to nine U.S. ports of entry. Due to differences in travel document inspections processes and measures among air, sea, and land ports, we selected three ports of each type. Air ports of entry included Chicago O’Hare, Dallas/Fort Worth, and Miami. Sea ports of entry included Los Angeles/Long Beach, Miami, and Port Everglades, Florida. Land ports of entry included Laredo, Texas; Limestone/Houlton, Maine; and San Ysidro, California. We selected a nongeneralizable sample of air, sea, and land ports that ensured we included a range of the characteristics that can cause variation in the inspections process. Using CBP inspections program performance data, we selected ports that had high and medium levels of fraudulent documents, based on the total and average number of fraudulent travel documents intercepted, and the ratio of total travelers inspected to total fraudulent documents intercepted for fiscal years 2000 through 2005. We also selected ports based on a geographic mix, to include land ports on the Mexican and Canadian border, and a mix of ports in the northern, eastern, southern, and western regions of the United States. At each of these ports we met with port directors, CBP officers responsible for intelligence information and training, observed CBP officers conducting primary inspections, and reviewed procedures and the equipment available in primary and secondary inspection areas to examine State Department-issued travel documents. 
At some ports, no travelers were referred for secondary inspection for the fraudulent use of State Department-issued travel documents at the time we observed inspections; however, CBP officers provided us with an overview of secondary inspection procedures and resources. In addition to the nine ports of entry we selected, we conducted preliminary site visits to the Nogales, Arizona, land port of entry and the Los Angeles and Washington Dulles air ports of entry. During these preliminary site visits, we observed primary and secondary inspection processes and equipment and interviewed CBP officials. We conducted our work from June 2006 through May 2007 in accordance with generally accepted government auditing standards. State issues four types of passports: tourist, official, diplomatic, and emergency. A tourist passport, for individuals 16 years or older, is valid for 10 years from the date of issuance; it is valid for 5 years for younger applicants. An official passport, for federal employees traveling on official government business, is valid for 5 years from the date of issuance. A diplomatic passport, for government officials with diplomatic status, is valid for 5 years from the date of issuance. An emergency passport, for individuals overseas who no longer possess a valid passport, may be valid for up to 1 year or, in cases of repatriation, limited to direct return to the United States. In conjunction with the rollout of the new e-passport, State began issuing a new emergency passport in August 2006, representing the first time that U.S. embassies and consulates issued a standard-design emergency passport. Previously, U.S. embassies and consulates used the 1994 or earlier passport versions to issue passports for emergency purposes. The emergency passport resembles the e-passport except that it is personalized using a foil that is affixed in the book in a manner similar to a visa foil. 
The passport card is expected to be valid for 10 years from the date of issuance for individuals 16 years or older and valid for 5 years for younger applicants. State plans to issue the passport card in 2008. To test the passport design, State requested expertise from FDL and NIST. Specifically, FDL conducted counterfeit deterrence studies on the security features of the diplomatic and tourist e-passport in 2005. FDL had conducted similar studies for State in the past on the 1998 tourist passport. In addition, State asked FDL to test the physical security of the e-passport using the diplomatic e-passport. Results of FDL tests were incorporated into the design prior to the issuance of the tourist passport. NIST also conducted tests, such as durability testing, to evaluate the technical merits of passport books and to inform GPO and State’s decisions for awarding contracts to suppliers. While there is a provision in the awarded contracts to conduct long-term durability testing, NIST has not been asked to provide these tests. In addition, in response to security and privacy concerns, NIST was requested to evaluate the vulnerability of the e-passport chip to remote access by an unauthorized party. Additional tests were also conducted at airports to assess the performance of the new e-passport in an actual inspection environment. For example, tests were conducted with airlines in which holders of U.S. diplomatic and official e-passports presented their e-passports for inspection when arriving in the United States at select airports. These tests were conducted to gather information on the accuracy and speed in reading the chip to support the development and implementation of the e-passport. State incorporated the results from the tests to improve the design. 
Once a passport application has been received by one of the 17 domestic passport-issuing offices, each application must be examined by a passport examiner who determines, through a process called adjudication, whether the applicant should be issued a passport. Adjudication requires the examiner to scrutinize identification and citizenship documents presented by applicants to verify their identity and U.S. citizenship. When examiners detect potentially fraudulent passport applications, they refer the applications to their local fraud prevention program for review, with potential referral to State’s Bureau of Diplomatic Security for further investigation. Once an applicant has been determined eligible for a passport by a passport examiner, the passport is personalized with the applicant’s information at one of the domestic passport-issuing offices or the production facility and then delivered to the applicant. For an overview of the passport process, see figure 7. DHS is responsible for establishing visa policy, reviewing implementation of the policy, providing additional direction, and reviewing petitions for immigration. State manages the visa process, as well as the consular corps and its functions at 219 visa-issuing posts overseas, and provides guidance, in consultation with DHS, to consular officers regarding visa policies and procedures. GPO oversees the production of blank visa foils. Visa foils are personalized at posts overseas with the applicant’s personal information, attached to the foreign passport, and delivered to the applicant. DHS’s U.S. Citizenship and Immigration Services produces and personalizes BCCs once an applicant has been determined eligible by a consular officer and delivers the cards to State for distribution by the U.S. mission in Mexico. For an overview of the visa process, see figure 8. 
The primary inspection process for passports and visas varies at air, sea, and land ports of entry due to differences in ports’ environments and the risk each type of port faces with regard to fraudulent travel documents. In addition, the mode of travel and how travelers bearing passports and visas enter dictate the primary inspection procedures. For example, while at each port, primary officers question travelers regarding their identity and purpose of travel, and examine their passports or visas, the availability and use of equipment to conduct identity and records checks of travelers during primary inspection differ based on whether travelers arrive by plane, sea vessel, vehicle, or on foot. If the primary officer determines that further review is needed, the officer will refer the traveler to secondary inspection. In secondary inspection, an officer makes a final determination to admit the traveler or deny admission for reasons such as the presentation of a fraudulent or counterfeit passport or visa. Once a CBP officer in secondary inspection has determined a document is fraudulent or is being presented by a traveler other than the rightful holder, the officer processes the traveler as inadmissible and ensures that information about the document is distributed promptly. Information about seized fraudulent and counterfeit passports and visas is regarded as possible intelligence that may have a connection to other criminal activities and national security concerns, such as terrorism. See figure 9 for an overview of the inspection process at U.S. ports of entry. Air Ports of Entry: Prior to travelers’ arrival, for flights to the United States, commercial airlines are required to submit passenger and crew manifests containing first and last names, dates of birth, nationalities and passport numbers to CBP through the Advanced Passenger Information System (APIS). 
With information from APIS, CBP officers conduct queries of lookout records for each traveler in the Treasury Enforcement Communications System (TECS). TECS queries provide officers with lookout information on travelers, including alerts of lost and stolen travel documents that may be used fraudulently. In addition, primary officers query records of U.S. visas to verify the State Department’s visa information. For a traveler with a U.S. nonimmigrant visa subject to processing in DHS’s U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) systems, primary officers collect scans of the traveler’s two fingerprints (the right and left index fingers) and take a digital photograph of the traveler. The computer system compares the two fingerprints against existing records collected at issuance to confirm that the traveler is the person to whom State issued the U.S. visa. Sea Ports of Entry: At sea ports of entry, commercial carriers are required to submit passenger manifests to CBP through APIS prior to docking, and CBP officers analyze TECS data using APIS to identify passengers requiring further examination when they enter the United States. Some sea ports have automated terminals with computer systems equipped with TECS and US-VISIT systems, and the inspection process is similar to that at an air port of entry. Other sea ports have nonautomated terminals that are not equipped with computer systems. At these terminals, primary inspections occur onboard or dockside, where officers rely on the advanced TECS checks and do not conduct US-VISIT biometric checks. Officers at the sea ports we visited stated the risk of travelers presenting fraudulent travel documents at sea ports is not as significant as at air and land ports of entry, as most cruise ship passengers begin and end their trips in the United States, and crew members often make several entries and are inspected each time. 
Land Ports of Entry: CBP has established procedures to inspect travelers expeditiously due to the large volume of travelers arriving on foot and in vehicles at land ports of entry—more than 85 percent of all entries into the United States. Primary officers perform pedestrian and vehicle inspections usually with no advanced passenger information and do not consistently conduct record checks in TECS. In addition, primary inspection procedures differ for pedestrians and vehicles. For pedestrians, if TECS is available, the traveler’s name can be machine read from the travel document or manually keyed in by the primary officer. For vehicles, officers frequently inspect multiple travelers entering in a single vehicle, and the TECS queries are conducted primarily on the vehicle data to refer the vehicle and travelers for secondary inspection. Documents and names of the vehicles’ occupants are generally checked randomly or when the officer suspects something is wrong. In addition, travelers with nonimmigrant visas or border crossing cards requiring additional US-VISIT processing are sent to secondary inspection areas. In general, at land ports, officers rely on visual observation, interviewing skills, and a quick check of document security features and facial identification to identify imposters and determine secondary referrals. The following are GAO’s comments on the Department of State’s letter dated July 24, 2007. 1. We believe there is value in a mandated regular review procedure for document integrity and recommend that State develop a process and schedule for periodically reassessing security features in the design of its travel documents. We recognize that an informal process is important for responding to vulnerabilities and counterfeit or altered passports and visas, as they are discovered. It is not our intention to inhibit or replace the informal process already in place. 
However, we believe that an informal process by itself is not an effective way to reevaluate the security features of passports and visas against evolving counterfeit and alteration threats. While State has made adjustments in the design of passports and visas, its approach has been largely reactive. A structured process for reassessing the features and planning for new generations of passports and visas is critical because counterfeit and alteration threats to the security of these documents are always changing, many passports and visas have a long lifespan, and it takes State several years to fully implement a new document design. The increasing pace of technology change and use of electronics makes State’s current approach less viable than it might have been in the past, and best practices, such as for currency design, suggest that periodic evaluation of designs and introduction of new security features are more viable approaches in the management of counterfeit and alteration threats. We welcome State’s recent steps to develop a schedule for periodically reassessing security features in the design of passports and visas. 2. We welcome State’s recent steps to hire an analyst to design and implement a comprehensive program for the oversight of passport acceptance agents. During the course of our review, we were informed that the acceptance agent program remained a significant fraud concern and that efforts were under way to implement actions to address identified vulnerabilities in this program. However, State officials were unable to provide us with documentation identifying these vulnerabilities or a plan for addressing them. After our draft report was provided to the agency for review and comment, we were provided with a draft document identifying initiatives to improve oversight of the passport acceptance agent program. State has identified the vulnerabilities in this program and proposed reasonable oversight measures to address these vulnerabilities. 3. 
We revised the text of this report to reflect this information. 4. During the course of our review, we were informed by Consular Affairs officials in the Office of Fraud Prevention Programs that State did not have a formal process for reassessing the security features in visas or for planning the redesign of the documents in the future. According to these officials, formal plans for redesigning the BCC did not exist, although they did indicate that State was considering the use of the new passport card design to develop the next BCC. We welcome the decision to model the new BCC after the passport card design and issue the next generation of this card in 2008, when the current cards will begin to expire. Furthermore, we believe it is important to periodically reassess the security features in the design of the new BCC to manage future counterfeit and alteration threats. The following are GAO’s comments on the Department of Homeland Security’s letter dated July 20, 2007. 1. We agree that DHS’s US-VISIT capability enables primary inspectors at air and some sea ports of entry to use the fingerprint biometric to compare and authenticate the document and holder of visas and BCCs. However, at land border ports this capability is not available in primary inspection. Travelers with BCCs at southern land border ports—the ports where BCC imposter fraud is most significant—are not routinely referred to secondary inspection, where they do have the capability to utilize the fingerprint records for comparison. In addition, at southern land border ports, not all BCCs are machine-read for access to the biographic data and photo during primary inspection, and vehicle lanes do not have the capability to access the photograph for comparison. As a result, inspectors are not making full use of the biometric information available for BCCs. 
To more fully utilize the available fingerprint biometric in the BCC and mitigate imposter fraud, we are suggesting that DHS develop a strategy to make better use of the fingerprint biometric in the BCC and to increase card reads of the BCC in primary inspection at southern land border ports of entry. In addition to the contact named above, John Brummet (Assistant Director), Claude Adrien, Monica Brym, Joe Carney, Richard Hung, and Bradley Hunt made key contributions to this report. Technical assistance was provided by Kate Brentzel, Aniruddha Dasgupta, Etana Finkler, Elisabeth Helmer, Sona Kalapura, Chris Martin, Jose Pena, and Marisela Perez.
Travel documents are often used fraudulently in attempts to enter the United States. The integrity of U.S. passports and visas depends on the combination of well-designed security features and solid issuance and inspection processes. GAO was asked to examine (1) the features of U.S. passports and visas and how information on the features is shared; (2) the integrity of the issuance process for these documents; and (3) how these documents are inspected at U.S. ports of entry. We reviewed documents such as studies, alerts, and training materials. We met with officials from the Departments of State, Homeland Security, and Commerce's National Institute of Standards and Technology, and U.S. Government Printing Office, and with officials at seven passport offices, nine U.S. ports of entry, two U.S. consulates in Mexico, and two Border Crossing Card production facilities. The Department of State (State) has developed passports and visas, including border crossing cards (BCC), that are more secure than older versions of these documents; however, older versions have been fraudulently used and remain more vulnerable to fraud during their lifespan. For example, earlier versions valid until 2011, of which there are more than 20 million in circulation, remain vulnerable to fraudulent alteration by such means as photo substitution. Although State has updated or changed the security features of its travel documents, State does not have a structured process to periodically reassess the effectiveness of the security features in its documents against evolving threats and to actively plan for new generations. State has taken a number of measures to ensure the security and quality of passports and visas, including establishing internal control standards and quality assurance measures, training of acceptance agents, and initiating new visa policies and procedures. However, additional measures are needed in the passport issuance process to minimize the risk of fraud. 
State lacks a program for oversight of the thousands of passport acceptance facilities that serve an important function in verifying the identity of millions of passport applicants each year. Officers in primary inspection--the first and most critical opportunity to identify fraudulent travel documents at U.S. ports of entry--are unable to take full advantage of the security features in passports and visas. These officers rely on both their observations of travelers and visual and manual examination of documents to detect fraudulent documents. However, the Department of Homeland Security (DHS) has not yet provided most ports of entry with the technology tools to read the new electronic passports and does not have a process in place for primary inspectors to utilize fingerprints collected for visas, including BCCs, at all land ports of entry. Moreover, DHS has provided little regular training to update its officers on the security features and fraud trends in passports and visas.
The EITC was enacted in 1975 and was originally intended to offset the burden of Social Security taxes and provide a work incentive for low-income taxpayers. It is a refundable federal income tax credit, meaning that qualifying working taxpayers may receive a refund greater than the amount of income tax they paid for the year. For tax year 2006, the maximum amount of EITC a taxpayer could receive was $4,536. Beginning in 1979, individuals could elect to receive the EITC in advance payments from their employer during the year along with their regular pay. One purpose of the advance payment is to provide employees with an immediate reward for their work effort rather than forcing them to postpone receiving the credit until they file their tax returns. To get the credit, at any time during the year, an employee would complete the Form W-5 and provide it to his or her employer. Upon receiving a completed Form W-5, the employer calculates the amount of the AEITC payment to include with the employee’s pay by considering (1) the employee’s wages, (2) whether the employee is married or single, and (3) if married, whether the employee’s spouse has a Form W-5 in effect with an employer. The AEITC payment to the employee is considered to be equivalent to the employer making a payment to IRS for employee income tax withholding and employee and employer Social Security and Medicare tax. When employers file their quarterly tax returns, they show the total payments made to employees on the AEITC payment line on Form 941, “Employer’s Quarterly Federal Tax Return.” This amount is then subtracted from the total amount of tax the employer owes. At the end of the calendar year, the employer indicates the total AEITC payments the employee received on the employee’s Form W-2. Employees are then required to report this amount either on their Form 1040 or Form 1040A tax return. 
Assuming the employee qualified for the EITC, the AEITC amount received should be reported on the tax return as other taxes, which, in effect, subtracts the amount received from the total amount of any EITC. If the employee did not qualify for the EITC, he or she is still required to file a tax return, regardless of income level, and the AEITC amount paid is added to any taxes owed. Figure 1 illustrates this process and notes the major forms involved. An individual must meet certain requirements to qualify for the AEITC. Specifically, an individual must expect to (1) be able to claim the EITC for the current year (EITC requirements for 2006 are shown in table 1), (2) have at least one qualifying child, and (3) have earned income and adjusted gross income below a certain amount for that year. Additional requirements are that an individual may have only one Form W-5 in effect at a time and must inform his or her employer if a spouse also has a Form W-5 in effect. AEITC recipients can receive up to 60 percent of the EITC benefits for one qualifying child. The maximum AEITC amount for 2006 was $1,648. A change in an individual’s personal circumstances after submitting the Form W-5 could affect his or her eligibility for both the EITC and the AEITC. For example, an individual who received the AEITC, separated from his or her spouse during the year, and used the married filing separately filing status would not be eligible for the AEITC (or EITC). In such cases, when the individual files his or her tax return and reports the AEITC amount received, the amount would be added to any taxes due or subtracted from any refund. About 3 percent of the EITC recipients potentially eligible for the advance, about 514,000 individuals on average, elected it in each year, tax years 2002 through 2004, according to data employers reported on the Form W-2. 
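The cap arithmetic described above can be sketched as follows. The one-child EITC maximum of $2,747 for 2006 is an assumption not stated in the report, and the even spread across pay periods is a simplification; actual per-paycheck advance amounts came from IRS wage-bracket tables, which this sketch does not reproduce.

```python
# Illustrative sketch of the 2006 AEITC cap arithmetic (not IRS's tables).
ONE_CHILD_EITC_MAX_2006 = 2_747   # assumed 2006 one-child EITC maximum
ADVANCE_SHARE = 0.60              # OBRA '93 capped the advance at 60%

def annual_aeitc_cap(qualifying_children: int) -> int:
    """Yearly advance cap: 60% of the one-child credit; zero without
    at least one qualifying child."""
    if qualifying_children < 1:
        return 0
    return int(ONE_CHILD_EITC_MAX_2006 * ADVANCE_SHARE)

def per_paycheck(annual_amount: float, pay_periods: int = 26) -> float:
    """Spread a yearly AEITC amount evenly across biweekly paychecks."""
    return round(annual_amount / pay_periods, 2)

print(annual_aeitc_cap(1))   # prints 1648, the 2006 maximum cited above
print(per_paycheck(100))     # prints 3.85, roughly $4 every 2 weeks
```

The 60-percent share times the assumed one-child maximum reproduces the $1,648 figure in the text, and the per-paycheck split shows why a $100 yearly advance amounts to only a few dollars per pay period.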
As shown in figure 2, about 21 million taxpayers received the EITC each year and of these recipients, about 17 million were eligible for the AEITC. In total, AEITC recipients received an average of about $146 million in AEITC for each tax year from 2002 through 2004. Yet, those who elected it often received relatively few dollars from their employers. As table 2 indicates, about half of all individuals who got AEITC received $100 or less each year and about 75 percent received $500 or less. Even at $100 per year, this equates to about $8 a month, or $4 every 2 weeks, and even at $500 per year, this equates to about $42 a month or $19 every 2 weeks. The amounts most individuals received were significantly less than the yearly maximum, and were consistent for the 3 years we reviewed. The number of individuals who have elected the AEITC has remained low for many years. For example, between 1990 and 1997, AEITC use never exceeded 2 percent of qualifying EITC recipients. IRS calculated use based on tax returns showing receipt of the AEITC divided by the EITC population that reported at least one qualifying child. Using this same methodology, use for tax years 2002 through 2004 was relatively the same at an average of 0.8 percent. Our figures for 2002 through 2004 are higher than these prior AEITC figures because our figures are based upon the Forms W-2 that reported AEITC (see fig. 2). Historically, AEITC use has been based upon the number of individual federal tax returns that reported an amount on the AEITC line. The historic method excludes individuals who did not file a federal tax return and individuals who filed a federal tax return but did not report the AEITC. Additional demographic data about individuals who elected the AEITC are included in appendix III. These additional data represent new analysis that has not been previously available, including each recipient’s filing method, age, and gender, and each employer’s size. 
Use of the AEITC has remained low for many years despite several targeted federal efforts to increase it over approximately the last 15 years, including both legislative and administrative changes. (A full description of these changes is included in app. IV.) One significant piece of legislation was OBRA ’93, which involved the AEITC in two ways: it (1) reduced the maximum amount of EITC an individual could receive in advance and (2) required IRS to conduct outreach directly to potentially eligible AEITC recipients. First, OBRA ’93 reduced the AEITC maximum from 100 percent to 60 percent of the maximum credit available to a taxpayer with one qualifying child. This change was made to improve compliance and lessen concerns that recipients would owe the difference when filing their federal tax return, which was thought to discourage AEITC use. Second, OBRA ’93 directed IRS to send notices to taxpayers who were likely to be eligible for the AEITC for 2 years and directed the Secretary of the Treasury to study the effect of the notice program on AEITC use. Only some information is available about the first notice mailing, which occurred in 1994. For the first notice mailing, IRS mailed Publication 1235, “Advance Earned Income Tax Credit Brochure,” and the Form W-5 to about 13.5 million taxpayers who were potential AEITC candidates during tax year 1993, informing them about the AEITC. AEITC use increased about 1 percent following this effort; however, because other outreach efforts were ongoing during this time, IRS could not conclude that the increase was attributable to the notice or any other effort specifically. In 1997, IRS mailed the notice to about 6 million taxpayers who claimed the EITC in tax year 1996, but did not report receiving AEITC on their federal tax return. 
With this second mailing, IRS created two groups, a test group of about 60,000 taxpayers that received the notice and a control group of about 60,000 taxpayers with similar characteristics who did not. Results from the IRS report indicated that about 1.27 percent (771 taxpayers) of the tax returns in the test group reported the AEITC compared to 0.51 percent (309 taxpayers) of returns in the control group. The summary report concluded that further efforts to increase AEITC use substantially are unlikely to succeed. Further, the study recommended that notification of EITC recipients about the advance not be repeated. Some of IRS’s administrative changes include outreach to specific groups and changes to publications. For example, after our 1992 report, the White House, the Treasury Department, and IRS conducted extensive EITC and AEITC outreach efforts, including a 1993 announcement of the AEITC by President Clinton. Other outreach efforts included IRS contacts with charitable, social welfare, and minority groups to encourage awareness of the EITC and AEITC among their memberships. IRS also contacted a number of employer organizations to encourage them to publicize the AEITC with their memberships. IRS also made changes to its forms, developed print and video products, and increased speaker seminars to inform the public about the AEITC. For example, IRS developed publicity materials, such as grocery bag and milk carton art, brochures, and posters; provided information in the Small Business Taxpayer Education Program guide; and increased outreach speaker seminar efforts. Presently, IRS continues to conduct outreach about the AEITC as part of its EITC outreach efforts. IRS focuses its outreach to large employer organizations or to specific large employers, which then promote the AEITC to employers or employees. Increasing AEITC use in the future is unlikely for several reasons, but perhaps primarily because of potential recipients’ preferences and high AEITC turnover. 
Interviews with IRS officials, other experts, and our prior AEITC work suggest that those eligible for the AEITC prefer receiving the EITC in a lump sum after they file their federal tax return instead of receiving relatively small portions spread throughout the year. Another reason is that, despite the reduction in the yearly AEITC maximum to 60 percent of the maximum credit available to a taxpayer with one qualifying child, results from an IRS-funded study using focus groups of EITC participants and interviews we conducted with experts indicated that potential recipients continue to have concerns that they would receive more AEITC than they were ultimately entitled to and that they would owe the difference when filing their federal tax return. In addition, AEITC growth is adversely affected by individuals who elect the AEITC and fail to elect it again, i.e., turnover. As table 3 indicates, more than half of the individuals who elected the AEITC did so for the first time since 1999 in either tax year 2002, 2003, or 2004. With the overall AEITC use remaining relatively constant over the 3 years, this indicates the large number of yearly first-time recipients was almost equally offset by existing recipients forgoing the AEITC in a following year. The percentage of AEITC first-time recipients is much higher than the percentage of first-time EITC recipients, which is slightly less than one-third. Also, of the individuals who elected the AEITC for the first time in 2002 or 2003, about 28 percent elected it again in the following year, 2003 or 2004, respectively, while the remainder did not elect it again in the subsequent year. Conversely, only about 98,000 (9 percent) individuals elected the AEITC consecutively in all 3 years, 2002 through 2004. 
Overall, as many as 80 percent of all AEITC recipients did not comply with or made errors involving at least one of the three AEITC requirements that we reviewed, and they received about $282 million when the 3 years, 2002 through 2004, are aggregated. Some taxpayers were noncompliant with more than one requirement. Those requirements are having a valid SSN, filing a federal tax return, and reporting the proper amount of AEITC received on the tax return (see fig. 3). Specifically, in tax year 2002, we found that individuals were noncompliant with at least one of three requirements we reviewed 79 percent of the time and they received about $93 million of AEITC. For tax year 2003 and 2004, individuals were noncompliant with at least one requirement 78 percent (about $91 million) and 79 percent (about $98 million) of the time, respectively. Some of the noncompliance we identified could have resulted from IRS or employer clerical errors or improper reporting by the taxpayer. Therefore, some of the errors may be correctible or were corrected by filing an amended return. IRS cannot readily identify the number of amended returns specifically associated with the AEITC because such returns combine several credits onto one line. The explanation attached to the amended return would provide details on the change and any analysis of the explanation would be a manual process. AEITC recipients are required to provide their employer with a valid SSN for the Form W-2. As table 4 illustrates, about 20 percent (more than 100,000) of AEITC recipients each year may not have been eligible for the advance because they did not have a valid SSN on their Form W-2. Collectively, these individuals received between $37 million and $39 million in AEITC each year. The data are consistent over the 3 years reviewed. 
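Because some taxpayers failed more than one requirement, the "at least one of three" figure behaves like a set union: each noncompliant recipient is counted once, however many requirements were missed. A minimal sketch with toy data (the recipient IDs and set sizes below are illustrative, not GAO's actual matched records):

```python
# Overlap-aware tally: a recipient who fails several requirements is
# counted once. IDs and sizes are toy data; GAO's analysis matched
# Form W-2, SSA, and tax-return records.

def noncompliance_rate(recipients: set, *failure_sets: set) -> float:
    """Share of recipients in the union of all failure sets."""
    noncompliant = set().union(*failure_sets) & recipients
    return len(noncompliant) / len(recipients)

recipients   = set(range(100))     # 100 AEITC recipients (toy data)
invalid_ssn  = set(range(0, 20))   # no valid SSN on the Form W-2
nonfilers    = set(range(10, 48))  # did not file a required return
misreporters = set(range(40, 80))  # misreported the AEITC amount

print(noncompliance_rate(recipients, invalid_ssn, nonfilers, misreporters))
# prints 0.8 with this toy data, echoing the "as many as 80 percent" figure
```

Summing the three failure rates directly would overstate noncompliance, since taxpayers in the overlaps (such as nonfilers who also lacked a valid SSN) would be counted twice.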
For purposes of this report, invalid SSNs include instances where the SSN did not match SSA’s records (i.e., the number was never assigned by SSA) and the SSN/name combinations reported on the Form W-2 did not match SSA records. Some of these individuals were likely eligible for the AEITC. For example, a name/SSN mismatch could include instances when a woman who receives the AEITC marries and changes her name with her employer but not with SSA. This could result in the employer issuing a Form W-2 in the new name, but IRS and SSA only identifying her by the former name. Individuals who file a federal tax return are required to include a valid SSN on their return. An individual who provides an invalid SSN on the tax return is not compliant in meeting AEITC requirements and may also violate the Social Security Act. Such an individual is also required to provide a valid SSN to their employer for income tax withholding purposes and for purposes of certifying eligibility for the AEITC and could be subject to a penalty for failure to do so. Further, if the individual does not file a valid SSN, IRS is unable to assess the recipient’s federal tax liability and SSA cannot credit the recipient for money withheld for Social Security purposes. Also, because taxpayers often do not report receipt of the advance on their tax return, IRS cannot determine whether the taxpayer owes money or deserves a refund. All AEITC recipients are required to file a federal tax return, regardless of the amount of their income, which is generally the primary basis for determining whether a return is required to be filed. Table 5 shows that between 36 and 40 percent, about 200,000 AEITC recipients, did not file a required federal tax return each year. Collectively, these individuals received between $42 million and $50 million of AEITC benefits. 
About 56,000 to 60,000 (about 30 percent) of the approximately 200,000 individuals who did not file the required tax return had an invalid SSN on the Form W-2 each year, as shown in table 6. Having a valid SSN is another AEITC requirement, discussed previously, which means these individuals were noncompliant or made an error with at least two AEITC requirements. There are several reasons why a significant number of AEITC recipients might not have filed a federal tax return. For example, depending on their filing status, age, and type of income they receive, recipients may not have had a filing responsibility other than for the AEITC and they may not have remembered or understood they must file a return. In addition, AEITC recipients may not have filed because they were not initially eligible or they became ineligible for the AEITC because of a change in their personal circumstances, and filing would require them to pay back the AEITC they received. When individuals are required to file a federal tax return and do not, IRS cannot readily identify whether the individual was eligible for the advance or whether they owed IRS any of the amounts they received. Conversely, by not filing a federal tax return, some individuals did not receive additional EITC monies that they could only receive had they filed. All AEITC recipients are required to report on their federal tax return the amount of AEITC they received according to the Form(s) W-2. Reporting this amount allows the IRS to determine whether the taxpayer received too much AEITC, and owes money back to the IRS, or whether the taxpayer is entitled to additional amounts of the EITC. Of the approximately 60 percent (about 300,000) AEITC recipients who filed a federal tax return, two-thirds misreported the amount they received in tax year 2002 through 2004, as shown in table 7. Misreported means that the total amount of AEITC reported on the Form W-2 does not match the AEITC amount reported on the federal tax return. 
Approximately one-third of the federal tax returns correctly matched the Form(s) W-2 AEITC amount. Of those that misreported, the vast majority did not report receiving any AEITC. Taxpayers may not report the amount of AEITC they received because they either forget or do not know they are required to do so. Underreporting can occur when there is a computation error involving multiple Forms W-2, a taxpayer disagrees with the amount reported on the Forms W-2, or there is willful noncompliance. IRS officials in the AUR program pursue some AEITC cases where taxpayers either underreport or do not report receipt of the AEITC. These procedures are discussed in detail in a following section. Due to the high number of mismatches, many of the AEITC recipients who also claimed the EITC likely received excess benefits. For example, of the 222,691 taxpayers who did not report receipt of the AEITC on their tax return in tax year 2002, almost half went on to claim the EITC. We determined that those taxpayers received nearly $22 million in excess AEITC benefits. For the 3 years, 2002 through 2004, taxpayers received a total of about $64 million in excess AEITC benefits. Additional noncompliance or errors existed with other program requirements, such as receiving excess AEITC. Details and demographic information on the noncompliant individuals are in appendix V. IRS’s Submission Processing is responsible for receiving, processing, and archiving the nation’s federal tax returns, payments, and information returns. In the context of AEITC, Submission Processing attempts to catch mismatches between both paper and electronic tax returns and Forms W-2 (see fig. 4). The share of tax returns reporting an AEITC amount that were filed on paper declined from 32 percent in tax year 2002 to 22 percent in tax year 2004. 
If a paper tax return has an entry on either the AEITC or EITC lines on the Form 1040, Submission Processing tax examiners are required to ensure that the AEITC amount reported on the return matches the amount in box 9 of the Form(s) W-2, which records the amount of AEITC paid by the employer to the employee. If the amounts on the AEITC line of the tax return and the Form(s) W-2 match, the tax examiner takes no further action. If the amounts differ (e.g., the tax return reports a lesser amount than is reflected on the Form(s) W-2), the examiner is required to adjust the entry on the return to equal the total AEITC amount from the Form(s) W-2. All paper returns then go to staff who enter data from the Form 1040 into an electronic database. After an examiner makes an adjustment, IRS sends a letter to the taxpayer explaining that an adjustment was made based on the mismatch between the tax return and the Form(s) W-2. IRS sent 282 and 220 such letters in tax years 2003 and 2004, respectively. Submission Processing’s role is to ensure that the return amount is consistent with the Form W-2, not to determine which of the differing numbers accurately reflects the amount of AEITC actually paid to the employee. If the taxpayer disagrees with the adjustment, e.g., believes that the amount on the Form(s) W-2 is incorrect, the taxpayer can dispute it. The share of returns reporting an AEITC amount that were filed electronically rose from 68 percent in tax year 2002 to 78 percent in tax year 2004. Submission Processing runs a computer check to find any mismatches between the amounts on the electronic tax returns and the electronic Form(s) W-2. When mismatches are found, Submission Processing rejects the return and sends it to the taxpayer or preparer to correct and retransmit to IRS. In most tax preparation software, once a user enters an amount from the Form(s) W-2, the amount is automatically transferred to the appropriate line of the tax return. 
Thus, if the user errs in entering the proper amount from the Form W-2, the software would enter this erroneous number on the tax return. As a result, Submission Processing rarely identifies AEITC tax return/Form(s) W-2 mismatches because the original Form(s) W-2 from the employer(s) is rarely included in electronic filings. IRS rejected 172 electronically filed returns reporting AEITC in tax year 2004 and 147 in tax year 2003. Adding these mismatches to the paper return mismatches, Submission Processing found 392 mismatches in tax year 2004 and 429 in tax year 2003. Next, Submission Processing sends the return through its Error Resolution System (ERS) when the AEITC amount on the Form 1040 meets certain selection criteria. When the AEITC amount exceeds the selection threshold, a tax examiner matches the Form 1040 AEITC amount to the Form(s) W-2. If they match, no action is taken, and the return is posted in IRS’s Masterfile, the agency’s central repository for taxpayer information. If there is a mismatch, IRS adjusts the return to match the Form(s) W-2, and IRS sends a letter to the taxpayer describing the error and the ERS correction. Taxpayers who disagree with the change can dispute it. The ERS process thus serves as a back-up check in case the earlier physical or electronic processes missed these AEITC mismatches. ERS examined 3,380 tax returns with AEITC in tax year 2004. IRS was not able to tell us how many of these returns involved mismatches. Prior ERS selection criteria would not have identified some returns when the Form(s) W-2 reported AEITC amounts that were above the legal maximums but below the criteria’s thresholds. While interviewing Submission Processing officials, we suggested that the criteria be modified and associated with filing status. IRS made such changes to the criteria, effective January 2, 2007. 
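The paper-return reconciliation rule described above (compare the Form 1040 AEITC entry with the total of box 9 across all Forms W-2; on any mismatch, adjust the return to the Form(s) W-2 total and send the taxpayer a letter) can be sketched in a few lines. This is an illustrative sketch only, not IRS code; the function name and data shapes are hypothetical.

```python
# Illustrative sketch (not IRS code) of the Submission Processing
# reconciliation rule: compare the AEITC amount on the return with the
# total of box 9 across all Forms W-2 and, on a mismatch, adjust the
# return to the Form(s) W-2 total and trigger a taxpayer letter.

def reconcile_aeitc(return_aeitc, w2_box9_amounts):
    """Return (reconciled amount, whether an adjustment letter is needed)."""
    w2_total = sum(w2_box9_amounts)
    if return_aeitc == w2_total:
        return return_aeitc, False   # amounts match: no further action
    return w2_total, True            # adjust to the Form(s) W-2 total; send letter

# A taxpayer with two Forms W-2 who reported a lesser amount:
amount, send_letter = reconcile_aeitc(600.00, [500.00, 300.00])
# amount is 800.0 and send_letter is True
```

As the report notes, this rule verifies only internal consistency between the return and the Form(s) W-2; it does not establish which figure reflects the AEITC actually paid to the employee.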
Submission Processing’s procedures have limited effectiveness in verifying that AEITC recipients have a valid SSN because, as previously noted, many individuals do not file the required tax return and, of those who do, most do not report receipt of the advance. For both paper and electronically filed returns, Submission Processing checks IRS’s Data Master (DM-1) file, which is a database that includes, among other things, all validly issued SSNs and the individual’s name associated with each SSN. Submission Processing rejects electronically filed tax returns with an invalid SSN on the Form 1040, including those from taxpayers who received the AEITC, and sends the tax return back to the taxpayer for correction before processing. For returns filed on paper with an invalid SSN on the Form 1040, Submission Processing processes the return, but disallows certain credits and exemptions, such as the EITC. If the taxpayer’s SSN is invalid and the AEITC is claimed on a paper return, Submission Processing processes any AEITC reported. Since the AEITC is reported as tax, it will either offset all or part of any refund due or, if no refund is due, require the taxpayer to pay back the full AEITC amount. Thus, of the approximately 514,000 individuals who received the AEITC each year from tax years 2002 through 2004, Submission Processing’s procedures would apply to about 118,000 taxpayers—those who filed and reported receipt of the advance. For nonfilers, Submission Processing cannot verify SSNs because there are no tax returns—the basis of its examination. While Submission Processing performs an SSN verification for those who file but do not report receipt of the AEITC, that check is not effective for the advance because the advance is not reported on the return. After a return has been processed, the next point when IRS might identify and correct AEITC noncompliance is when it matches tax returns to other documents it receives. 
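The two-part validity check used throughout this report (an SSN that was never issued versus a name/SSN combination that does not match the issuing records) can be illustrated with a toy stand-in for the DM-1 file. All data, names, and the function below are hypothetical.

```python
# Hedged sketch of the two-part SSN validity check: an SSN is treated
# as invalid if it was never issued or if the name/SSN combination
# does not match the issuing records. DM1 is a toy stand-in for IRS's
# DM-1 file; all entries are fabricated for illustration.

DM1 = {
    "123-45-6789": "DOE JANE",
    "987-65-4321": "SMITH JOHN",
}

def ssn_status(ssn, name):
    issued_name = DM1.get(ssn)
    if issued_name is None:
        return "never-issued"      # number was never assigned by SSA
    if issued_name != name.upper():
        return "name-mismatch"     # e.g., name changed with employer but not SSA
    return "valid"

assert ssn_status("123-45-6789", "Doe Jane") == "valid"
```

A "name-mismatch" result does not by itself show ineligibility; as the report's marriage example illustrates, some mismatched individuals were likely eligible for the AEITC.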
Overall, each year AUR identifies about 14 million discrepancies between taxpayer income and deduction information submitted by third parties and amounts reported on individual income tax returns. AUR has the resources to work on only a fraction of these cases each year and uses criteria, such as revenue collection potential, for case selection. For AEITC, AUR compares the amount of AEITC that employers report on a taxpayer’s Form W-2 to the amount reported on an individual’s tax return (see fig. 5). AUR receives this information in separate databases from SSA and undertakes these comparisons in August and December of each year—well after Submission Processing has completed its review of the returns and associated Forms W-2. Generally, AUR does not take action when the AEITC amounts match the amounts reported on the return or when the discrepancy does not meet IRS case selection criteria. For example, for tax year 2003, our data show that there were about 209,000 tax returns where the AEITC amount did not match the amount on taxpayers’ Forms W-2, primarily because the returns failed to report any AEITC or underreported the amount. AUR identified about 25,000 of those cases in tax year 2003 as potential cases on which to work. In tax year 2003, Submission Processing identified 429 instances where the amount of AEITC reported on the tax return was less than the amount on the Forms W-2—far fewer than the approximately 25,000 mismatches AUR detected that year. The mismatches involving individuals who filed paper returns and reported receiving the AEITC should have been caught by Submission Processing because its examiners are supposed to match the amount reported on the return with the Form(s) W-2. However, Submission Processing examiners review huge volumes of returns, and identifying the relatively few with AEITC can be a challenge. 
Because AUR does not break down its AEITC cases by filers who underreport and filers who fail to report any AEITC amount, or by filers of paper versus electronic returns, it is not possible to determine exactly how many AUR cases should have been caught by Submission Processing. AUR officials told us that it would be difficult for them to routinely break out information in this way because doing so would require new computer programming, and the budget for new programming requests was reduced for fiscal year 2007. In accordance with return processing procedures, if Submission Processing identifies an underreporter, it reduces the size of the refund that will be sent to the taxpayer. Thus, using Submission Processing, rather than AUR, better protects revenue because the erroneous EITC amounts are never paid to the taxpayer. If AUR works on the case, it assesses the tax owed, but further IRS action is required to collect the actual overpayment—through a future refund offset or a current effort to collect the refund paid erroneously. Although catching AEITC underreporting in Submission Processing would better protect revenue than AUR processes do, the benefits of improving Submission Processing to catch AEITC errors may already be small and could get smaller. This is because the share of EITC returns filed electronically exceeds 68 percent and has been growing and, as previously noted, Submission Processing identifies few mismatches for electronically filed returns because the electronic Form W-2 simply reflects the user’s own entries. In its July 2006 report, the Treasury Inspector General for Tax Administration recommended that IRS reemphasize the use of the current AEITC review procedures for paper returns, but agreed with IRS that it could not implement additional procedures for electronic returns because IRS is unable to fully verify the accuracy of the Form(s) W-2 during electronic processing of returns. 
AUR worked on about twice as many AEITC cases in tax year 2003 as in the previous year. The total amount of tax assessed in these cases in tax year 2003 was more than three times the amount in the previous year, and the amount assessed per case increased about 71 percent, from $555 to $947, which is near the average AUR assessment of about $1,000. These trends were either holding steady or improving for the 84 percent of tax year 2004 cases for which we had data when we completed our review. The number of cases in which the taxpayer fully agreed with the assessment rose from about 29 percent in tax year 2002 to about 41 percent in tax year 2003. The number of cases in which IRS withdrew the assessment, generally after the taxpayer provided documentation that it was erroneous, dropped from about 15 percent in tax year 2002 to about 4 percent in tax year 2003. As previously noted, from tax years 2002 through 2004, about 40 percent of the individuals who had a Form W-2 reporting that they received AEITC did not file a tax return as required by law. All nonfiler cases, including AEITC nonfiler and other lower dollar nonfiler cases, are eligible to be worked on by IRS’s Wage and Investment and Small Business/Self Employed divisions. IRS’s policy dictates that some lower dollar cases and even some cases in which nonfilers are due a refund may be worked on to ensure that all kinds of cases have the possibility of being worked on. Still, a key criterion that IRS uses to determine cases on which to work is potential revenue or the anticipated net balance due. The higher this amount, the more likely the case will be worked on. IRS does not track the total number of nonfiler cases worked on or the kinds of income or credits taken by the nonfilers whose cases were worked on. Because IRS selects cases based on these criteria, IRS officials said they worked on few AEITC nonfiler cases. 
Beyond AEITC nonfiler cases potentially being worked on by the Wage and Investment and Small Business/Self Employed divisions, these cases are also eligible to be worked on by other IRS programs. For example, the automated collection system involves calls from IRS staff to taxpayers asking them to file a return or explain why they believe filing is not required. IRS’s automated substitute for return program involves collection staff preparing a tax return on behalf of a nonfiler based on third-party and other information that IRS has available. Once a return has been created, IRS can act to collect any taxes due. During 2005 through 2006, IRS conducted a test to determine whether receipt of AEITC should be a criterion to determine nonfiler cases on which to work. The test involved working on cases from tax years 2000 through 2003 for 433 taxpayers drawn from a sample of taxpayers who did not file a return in tax year 2002, but did receive the AEITC that year, and whose income was between $35,000 and $50,000 for that year. The cases were worked on in the same way that IRS works on other nonfiler cases. In interviews with IRS officials about the AEITC nonfiler test, we found that the agency’s test plan lacked documentation and detail, such as test justification, likely costs and benefits, and implementation details. The plan also lacked a rationale for some important decisions underlying the test, and some changes implemented after the test began were likewise not well documented. In our report on three tests conducted by IRS in 2004 to address leading sources of EITC errors, we noted that the lack of such documentation hindered monitoring and oversight and did not foster a common understanding of the tests among management and staff. One of our recommendations was that the rationale for key decisions on such tests be documented, and the Commissioner of Internal Revenue agreed with that recommendation. 
Because IRS had not completed its evaluation of the test as of June 2007, the agency has not decided whether AEITC should be a criterion to determine nonfiler cases on which to work. Regardless of the agency’s decision, however, it is unlikely to significantly reduce the number of AEITC nonfilers because any increase in cases worked on would likely represent only a small number of such nonfilers, relative to both the total number of AEITC nonfilers and all nonfiler cases worked on by IRS. All AEITC cases are eligible to be worked on in IRS’s collection program; however, the program generally does not work on cases involving AEITC because the amounts involved fall below the selection criteria that determine which cases to pursue. Collection does not keep track of the specific types of income and credits claimed by the individuals whose cases it handles. Still, collection officials stated that they worked on few, if any, cases involving the AEITC. Instead, these cases go into “deferral status,” which means any refund the taxpayer is due will be reduced until the balance due has been paid off. Like other taxpayers with an outstanding tax liability, these taxpayers would also get a notice stating how much they owe. Collection had an effort under way to work on cases under its selection criteria by setting up automatic monthly installment arrangements and notifying taxpayers that they were expected to begin paying what they owed. Officials said that too many taxpayers ignored the arrangements, and as a result, the program was determined to be cost-prohibitive. The program was therefore discontinued. IRS annually audits about 500,000 of the more than 21 million tax returns that claim the EITC. Because only about 3 percent of the EITC recipients potentially eligible for the advance receive it, few of the audited returns are likely to involve the AEITC. 
When IRS audits tax returns claiming the EITC, an examiner is required to determine if the taxpayer was eligible for it, whether the taxpayer took the AEITC and, if so, whether the taxpayer reported the correct amount. If the examiner determines that the taxpayer was not eligible for the EITC, then the AEITC is disallowed. In addition, Criminal Investigation, which investigates potential criminal violations of the tax code and related financial crimes, has identified six cases associated with the AEITC since 2001. Five of the six cases involved refund fraud based on individuals creating fake businesses in order to obtain the AEITC. The other case involved a business owner attempting to evade employment tax by falsely signing up employees for the AEITC, but not including the credit in their paychecks. Because IRS’s enforcement resources cannot fully cover all areas of noncompliance, including AEITC noncompliance, the agency has tried to cost-effectively increase voluntary compliance in some areas that involve relatively small amounts of money by mailing taxpayers soft notices. Soft notices are letters that ask taxpayers to comply with a certain requirement in the future or, if the notice informs them that they are not entitled to a benefit that they received, to file an amended return. While IRS officials said there have not been any soft notices specifically targeting AEITC noncompliance, we reviewed the results of three tests that involved taxpayers who were not filing accurately—many of whom would not otherwise be subject to an enforcement action. Additionally, IRS has modified one soft notice test, for which it prepared cost estimates. Each of the completed soft notice efforts shows some benefit in improving compliance; however, such notices may be less effective for AEITC recipients. 
The First Soft Notice Test: The first soft notice—called the “Duplicate TINs” test—involved different taxpayers claiming the same, or a duplicate, TIN for a dependent or qualifying child in order to obtain an exemption, the EITC, or child tax credit benefits. In tax year 2002, IRS identified a total of about 2.4 million taxpayers who used a duplicate TIN. IRS sent soft notices to about 820,000 taxpayers. In November 2005, IRS reported that after receiving the soft notice, 11.4 percent of the population amended their tax year 2002 returns. Other results focused on taxpayers who received the notice for tax year 2002 and whether they repeated the use of a duplicate TIN on their tax years 2003 and 2004 tax returns. The results were as follows: 84.9 percent did not repeat the behavior in either of the ensuing years; 7.7 percent repeated the behavior in 2003, but not again in 2004; 4.0 percent did not repeat the behavior in 2003, but did so in 2004; and 3.4 percent repeated the behavior in both ensuing years. Although IRS did not report the costs associated with this test, it did estimate the revenue that would have been lost without the soft notices. IRS reported that it protected a total of $218.3 million using the Duplicate TINs test. Due to limitations in the research design, such as not using a control group, IRS reported that it was uncertain whether these results were solely influenced by the receipt of a soft notice or if other factors may have contributed to the change in taxpayer behavior and subsequent revenue protected. IRS no longer considers this a test and continues to send out soft notices for Duplicate TINs issued each year. The Second Soft Notice Test: The second soft notice test—called the “AUR Soft Notice” test—involved filers who underreported small amounts of certain categories of income, such as wages, unemployment insurance, or sales of securities. 
In December 2004 and January 2005, IRS sent 500 soft notices to randomly selected taxpayers who underreported income on their tax year 2003 returns. IRS also randomly selected a control group of 500 taxpayers who underreported small amounts, but did not send notices to this group. Outside consultants whom IRS hired to determine the effectiveness of the test reported in October 2005 that (1) soft notices appeared to have a beneficial result in reducing repeat behavior and (2) IRS resources were not overburdened by the notices. Their conclusion was based on several results. First, after receiving the soft notice, 71 of the 500 taxpayers (14.2 percent) filed an amended return. Second, only 33 taxpayers (6.6 percent) who received the notice repeated the underreporting the following year. In contrast, 174 taxpayers in the control group (34.8 percent) repeated their underreporting. Third, the consultants did not consider IRS resources to be burdened because only 45 of 500 taxpayers (9 percent) called IRS with questions. Similarly, the study found there was limited undeliverable mail—only for 3 taxpayers (0.6 percent). An additional test was conducted for fiscal year 2006 and had similar positive results. IRS is in the process of determining whether it will send out soft notices for AUR in the future. The Third Soft Notice Test: The third soft notice test—called the “Dependent Database” test—involved cases selected for three EITC-related issues, including qualifying child, filing status, and Schedule C, “Profit or Loss from Business,” errors. IRS found that 2.4 million taxpayers appeared to have had errors on their tax returns. In November 2005, IRS selected about 12,500 taxpayers to determine the impact of soft notices on taxpayers’ behavior when filing their tax year 2005 return. About another 12,500 taxpayers were selected as a control group not to receive the notice. 
In its October 2006 report, IRS found that, although there was a difference between the test group of taxpayers who received the soft notice and the control group that did not, the direct relationship between receiving a soft notice and taxpayers’ subsequent filing behavior was weak. Specifically, the report cited that 88 percent of the test group and 86 percent of the control group changed their subsequent tax year filing behavior, including not breaking the same rule, amending the prior year return, or not filing a 2005 return. Specific noteworthy results were: 84 percent of the test group and 83 percent of the control group filed a return in the subsequent year; 46 percent of the test group and 44 percent of the control group broke no rules at all; 26 percent of the test group and 25 percent of the control group broke a different rule; 12 percent of the test group and 14 percent of the control group repeated their behavior the next year by breaking the same rule; and 1 percent of the test group and 0.4 percent of the control group amended their prior-year return. Also in October 2006, IRS modified the Dependent Database test in both the Wage and Investment and Small Business/Self Employed divisions to target notices to another population, i.e., noncustodial person(s) claiming a child. IRS prepared a preliminary cost analysis for this soft notice test based on a sample of 300,000 taxpayers. It estimated the total costs of sending out 300,000 soft notices to be about $533,000, which included $449,000 for the labor to process amended returns and answer telephone calls and $84,000 for mailing. Additional information, including the results of this test, was not available as of mid-June 2007. 
Although IRS did not develop criteria for these soft notice tests about what would constitute a success, such as a self-correction percentage, an IRS official knowledgeable about the tests said the agency considers the three completed tests a success, despite a few shortcomings. The first and second tests were considered successes because they led to noteworthy changes in taxpayer behavior. The third test was considered a success because, although taxpayer behavior did not change significantly, officials considered it a cost-effective way to have an enforcement presence among these taxpayers. Officials thought that a soft notice test could be beneficial for reducing AEITC noncompliance as well, particularly since the amount of money involved with AEITC is low and the noncompliance might not otherwise be addressed by IRS. Although soft notices may have some potential to address certain AEITC noncompliance, characteristics of the AEITC population might make such notices less effective or more costly than for the test populations for two reasons. First, AEITC turnover is high. In each of tax years 2002 through 2004, more than half of the individuals were first-time recipients. Moreover, about 73 percent of first-time AEITC recipients in tax years 2002 and 2003 did not elect the AEITC the following year and, thus, would not repeat noncompliance related to the AEITC. Second, almost 40 percent of AEITC recipients do not file a tax return, which means that IRS may not have a current address for those taxpayers. If IRS were to send soft notices to AEITC nonfilers using the last known address, a significant number of individuals may no longer reside there. This means IRS might not be able to locate them or it might spend additional resources trying. More than 100,000 AEITC recipients had invalid SSNs and reported receiving millions of dollars in total benefits in each of tax years 2002 through 2004 without any substantial check of their eligibility. 
Because of the low-dollar amounts involved per taxpayer, IRS worked on only a small number of these cases. IRS does not have an up-front control or procedure in place to require employers to verify that an employee seeking the AEITC has a valid SSN, which could address this noncompliance. Two federal on-line services could potentially be used to implement such controls, although IRS and SSA officials raised several concerns about implementing such a requirement. The TIN Matching service and SSNVS are federal on-line services that some private organizations may use voluntarily to verify whether federal records show that the name and SSN provided by an individual match. TIN Matching is a pre-return filing service offered by IRS that allows those payers whose income is subject to backup withholding, who submit any of six Form 1099 information returns (e.g., financial institutions), to match the TIN of the 1099 payee against IRS records. It is one of several e-service products offered by IRS. The goal of TIN Matching is to improve the accuracy of Form 1099 data and reduce subsequent inappropriate penalties and error notices. SSNVS is a service offered by SSA that allows registered users (i.e., employers or, in certain instances, their third-party representatives) to verify the names and SSNs of employees against SSA records. The AEITC is outside the scope of SSA’s responsibilities, and SSNVS is a voluntary service that is currently used only to increase the accuracy of wage reporting on Forms W-2. IRS and SSA officials identified a number of challenges that the agencies and employers may face if the TIN Matching service or SSNVS were used to verify AEITC eligibility. Accuracy: Both the TIN Matching service and SSNVS are based on SSA records and have high rates of accuracy in terms of determining whether submitted names and SSNs match. 
Still, IRS and SSA officials had concerns about whether these rates were high enough for purposes of verifying AEITC eligibility. IRS officials told us that the TIN Matching service is about 98 percent accurate. A December 2006 report by the SSA Office of Inspector General sampled more than 2,000 determinations by SSA’s Numident file—the database upon which SSNVS is based—and found a name and SSN match accuracy rate of more than 99 percent. Still, SSA officials said that if SSA records were used to verify AEITC eligibility, they might want to subject an employee’s name and SSN to more “routines”—procedures such as correcting for transposed numbers that SSA uses to increase the likelihood of a match—than is currently done by SSNVS. IRS officials told us that when the agency was informally considering a proposal to charge a fee for using the TIN Matching service, several IRS officials knowledgeable about the database opposed the idea because they did not believe it was accurate enough to justify charging users for it. Accuracy concerns, however, do not preclude IRS or SSA from using SSA records to make an initial determination about whether individuals who may claim credits or benefits have demonstrated that they are entitled to them. For example, as previously noted, IRS rejects electronically filed tax returns with a name/SSN mismatch and returns them to the taxpayer for correction. For paper returns reporting AEITC receipt that have a name/SSN mismatch, IRS processes the returns, but since the AEITC is reported as tax, it will either offset all or part of any refund due or, if no refund is due, require the taxpayer to pay back the full AEITC amount. 
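One of the matching "routines" mentioned above, correcting for transposed numbers, could work by testing every adjacent-digit transposition of a submitted SSN before declaring a mismatch. The sketch below is an assumption about how such a routine might operate, not SSA's actual algorithm; all names and data are hypothetical.

```python
# Illustrative sketch of a transposed-digit matching routine of the
# kind SSA officials described: before declaring a mismatch, also try
# every variant of the submitted SSN with two adjacent digits swapped.
# This is an assumption for illustration, not SSA's actual algorithm.

def transposition_variants(ssn):
    """All variants of the SSN with one adjacent pair of digits swapped."""
    digits = ssn.replace("-", "")
    variants = []
    for i in range(len(digits) - 1):
        swapped = list(digits)
        swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
        variants.append("".join(swapped))
    return variants

def match_with_routine(ssn, issued):
    """Match the SSN against a set of issued numbers, tolerating one
    adjacent-digit transposition."""
    digits = ssn.replace("-", "")
    return digits in issued or any(v in issued for v in transposition_variants(ssn))

issued = {"123456789"}  # toy stand-in for SSA's issued-SSN records
assert match_with_routine("123-45-6789", issued)   # exact match
assert match_with_routine("213-45-6789", issued)   # transposed pair still matches
```

Applying more such routines raises the likelihood of a match at the cost of occasionally matching a genuinely different number, which is one reason the officials framed this as a design choice rather than a fixed rule.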
Similarly, SSA instructs its claims representatives that if the identity of a claimant for Social Security benefits or Supplemental Security Income benefits remains questionable because the individual has not provided sufficient proof to establish his or her identity, the claim will be denied even if other factors of eligibility are met. Additional employer responsibilities: IRS and SSA officials said a major concern about employers using either service was whether a name/SSN mismatch would create new responsibilities for employers beyond informing the employee and denying the AEITC. IRS officials also expressed concern that requiring employers to use the services for AEITC would discourage them from promoting the AEITC and perhaps encourage them to dissuade employees from seeking it. Under current procedures for using the TIN Matching service and SSNVS, employers are not required to take any action based on the results they receive. IRS officials expressed concern that a June 2006 regulation proposed by the Department of Homeland Security could expressly list employer receipt of a “no match” letter from SSA as possible evidence that the employer knew or should have known that it was employing an individual not authorized to work in the United States. Under the proposed regulation, if the employer fails to take reasonable steps to resolve the discrepancy after receiving the letter, the Department of Homeland Security may find that the employer had such knowledge and assess civil monetary penalties against the employer. The proposed Department of Homeland Security regulation describes “safe harbor” procedures that the employer can follow in response to the letter. Those steps include the employer promptly checking its records for clerical errors and obtaining required documentation by working with the employee, SSA, and the Department of Homeland Security. 
If the name/SSN mismatch cannot be resolved, the employer would have to choose between terminating the employee or facing the risk that the Department of Homeland Security may find that the employer knew that the employee was not authorized to work and, by continuing to employ the individual, violated the law. While the proposed Department of Homeland Security regulation covers only no match letters, IRS and SSA officials expressed concern that the regulation could be expanded to include name/SSN mismatches disclosed by the TIN Matching service or SSNVS. Under current law, existing and prospective users who are concerned that matching could be used for purposes beyond improving the accuracy of Form 1099 data or wage reporting can voluntarily cease using, or not start using, the systems. IRS and SSA officials noted, however, that employers would no longer have this choice if they were required to use one of these services to verify an AEITC applicant’s SSN. In addition, existing Department of Homeland Security guidance for employers on the interaction between antidiscrimination laws and legal requirements for verifying employment eligibility states that employers must treat all employees in the same manner. Employers cannot set different employment eligibility verification standards or require that different documents be presented by different groups of employees. If mandatory verification of AEITC applicants’ SSNs created additional employer responsibilities under employment eligibility verification requirements, this result could be inconsistent with efforts to ensure that verification procedures apply to all employees. SSA’s position on SSNVS is that a name/SSN mismatch does not make any statement about an employee’s immigration status and should not be a basis for taking any adverse action against the employee. SSA officials also expressed reservations about SSNVS results being used to terminate employees. 
Capacity and user access: IRS and SSA officials said changes to the capacity and user access of the TIN Matching service or SSNVS would either be unnecessary or minor if employers used them to verify SSNs of employees seeking AEITC, although the officials said their agencies might favor creating a new service for this purpose instead. They told us that their systems have the capacity to handle the increased volume of requests that would result from this expanded use. Because both services already are used to verify SSNs, IRS and SSA said any changes to how the services are accessed and used would also probably not be extensive. Both SSNVS and TIN Matching are Web-only services. To use one of the services, the employer designates one or more employees or third-party representatives to register on behalf of the employer. The initial registration for both the TIN Matching service and SSNVS is handled in a similar way and may take as long as 4 weeks: Registrants go to the agency's Web site and provide information about themselves and their employers on a form, which they send electronically to the agency. The agency then sends the registrant's employer a letter containing a unique code and directing the employer to provide that code to the registrant. After receiving the code, the registrant can go back to the agency Web site and input the code to activate use of the service. TIN Matching registrants who do not use the service for 6 months must reregister, primarily to receive a new password and update any of the information provided during registration. SSNVS requires registrants to change their password once a year to keep it from expiring, which also requires reregistration. IRS and SSA officials told us that the great majority of users of their respective services generally report that the services are not difficult to use, for either the registration process or ongoing use.
IRS officials said users access TIN Matching voluntarily and that some, particularly from smaller organizations, appear more likely to find it burdensome than users from larger organizations. We found that more than half the employers that provided the AEITC in tax years 2002 through 2004 were small businesses or self-employed and about one-quarter were tax-exempt and government entities. In addition, despite IRS's outreach efforts to large employers, fewer than one-fifth were large and midsize employers (see table 20 in app. III). IRS officials said TIN Matching service users who found the registration process burdensome were generally those who reported to IRS that they were not used to filling out forms online and creating and using passwords. The officials also said that some TIN Matching users reported being uncomfortable having to provide personal information to register. SSA officials said SSNVS is used mostly by larger employers, and a relatively small number of them reported that they found the service burdensome. SSA officials did say, however, that SSA received about 89,000 calls through June 2007 from individuals about registration. An SSA official also said that small business representatives with whom he had recently spoken expressed frustration with the overall number of tasks that the federal government was already requiring them to perform and, therefore, might be reluctant to verify SSNs. IRS officials also told us that the TIN Matching service is not programmed to track users, although such tracking likely would be useful to IRS for enforcement purposes. Still, IRS officials added, the service could be modified to track employers that used the service for AEITC purposes and report on the results. SSNVS tracks its users to determine whether the service is being properly used.
Additional resources: IRS and SSA officials said that if employers were required to begin using their respective services to verify the SSNs of employees seeking the AEITC, the agencies would need additional resources. For example, officials from both agencies cited the need to handle questions from new users, particularly in the first year, when all individuals seeking the AEITC would be required to have their names and SSNs matched. When we told IRS officials that our data showed that about 50,000 employers had at least one AEITC recipient, they said such an increase in the number of registrants could require IRS to increase the number of staff available to answer user questions. But they said they could not estimate how many more staff would be necessary. IRS officials also said they were trying to make the reregistration process easier because they received a substantial number of calls from users who needed to reregister. This would be important for AEITC-related use of TIN Matching because employers with only one or two employees seeking the AEITC would need to use the service only once or twice a year, making it likely that they would have to reregister. SSA officials told us that if the agency decided, or was directed, to make its SSN records available for the purpose of verifying AEITC eligibility, SSA would have to devote additional resources to conduct a comprehensive assessment of the changes necessary for SSNVS to properly achieve this goal, including possibly creating a different service for assessing AEITC eligibility and buying a new database server to handle the increased volume of users. Again, SSA officials could not estimate how many more staff would be necessary. New federal legislation: Enactment of federal legislation would be needed for employers to begin using the TIN Matching service to verify the SSNs of their employees seeking the AEITC.
In 2000, the Department of the Treasury recommended to the Congress that TIN verification be expanded to include other payers subject to an IRS reporting requirement, such as employers who file Forms W-2. It is uncertain whether IRS could require employers to use SSNVS to verify the SSNs of employees seeking the AEITC. SSA officials said they would need to determine whether SSA's disclosure of SSN data is compatible with the reason it collected the information and, if so, whether verifying SSNs via SSNVS for purposes of AEITC eligibility is consistent with SSA's legal obligations. Both Treasury and SSA officials said their agencies would strongly prefer enactment of legislation before employers, or anyone else, were required to verify the SSNs of employees seeking the AEITC. Employee appeal of mismatch: One difference between using the TIN Matching service and SSNVS would occur when an employee claimed that a name and SSN mismatch was inaccurate. IRS and SSA officials said employees who questioned an SSN mismatch would presumably contact IRS, which would send them to SSA to resolve the issue because the TIN Matching service is based on SSA records. IRS officials said that, in contrast, employees questioning a name and SSN mismatch generated by SSNVS would presumably go directly to SSA. Agency mission: SSA officials also said that verifying eligibility for the AEITC is most appropriate for IRS because it is a tax administration issue and is therefore outside the scope of SSA's mission. However, it is not unusual for agencies to assist other federal agencies in carrying out their missions. SSA officials also said that, regardless of which service was used, IRS would have full administrative responsibility for overseeing a program for employers to verify AEITC applicants' SSNs. IRS does not currently require employers to submit a Form W-5 when an employee requests receipt of the AEITC.
Several advantages and disadvantages exist if IRS creates a Form W-5 database to use in monitoring AEITC noncompliance issues. IRS could require employers to submit a Form W-5 when an employee requests receipt of the AEITC. In turn, IRS could use the Forms W-5 to create a database to monitor the AEITC. The database could be used to ensure that the SSN provided on the Form W-5 is valid and that it matches the individual's name. Such a check could have prevented more than 100,000 individuals from receiving between $37 million and $39 million each year in AEITC to which they were potentially not entitled because they did not meet the valid SSN requirement. Such a database could also allow IRS to know which individuals received the AEITC and provide the agency with an opportunity to send recipients a notice at the start of the next filing season reminding them to file a federal tax return. A reminder to file notice could likely reduce noncompliance among up to about 200,000 individuals who received between $42 million and $50 million each year in AEITC without filing a federal tax return. Similarly, IRS officials could use a W-5 database to verify other AEITC requirements, such as ensuring that each recipient has only one Form W-5 in effect at a time. This check could reduce the probability that individuals would receive more than the yearly AEITC maximum. While acknowledging that potential advantages exist to developing and maintaining a Form W-5 database, IRS officials said that the disadvantages could outweigh these and any other advantages. Although IRS officials said it was too early in the proposal process to calculate the database's potential costs and subsequent return on investment, they said it very likely would be substantially lower than the return on investment for either existing or anticipated future noncompliance programs.
For example, IRS estimates the current return on investment for EITC examinations is between $17 to $1 and $19 to $1. Although these amounts include only labor and do not include overhead such as facilities, equipment, and supplies, officials felt confident that the EITC return on investment would far exceed that for the AEITC. Their opinion was largely based on the few dollars involved with the AEITC, especially compared to other noncompliance programs. In addition, IRS officials expressed concerns that employers would not submit Forms W-5 to IRS. Officials drew an analogy between this proposal and the prior Questionable W-4 program. As we reported in 2003, about 75 percent of the large employers with 1,000 or more employees in IRS's Large and Medium-Size Business and Small Business/Self Employed divisions who filed tax returns in tax year 2001 did not send IRS any questionable Forms W-4. After our report, IRS discontinued the Questionable W-4 program. Additionally, officials noted that requiring employers to submit Forms W-5 may discourage them from participating: if an employer were notified that the SSN on a Form W-5 was invalid or did not match the employee's name, the employer would likely have a responsibility to discuss the matter with the employee, creating yet another burden employers would not want to accept. Finally, employers may not participate because, if an employee left during the year, the employer would again have to contact IRS so the Form W-5 database could be updated. Although we do not know how successful the various options we have identified for improving AEITC compliance may be if implemented or what the full cost of implementation would be, IRS may be able to achieve a return on investment somewhat comparable to that for EITC examinations. We found an average of about $94 million a year in AEITC noncompliance for recipients in tax years 2002 through 2004.
If a compliance effort could reduce AEITC noncompliance by one-quarter, that is, $24 million per year, IRS could spend about $1.3 million each year to do so and achieve a $19 to $1 return on investment. Alternatively, IRS estimated that it would cost about $533,000 to send soft notices to 300,000 taxpayers during the Dependent Database test. If IRS were to test sending soft notices for AEITC and it cost IRS about the same amount to send notices to 300,000 noncompliant AEITC recipients, IRS would only need to reduce AEITC noncompliance by about 11 percent (about $10 million) to achieve a $19 to $1 return on investment. If the advance option were discontinued, eligible AEITC recipients could still receive the full benefits of the EITC as a lump sum after filing their tax return. In addition, improper AEITC payments to ineligible or noncompliant individuals would be eliminated. The exact amount of revenue that could be saved is not known. However, in determining an amount, IRS officials said they would consider the following: the amount of AEITC disbursed compared with the amount shown on filed tax returns, the cost of administering the AEITC (e.g., forms and publications, processing, compliance activities), and any amount currently recovered through compliance activities. The AEITC has never achieved a significant participation rate, the amount recipients received in the period we reviewed was quite low, and noncompliance was high. However, policymakers may judge that the goal of providing funds to low-income workers during the year, as opposed to a lump sum that they could get as part of the EITC when filing their taxes, remains important and should continue to be allowed. If so, IRS needs to pursue potentially cost-effective measures to address AEITC noncompliance. 
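The return-on-investment arithmetic above can be verified with a short calculation. The dollar figures are taken from the report; the helper function itself is only an illustrative sketch, not an IRS cost model:

```python
# Rough check of the return-on-investment (ROI) figures discussed above.
# Dollar amounts come from the report; the function is an illustrative
# sketch only, not an actual IRS cost model.

def roi(noncompliance_reduced: float, cost: float) -> float:
    """Dollars of noncompliance reduced per enforcement dollar spent."""
    return noncompliance_reduced / cost

ANNUAL_NONCOMPLIANCE = 94_000_000  # average AEITC noncompliance per year

# Option 1: cut noncompliance by one-quarter (about $24 million) at a
# cost of about $1.3 million per year.
print(f"{roi(24_000_000, 1_300_000):.1f} to 1")  # about 18.5 to 1

# Option 2: soft notices at about $533,000 (the Dependent Database test
# cost); an 11 percent reduction (about $10 million) yields a similar ratio.
print(f"{roi(0.11 * ANNUAL_NONCOMPLIANCE, 533_000):.1f} to 1")  # about 19.4 to 1
```

Both ratios round to roughly the $19-to-$1 figure the report uses as its benchmark for EITC examinations.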
Each of the three options we identified for improving AEITC compliance—using soft notices, having up-front verification of AEITC applicants' SSNs by employers, or requiring employers to submit copies of the Form(s) W-5 and creating a database to monitor AEITC—appears to have potential to improve compliance, but their full benefits and costs need to be evaluated and, if possible, tested. Soft notices have improved compliance in other tax programs, but they could be somewhat less effective in improving AEITC compliance, in part because of the AEITC's high recipient turnover rate. The TIN Matching service and SSNVS have the potential to reduce AEITC noncompliance by enabling employers to verify workers' SSNs before providing them the AEITC. Differences exist between the two services, but either could likely be used for AEITC SSN verification. Both services are based on SSA records that are already deemed accurate enough that SSA and IRS rely on them to disallow certain exemptions and credits until eligibility has been properly demonstrated. Significant concerns exist, however, such as the need for legislation authorizing the use of either TIN Matching or SSNVS for the AEITC. This and other issues would need to be further explored as the costs and benefits of employers verifying employees' SSNs are fully identified. If IRS required employers to submit copies of the Form W-5 when an employee requests the AEITC, IRS could create a database to better monitor and address all three of the noncompliance problems we analyzed. However, imposing additional responsibilities on employers under either the SSN verification option or the Form W-5 database option has the potential to adversely affect the AEITC's already low participation rate if employers avoid providing the AEITC because of the increased responsibilities on their part.
Due to the relatively small size of the AEITC overall, combined with the low dollar amounts per taxpayer, IRS officials are concerned that addressing AEITC noncompliance may provide less return on IRS’s enforcement efforts than would addressing other noncompliance issues. However, IRS may be able to achieve returns on AEITC enforcement that would not be significantly out of line with returns on other enforcement work. For example, it cost IRS about $533,000 to send soft notices to 300,000 taxpayers in the Dependent Database test. If IRS were to test sending soft notices for AEITC and it cost IRS about the same amount to send notices to 300,000 noncompliant AEITC recipients, IRS would need to reduce noncompliance by about 11 percent (about $10 million) to achieve a $19 to $1 return on investment. The Acting Commissioner of Internal Revenue should analyze whether any of the following options could cost effectively and significantly reduce AEITC noncompliance: sending potentially noncompliant AEITC recipients soft notices, such as to nonfilers whose Forms W-2 show that they received AEITC and filers who misreported the amount they received or whose SSN and name do not match; requiring employers to verify the SSN of employees seeking AEITC; or requiring employers to submit Form W-5 to IRS and IRS creating and maintaining a database for these forms. To better identify the costs and implementation issues as well as the likelihood for these or other options to reduce AEITC noncompliance, where practical, the Acting Commissioner of Internal Revenue should test these options to make a more fully informed judgment about whether any would be worthwhile. If the Acting Commissioner of Internal Revenue determines that none of these options would be cost effective and no other remedies are viable, then the Treasury Secretary should inform the Congress of this and provide Treasury’s opinion about whether the AEITC should be retained. 
The Acting Commissioner of Internal Revenue provided written comments in a July 18, 2007 letter. He agreed with our recommendation and outlined the actions IRS would take to address that recommendation, including conducting further analyses and possible testing of proposed options for reducing AEITC noncompliance. He also stated that IRS will conduct its cost-benefit analyses in conjunction with a congressional requirement to study the impact of expanding eligibility of the AEITC to all EITC recipients. We also provided a draft of this report to the Department of the Treasury and SSA and incorporated technical comments where appropriate. SSA emphasized that verifying eligibility for the AEITC is most appropriate for IRS because it is a tax administration issue and therefore outside the scope of SSA’s mission. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its date. At that time, we will send copies of this report to the Secretary of the Treasury, the Commissioner of the Social Security Administration, the Acting Commissioner of Internal Revenue, appropriate Congressional committees, and other interested parties. This report is available at no charge on GAO’s web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512- 9110 or [email protected]. After analyzing IRS’s responses to our recommendations in our 1992 Advance Earned Income Tax Credit (AEITC) report, we determined that IRS has implemented five of the six recommendations to the Commissioner of Internal Revenue from our 1992 report on the AEITC and partially implemented the remaining one (see table 8). Our first recommendation was for the Commissioner of Internal Revenue to include information on AEITC in employee outreach materials and programs. 
IRS implemented this recommendation primarily by developing publicity materials (e.g., grocery bags, milk carton art, brochures, posters) and distributing them to the public. For the second recommendation, IRS stated that it did not have the approximately $2 million in funding that the agency said would have been required to notify all taxpayers who received the EITC but did not elect the advance option. Instead, IRS took other actions, including revising both the Form W-2 in 1992 and IRS Notice 797, "Possible Federal Tax Refund Due to the Earned Income Credit," to include information on how to apply for the AEITC. For our third recommendation, IRS noted that it could encourage employers to make such notifications, but that there are no statutory sanctions on employers who fail to do so. Beginning in 1992, IRS placed text on the face of the Form W-4 instructions advising employees to consider filing a Form W-5 with their employer to obtain the advance through lower withholding. For our fourth recommendation, to clarify AEITC instructions in "Circular E, Employer's Tax Guide," IRS included new examples explaining to employers how they should make advance payments to employees and how employers can report these amounts. Our fifth recommendation was for the IRS Commissioner to send individuals who received the AEITC and do not file tax returns a notice explaining the requirement to file. IRS partially implemented this recommendation by including information on advance payment in a reminder to file notice and adding a separate AEITC box on Form W-2. IRS did not track the number of AEITC nonfilers who received the notice. The reminder to file notice was sent only until 1997, and IRS officials were uncertain why it was discontinued. The last recommendation was for exploring ways to identify those individuals who receive the credit in advance but do not report it.
IRS pointed out that its systems were not geared to detecting unreported AEITC payments at the time the returns are processed and the best approach to preventing noncompliance by AEITC payment recipients is a proactive one that recognizes the filing of correct returns. IRS implemented this recommendation by providing a separate line on Form 1040 on which to report AEITC payments and redesigning the Form W-2, for tax year 1993, which it believed would increase the accuracy of the AEITC payment information reported on Form W-2. Our work in this report demonstrates a continuing need to explore additional compliance initiatives aimed at those who receive the AEITC, but do not report it on their tax return. To answer the first and second objectives: how many individuals received the Advance Earned Income Tax Credit (AEITC) compared with the Earned Income Tax Credit (EITC) and how much did they receive in tax years 2002 through 2004; what actions, if any, have been taken to increase use since 1992; and what is the potential for significant increases in the future; and what is the extent of noncompliance, if any, associated with the AEITC; we obtained a data file of all Forms W-2, “Wage and Tax Statement,” for tax years 1999-2004 indicating AEITC payments as shown by an amount greater than $0 in box 9 of the Form W-2 from the Internal Revenue Service (IRS). We used these tax years because they were the most current available at the time we started our review. The Form W-2 identified key information, including the AEITC recipient’s name, address, Social Security number (SSN), and amount of AEITC dollars paid, as well as the employer’s name and address. To determine the number of individuals who received the AEITC, we used Forms W-2 instead of tax returns, which IRS has historically used to estimate AEITC use. We used this alternate approach because we believe the Forms W-2 provide results that are more accurate and complete. 
For example, using Forms W-2 would include in the population of AEITC recipients those who received AEITC, but did not file a return, and those who filed a return, but did not report any AEITC. Using tax returns would not capture these individuals or related noncompliance issues. In addition, using tax returns counts instances where both spouses receive the AEITC and file jointly on one return as opposed to two individuals. IRS’s Research, Analysis, and Statistics and EITC program office officials agreed with our methodology. We performed data reliability tests on the data file to determine whether the data were sufficiently reliable for our intended purposes. We did this testing, in part, by conducting preliminary analyses, which identified certain data irregularities or anomalies. We identified two noteworthy anomalies in the data file: (1) excessive AEITC dollar amounts and (2) invalid AEITC recipient SSNs. First, many Forms W-2 showed that employees received amounts over the allowable limits. A few even showed individuals each receiving about $1 million in AEITC—amounts clearly above AEITC legal limits and which IRS officials said would be improbable, potentially resulting from transcription errors. Second, we also found some instances where the SSN and/or name on the Form W-2 were invalid, which means that the number was never issued by SSA or that the name and number on the Form W-2 did not match the listed name for that same SSN in official records maintained by IRS. We compared the number and name information on the Form W-2 to the National Account Profile to evaluate the validity of that information and to identify any possible subsequent corrections. 
To address these data anomalies, we separated the Form W-2 file into four subpopulations using the following three criteria: whether (1) the SSN on the Form W-2 was valid, according to the Data Master File (DM-1); (2) the SSN and the recipient's name on the Form W-2 matched, according to DM-1; and (3) the amount of AEITC received was in excess of the yearly maximum. Each of the subpopulations had a unique profile, as follows:

1. Valid subpopulation: This group of Forms W-2 represents all individuals (1) that had a valid SSN, meaning a number issued by SSA, (2) whose name matched the SSN, and (3) whose AEITC amount was within the yearly maximum. More than 75 percent of the Forms W-2 on average during tax years 2002 through 2004 were in this subpopulation.

2. Invalid name subpopulation: This group of Forms W-2 represents all individuals that had (1) a valid SSN, (2) an SSN that did not match the individual's name, and (3) an AEITC amount within the yearly maximum. About 17 percent of the Forms W-2 fell in this subpopulation for each of the 3 years we reviewed.

3. Invalid number subpopulation: This group represents all Forms W-2 that had an invalid SSN and an AEITC amount within the yearly maximum. About 7 percent of the Forms W-2 during tax years 2002 through 2004 were in this subpopulation.

4. Dollar limit subpopulation: This group represents all instances where the AEITC amount was above the yearly maximum, regardless of whether the SSN was invalid or whether the individual's name matched the SSN. This represented less than 1 percent on average of all Forms W-2 in each of the 3 years we reviewed. Because IRS officials told us these data were likely erroneous, we excluded this subpopulation from most of our analyses, and IRS officials agreed with this approach.
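The four subpopulation criteria above amount to a simple decision rule. The sketch below is illustrative only: the DM-1 lookup table and the yearly maximum are hypothetical placeholders, not actual SSA reference data or the statutory limit for any tax year:

```python
# Illustrative decision rule for the four Form W-2 subpopulations
# described above. The DM-1 lookup table and the yearly maximum are
# placeholders, not actual SSA reference data or the statutory limit.

YEARLY_MAX = 1_500  # placeholder for the tax year's AEITC maximum

# Hypothetical stand-in for SSA's DM-1 file: SSN -> name on record.
DM1 = {"123-45-6789": "JANE DOE"}

def classify(ssn: str, name: str, aeitc_amount: float) -> str:
    """Assign a Form W-2 to one of the four subpopulations."""
    if aeitc_amount > YEARLY_MAX:
        return "dollar limit"    # over the maximum, regardless of SSN/name
    if ssn not in DM1:
        return "invalid number"  # SSN never issued by SSA
    if DM1[ssn] != name:
        return "invalid name"    # valid SSN, but name does not match
    return "valid"

print(classify("123-45-6789", "JANE DOE", 800))    # valid
print(classify("123-45-6789", "JOHN ROE", 800))    # invalid name
print(classify("000-00-0000", "JANE DOE", 800))    # invalid number
print(classify("123-45-6789", "JANE DOE", 9_999))  # dollar limit
```

Note that, as in the report's definition, the dollar-limit check takes precedence: a Form W-2 over the yearly maximum falls in that subpopulation even if its SSN is also invalid.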
We conducted additional data reliability tests for each of the databases we used to obtain information about the AEITC, including IRS's Individual Returns Transaction File, for return and filing information, which came from the Compliance Data Warehouse; National Account Profile/DM-1, for IRS's SSN and name reference information, which also came from the Compliance Data Warehouse; Automated Underreporter (AUR), for IRS's third-party information return data; and Taxpayer Identification Number (TIN) Matching and SSA's Social Security Number Verification System (SSNVS), for alternative SSN and name reference information used by employers. After completing our data reliability assessments, we determined the AEITC data to be sufficiently reliable for analysis and our reporting objectives. We also developed a comprehensive analysis plan that included our researchable issues, planned analyses, data sources, and limitations. We shared our plan with IRS and others and incorporated their feedback. Because IRS's workload precluded the agency from providing information related to employers/AEITC payers within our time frames, we were able to conduct only limited analyses of employers who paid AEITC. Using the analysis plan for each subpopulation, we conducted multiple analyses to develop relevant demographic, characteristic, and compliance data. Because each subpopulation had different criteria, certain characteristics or compliance data could not be developed or compared across the subpopulations. For example, the only characteristics data that could be developed for the invalid number subpopulation came from the Form W-2 (e.g., amount of AEITC, geographic location) because it is the only available source. Similarly, tax return data, such as filing status, were not available for those who did not file a tax return.
All data pertaining to filed tax returns came either from returns that reported receipt of the AEITC on the appropriate line or from a “constructed tax return,” which IRS officials created using the SSN on the Form W-2 and matching it to an SSN in the primary, secondary, or dependent position on a filed return. The location of the SSN in one of these positions is relevant due to the way IRS manages its data files. There could be instances when a tax return was filed but it was not detected using our methodology. For example, a taxpayer’s SSN on the Form W-2 might have been incorrect and the taxpayer reported the correct number on the tax return (Form 1040). To report on data pertaining to the EITC, we relied on published EITC data provided by IRS research and program office officials, including the EITC database, the EITC Database Year to Year Comparison Report, and EITC Fact Sheet. When possible, our analysis compares individuals who received the AEITC with individuals who received the EITC. IRS defines EITC recipients by the number of federal tax returns that received EITC. In addition, EITC data are based upon the primary TIN of all taxpayers who received an amount of EITC. We determined the number of individuals who receive the AEITC based on the number of Forms W-2 reporting AEITC per unique SSN. IRS officials agreed that even though these populations are not identical, it is reasonable to make a comparison between them. We frequently consulted with IRS officials on the data and our analyses; they generally agreed with both the approach and the accuracy of the results. For the analyses that IRS conducted, we agreed with both the approach and the accuracy of the results. In addition, we reviewed legislative and IRS administrative changes to the AEITC since 1992 and discussed them with IRS and other officials, including Department of the Treasury officials. 
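The "constructed tax return" match described above, in which a Form W-2 SSN is located in the primary, secondary, or dependent position on a filed return, can be sketched as follows. The return records here are hypothetical, invented only to show the matching logic:

```python
# Sketch of the "constructed tax return" match described above: a Form
# W-2 SSN is matched against the primary, secondary, or dependent SSNs
# on filed returns. The return records here are hypothetical.

RETURNS = [
    {"primary": "111-11-1111",
     "secondary": "222-22-2222",
     "dependents": ["333-33-3333"]},
]

def find_return(w2_ssn: str):
    """Return the first filed return on which the W-2 SSN appears, if any."""
    for ret in RETURNS:
        if w2_ssn in (ret["primary"], ret["secondary"], *ret["dependents"]):
            return ret
    return None  # no return detected for this SSN

print(find_return("222-22-2222") is not None)  # True: matched as secondary
print(find_return("999-99-9999") is not None)  # False: treated as a nonfiler
```

As the report notes, this approach can miss a filed return when the SSN on the Form W-2 is incorrect and the taxpayer reported the correct number on the Form 1040.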
We reviewed reports on IRS’s implementation of some of our prior recommendations pertaining to the EITC and discussed them with IRS officials, including the National EITC Director. We also coordinated this work with the Treasury Inspector General for Tax Administration. Finally, we also identified and interviewed 11 individuals we determined to be experts to provide us a fuller understanding on the potential for significant increases or improvement to AEITC use and noncompliance and included academics, researchers, practitioners, and individuals representing the areas of tax policy, low-income individual issues, and compliance issues. We chose these individuals based on our knowledge of their areas of expertise and our research that indicated they were knowledgeable about the EITC. To address the third objective, how well do IRS’s procedures address any areas of noncompliance, we examined portions of IRS’s Internal Revenue Manual and interviewed IRS Wage and Investment and Small Business/Self-Employed division officials to determine procedures for processing returns that reported receipt of the AEITC. We examined how IRS’s enforcement procedures and operations, including Submission Processing, AUR, Nonfiler, Collection, Examination, Criminal Investigation, and Taxpayer Advocate, address certain kinds of potential noncompliance. We explored various options to improve AEITC compliance. This involved conducting literature searches and interviews with IRS and SSA officials. We reviewed and discussed the results of soft notice tests with IRS officials, including the National EITC Director, and discussed the applicability of soft notices for addressing AEITC noncompliance. We also reviewed and analyzed documents and reports about IRS, SSA, and Department of Homeland Security databases about whether they could be used by employers to verify the SSN of an employee seeking the AEITC before the employer begins paying it. 
We also interviewed knowledgeable officials at IRS and SSA about the advantages and disadvantages of such systems when considering the AEITC. Further, we interviewed IRS officials from various offices, such as EITC Program, Modernization and Information Technology Services, and Stakeholders, Partnership, Education and Communication, about the advantages and disadvantages of creating a database for the Forms W-5. It was not within the scope of our work to fully evaluate the potential costs and benefits of these options for reducing noncompliance. We also reviewed prior GAO, IRS, Treasury Inspector General for Tax Administration, and other reports on the AEITC and EITC. We conducted our work primarily in Washington, D.C., and Atlanta, Ga., from December 2005 through July 2007 in accordance with generally accepted government auditing standards. We identified basic demographic characteristics of Advance Earned Income Tax Credit (AEITC) recipients and their employers in tax years 2002 through 2004. Specifically, we identified the following 11 characteristics: (1) number of Forms W-2 received; (2) average amount of AEITC received by consecutive recipients; (3) average adjusted gross income (AGI) for AEITC recipients; (4) average wages; (5) filing status; (6) age; (7) gender; (8) number of qualifying children; (9) filing method; (10) geographic location of AEITC recipients; and (11) employer size, number of employees, and number of Forms W-2 with AEITC issued to employees. Each of the characteristics represents an analysis and provides additional objective information about AEITC recipients not previously discussed. Where possible, we compared AEITC recipients to Earned Income Tax Credit (EITC) recipients. Further analyses may provide information to better target IRS enforcement efforts. For example, IRS's gender information on EITC recipients differs from our AEITC results on gender shown in table 17. The characteristics are organized by declining AEITC population size.
For example, table 10, “Number of Forms W-2 Received by Subpopulation, Tax Years 2002 through 2004,” includes the valid, the invalid name, and the invalid number subpopulations, as described in the scope and methodology (see app. II), while table 15, “Filing Status of AEITC and EITC Recipients, Number and Percentage, Tax Years 2002 through 2004,” includes only the valid and invalid name subpopulations. The invalid number subpopulation was not included in the table about filing status because that subpopulation contains only Form W-2 data and not tax return data. Thus, information such as filing status, which comes from the tax return, is not available. Number of Forms W-2 with AEITC that each individual received: Most individuals who received the AEITC had only one Form W-2 reporting its receipt (see table 10). Having more than one Form W-2 does not necessarily indicate noncompliance because an individual may have more than one job during the year and receive the AEITC from more than one employer. The data did not enable us to analyze whether any of these individuals had more than one Form W-5 in effect at one time. Presently, IRS’s administrative procedures do not enable it to identify whether a taxpayer has more than one Form W-5 in effect at one time. Average amount of AEITC received by consecutive recipients: About 98,000 individuals received the AEITC consecutively during tax years 2002 through 2004. These 98,000 individuals received a higher average amount of AEITC than the entire AEITC population (see fig. 6). Adjusted gross income (AGI) for AEITC and EITC recipients: The maximum amount of AGI a taxpayer could have in tax years 2002 through 2004 and receive the AEITC and/or EITC was $34,178, $34,692, and $35,458, respectively. As noted in tables 11, 12, and 13, most individuals who received the AEITC and filed a tax return reported an AGI of $1 to $20,000. Some taxpayers had an AGI above the allowable limits.
However, an individual’s personal circumstances may have changed during the year; for example, the individual may have gotten a higher-paying job. As long as the same amount of AEITC shown on the Form W-2 was reported on the tax return, AGI outside the limit for AEITC recipients is permissible and the taxpayer is considered compliant. By reporting the correct amount on the tax return, the AEITC would increase the tax due or reduce any refund. Wages for AEITC and EITC recipients: The yearly wage limits for AEITC and EITC recipients were $34,178 for tax year 2002, $34,692 for tax year 2003, and $35,458 for tax year 2004. The average wages for AEITC recipients in the valid subpopulation were about $18,000, while they were about $47,000 for the invalid name subpopulation. This compares with about $13,000 for EITC recipients (see table 14). Some wages were outside the allowable limits. Again, an individual’s personal circumstances may have changed during the year; for example, the individual may have gotten a higher-paying job. As long as the same amount of AEITC shown on the Form W-2 was reported on the tax return, wages outside the limit are permissible and the taxpayer is considered compliant. By reporting the correct amount on the tax return, the AEITC would increase the tax due or reduce any refund. Filing status of AEITC and EITC recipients: As shown in table 15, about half of AEITC recipients in the valid subpopulation and most individuals who received the EITC used the Head of Household filing status. This compares to AEITC recipients in the invalid name subpopulation, who most frequently used the Married Filing Jointly filing status. About 2 percent of the tax returns that reported receiving AEITC used the Married Filing Separately filing status, which is not allowed. However, an individual’s personal circumstances may have changed during the year; for example, the individual may have separated from his or her spouse.
As long as the same amount of AEITC shown on the Form W-2 was reported on the tax return, this situation is permissible and the taxpayer is considered compliant. By reporting the correct amount on the tax return, the AEITC would increase the tax due or reduce any refund. Age of AEITC and EITC recipients: Most individuals who received the AEITC, as well as most EITC recipients, were between the ages of 26 and 64 (see table 16). IRS officials noted that recipients whose ages fell into the ‘Over 100’ category probably reflect an error, such as a transcription error. Alternatively, an AEITC recipient in this category may have used the SSN of a deceased individual. Gender of AEITC and EITC recipients: More males than females received the AEITC during tax years 2002 through 2004. In contrast, more females than males received the EITC during this same period (see table 17). Number of qualifying children for AEITC and EITC recipients: About half of the individuals who reported receiving the AEITC had two qualifying children. Results were similar for EITC recipients. However, about 8 percent of this population did not report having any qualifying children, which is not allowed (see table 18). Again, an individual’s personal circumstances may have changed during the year; for example, the individual may have separated from a spouse or divorced. As long as the same amount of AEITC shown on the Form W-2 was reported on the tax return, the taxpayer is considered compliant. By reporting the correct amount on the tax return, the AEITC would increase the tax due or reduce any refund. Most individuals who received the EITC also had two qualifying children, although there is no qualifying child requirement to receive the EITC. Filing method of AEITC and EITC recipients: As noted in table 19, most tax returns that showed receiving an amount of AEITC were filed electronically, as were most tax returns that reported receipt of the EITC.
For both AEITC and EITC, about 70 percent of recipients filed electronically for the 3 years we examined. Geographic location of AEITC recipients in the valid subpopulation: Among individuals in the valid subpopulation, use of the AEITC varied widely across the country. In all 3 tax years, Florida and Illinois had the most AEITC recipients (see fig. 7). Employer size, number of employers, number of Forms W-2 employers issued to AEITC recipients, and total AEITC dollars reported on Forms W-2: Slightly more than 50,000 employers reported paying AEITC to at least one employee in each of tax years 2002 through 2004. Most of these employers were classified by IRS as small business/self-employed, and they issued more than half of all the Forms W-2 with AEITC (see table 20). IRS has made several administrative changes to the Advance Earned Income Tax Credit (AEITC) since the beginning of 1990 (see table 21). IRS described these changes in responses to recommendations in our 1992 AEITC report and a 2003 Treasury Inspector General for Tax Administration report. Two laws enacted since our 1992 report include specific changes to the AEITC. First, the Omnibus Budget Reconciliation Act of 1993 (OBRA ’93) did the following: 1. Limited the amount of advance payment allowable in a taxable year to 60 percent of the maximum credit available to a taxpayer with one qualifying child. 2. Directed the IRS to notify taxpayers with qualifying children who receive a refund on account of the EITC that the credit may be available on an advance basis. The conference report accompanying OBRA ’93 stated that after these notifications had been made for 2 taxable years, the Treasury Secretary was directed to study their effect on utilization of the advance payment mechanism and, based on the results of the study, the Secretary may recommend modifications to the notification program. Second, the U.S.
Troop Readiness, Veterans’ Care, Katrina Recovery, and Iraq Accountability Appropriations Act of 2007, which was enacted in late May 2007, calls for a study of AEITC use. The study is to be conducted by the Secretary of the Treasury for the Congress and is to include the benefits, costs, risks, and barriers to workers and to businesses (with a special emphasis on small businesses) if the AEITC included all recipients of the EITC (i.e., individuals without qualifying children). It also asks what steps would be necessary to implement such an inclusion. We identified additional areas of noncompliance for AEITC recipients during tax years 2002 through 2004. We also examined demographic characteristics of our noncompliant subpopulations, including the invalid name, invalid number, and dollar limit subpopulations. Our analyses revealed the following: (1) some consecutive AEITC recipients had an invalid SSN, (2) most consecutive AEITC recipients filed a tax return but did not report the correct AEITC amount on the tax return, (3) AEITC recipients with an invalid SSN received little money, (4) most AEITC recipients with an invalid SSN received one to two Forms W-2, (5) AEITC recipients in the invalid subpopulations lived in various geographic locations, and (6) AEITC was paid in excess of yearly maximum limits. As with the data in appendix III, further analyses of these data may provide information on noncompliance characteristics potentially useful for IRS enforcement efforts. Each of the tables provides additional information about AEITC recipients not previously discussed. Where possible, we compared AEITC recipients to EITC recipients. The characteristics are organized by declining AEITC population size, similar to the organization in appendix III. Most analyses include the invalid name and invalid number subpopulations. The subpopulations are described in the scope and methodology section (see app. II).
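The matching analysis summarized above (comparing the AEITC amount shown on an individual's Forms W-2 with the amount reported on the tax return, if one was filed) can be sketched as follows. This is an illustrative sketch only; the function and field names are hypothetical and do not reflect IRS data layouts.

```python
# Hypothetical sketch of the W-2-to-return matching analysis described
# in this appendix. An individual is a nonfiler if no return exists,
# matched if the return reports the same AEITC total as the Forms W-2,
# and mismatched otherwise.
def classify(w2_aeitc_total, return_aeitc_amount):
    """Classify one individual's AEITC reporting status."""
    if return_aeitc_amount is None:
        return "nonfiler"
    if return_aeitc_amount == w2_aeitc_total:
        return "matched"
    return "mismatched"

print(classify(250, None))   # nonfiler
print(classify(250, 250))    # matched
print(classify(250, 0))      # mismatched
```

In practice the comparison would run over millions of records keyed by SSN, but the compliance test per individual reduces to this three-way classification.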
Some consecutive AEITC recipients had an invalid SSN: About 98,000 individuals received the AEITC consecutively in each of the 3 years, tax years 2002 through 2004, and received a yearly average of about $56 million in AEITC. About a quarter of these individuals had an invalid SSN and received a yearly average of approximately $16 million in AEITC. Additionally, nearly 15 percent of consecutive users had an invalid SSN and did not file a federal tax return, receiving a yearly average of about $9 million in AEITC (see table 22). Most consecutive AEITC recipients filed a tax return, but did not report the correct AEITC amount on the tax return: About 98,000 individuals received the AEITC consecutively in each of the 3 tax years, 2002 through 2004. More than half of these individuals filed a tax return. Of those who filed, about half reported the same AEITC amount on the tax return as shown on the Form W-2 (i.e., matched). Of the mismatches, the majority did not report receipt of the AEITC (see table 23). AEITC recipients with an invalid Social Security number (SSN) received little money: Most AEITC recipients who had a Form(s) W-2 with an invalid SSN obtained $100 or less of AEITC (see table 24). These data are consistent with the overall AEITC population, as previously noted, where about half of all recipients received less than $100 and 80 percent received $500 or less for the year. Most AEITC recipients with an invalid SSN received 1 to 2 Forms W-2: Most recipients who had an invalid SSN received 1 to 2 Forms W-2 reporting AEITC. For example, in tax year 2002, 6,223 individuals received two Forms W-2 that reported AEITC. These resulted in a total of 12,446 Forms W-2 equaling $3,261,327 in AEITC, with an average of $262 in AEITC per Form W-2 (see tables 25, 26, and 27). AEITC recipients in the invalid subpopulations lived in various geographic locations: Use of the AEITC varied widely across the country for individuals in the invalid number and invalid name subpopulations.
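As a quick check, the per-form average reported in tables 25 through 27 for tax year 2002 can be reproduced from the figures above; the input numbers are copied from this appendix, and the calculation itself is our own illustration.

```python
# Reproducing the tax year 2002 figures for invalid-SSN recipients who
# each received two Forms W-2 reporting AEITC (tables 25 through 27).
individuals = 6_223
forms_w2 = individuals * 2        # two Forms W-2 per individual
total_aeitc = 3_261_327           # total AEITC dollars on those forms
avg_per_form = round(total_aeitc / forms_w2)
print(forms_w2, avg_per_form)
```

The computed average per Form W-2 agrees with the roughly $262 figure cited in the text.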
For the invalid name subpopulation, in all 3 tax years, California and Illinois had the most Forms W-2 reporting AEITC, and for the invalid number subpopulation, in all 3 tax years, Florida and Illinois had the most (see figs. 8 and 9). AEITC was paid in excess of yearly maximum limits: A total of almost 12,000 Forms W-2, reporting about $64 million, showed AEITC paid above the yearly maximum in tax years 2002 through 2004 (see figs. 10 and 11). Specifically, in tax year 2002 there were 6,408 Forms W-2 above the yearly maximum, reporting almost $44 million; 2,690 in tax year 2003, reporting over $7 million; and 2,768 in tax year 2004, reporting almost $13 million. As noted in figure 10, most Forms W-2 above the yearly maximum were between $1 and $5,000 above the limit. An individual receiving the AEITC was eligible to obtain a maximum yearly amount of $1,503 in tax year 2002, $1,528 in tax year 2003, and $1,563 in tax year 2004. In addition to those named above, Blake Ainsworth, Frances Cook, James Cook, Rebecca Gambler, Evan Gilman, George Guttman, Donna Miller, Cheryl Peterson, Michael Rose, Steve Sebastian, Daniel Schwimer, Richard Stana, James Ungvarsky, Michael Volpe, and Paul Wright made key contributions to this report.
The Advance Earned Income Tax Credit (AEITC) allows individuals to receive a portion of the Earned Income Tax Credit (EITC) in their paychecks, instead of receiving all of it when filing their year-end tax return. Limited research has been conducted on the AEITC since GAO last examined it in the early 1990s. GAO was asked to determine (1) how many individuals received the AEITC compared with the EITC in tax years 2002 through 2004, what actions, if any, have been taken to increase use, and the potential for increases in use in the future; (2) the extent of noncompliance, if any, associated with the AEITC; and (3) how well the Internal Revenue Service's (IRS) procedures address the areas of noncompliance. To address these questions, GAO analyzed Forms W-2 and tax return data and interviewed IRS and Social Security Administration (SSA) officials. AEITC use was low: only about 3 percent of EITC recipients potentially eligible for the advance received it in tax years 2002 through 2004, or about 514,000 of the 17 million potentially eligible individuals each year. About half of all recipients received $100 or less in AEITC and 75 percent received $500 or less for the year, with a total benefit paid of about $146 million each year. Several efforts have been aimed at increasing use over roughly the last 15 years, such as sending notices to individuals informing them that they were potentially eligible for the AEITC and making changes to IRS forms. Despite these efforts, use did not substantially increase and, for several reasons, it may be difficult to increase it in the future. For example, IRS officials, other experts, and prior GAO work suggest that individuals often do not elect the AEITC because they prefer receiving the entire EITC as a lump sum after filing their tax return.
As many as 80 percent of AEITC recipients did not comply with at least one of the program requirements GAO reviewed, and some were noncompliant with more than one requirement during the 3 years reviewed. In tax years 2002 through 2004, about 20 percent, or more than 100,000 AEITC recipients, may not have been eligible for the AEITC because they had an invalid Social Security number (SSN). These individuals received a total of $37 million to $39 million each year. Almost 40 percent (about 200,000 recipients) did not file the required tax return; these individuals received $42 million to $50 million each year. Of the approximately 60 percent of AEITC recipients (more than 300,000) who did file a return, about two-thirds misreported the amount received. IRS's procedures have limited effectiveness in addressing AEITC noncompliance. For example, Automated Underreporter (AUR) staff worked on only a fraction of AEITC cases because of resource constraints and criteria limiting case selection. IRS could address AEITC noncompliance by sending "soft notices" to recipients, requiring employers to verify employee SSNs before providing the AEITC, or creating a database of Forms W-5, "Earned Income Credit Advance Payment Certificate." Each of these options has advantages; however, each also has potential disadvantages that could limit its effectiveness.
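One way the "as many as 80 percent" figure can be reconciled with the component rates in this summary is the rough calculation below. The shares are approximate, and it assumes the invalid-SSN group overlaps the other two categories rather than adding a separate share; it is an illustrative upper-bound sketch, not the report's methodology.

```python
# Rough reconciliation of the headline noncompliance figure from the
# component rates stated in the summary (approximate shares).
nonfilers = 0.40                        # did not file the required return
filers = 0.60                           # did file a return
misreporting_filers = filers * (2 / 3)  # about two-thirds of filers misreported
noncompliant_share = nonfilers + misreporting_filers
print(round(noncompliant_share, 2))
```

Under these assumptions, nonfilers (40 percent) plus misreporting filers (about 40 percent) account for roughly 80 percent of recipients.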
The World Trade Organization (WTO) was established as a result of the Uruguay Round on January 1, 1995, as the successor to the General Agreement on Tariffs and Trade (GATT). Based in Geneva, Switzerland, the WTO administers agreed-upon rules for international trade, provides a mechanism for settling disputes, and serves as a forum for conducting trade negotiations. There are currently 148 WTO members, up from 90 GATT members when the Uruguay Round was launched in 1986 and from 128 members in 1995. The highest decision-making authority in the WTO is the ministerial conference, which consists of trade ministers from all WTO members and occurs every 2 years. The outcome of ministerial conferences is a ministerial declaration that guides future work. The WTO General Council, which consists of representatives from all WTO members, is empowered to make decisions between ministerial conferences. Decisions in the WTO are made by consensus—or absence of dissent—among all members rather than a simple majority. At the fourth ministerial conference in Doha, Qatar, in November 2001, WTO members reached consensus to launch a comprehensive negotiating round, the Doha Development Agenda or Doha Round. The Doha Round is the ninth round of trade liberalizing negotiations since the trading system’s founding in 1947. These rounds result in legally binding international obligations on members both in terms of the trade barriers they are allowed to maintain, such as tariffs (import taxes), and the trade rules (disciplines) they are to abide by. Failure to comply is subject to binding dispute settlement and possible trade retaliation. In the Doha ministerial declaration, WTO members set a number of overall objectives for the round, such as the need to ensure that developing countries, particularly the least-developed, secure growth of world trade commensurate with their needs for economic development (see fig. 1 for a list of the overall Doha objectives). 
The declaration sets forth a work program that covers 19 negotiating areas, including agriculture, services, and market access for nonagricultural goods (also known as industrial market access). Within each of those areas, WTO members set specific goals. WTO members also established a Trade Negotiations Committee, chaired by the WTO Director General, to oversee the round’s progress. Because the Doha Round is a package, or “single undertaking” in WTO parlance, simultaneous agreement on all issues is required to finalize an agreement. In negotiating the Doha Round on behalf of the United States, the Office of the United States Trade Representative (USTR) is also guided by certain goals, notably the goals outlined by the Trade Promotion Authority (TPA) granted by Congress in 2002. TPA’s goals for USTR negotiators include overall and principal objectives and promotion of certain priorities. In addition to TPA, USTR has its own goals for the Doha Round outlined in a required official notification to Congress in November 2002. (See fig. 1 for a description of the TPA and USTR goals.) In general, USTR states that it plans to use the Doha Round negotiations to strengthen the multilateral trading system, improve the operation of the WTO, and liberalize international markets. USTR places special emphasis on creating new export opportunities for the United States in agriculture, manufacturing, and services. USTR must explain how any resulting agreement makes progress towards TPA goals when submitting it for consideration for congressional approval under TPA’s expedited approval procedures. TPA is set to expire in mid-2005, but provides a procedure for the President to request a one-time extension of the authority to July 1, 2007. The President recently requested such an extension, which is automatic unless Congress disapproves it by June 30, 2005. The Doha declaration also set several goals for the following ministerial conference. 
However, at the ministerial conference held in Cancun, Mexico, from September 10-14, 2003, WTO ministers were unable to achieve these goals or to bridge wide, substantive differences on individual negotiating issues. They concluded the unsuccessful conference with WTO members sharply divided along North-South (developed-developing country) lines and agreed only to continue consultations and convene a meeting of the General Council by mid-December 2003 to take steps to move the negotiations forward. As we noted in our January 2004 report, the Doha Round of WTO negotiations had missed virtually all of the established milestones for progress during its first two years. The breakdown at Cancun threatened to derail the talks completely. The December 2003 General Council meeting did not result in any agreements, except to resume talks in early 2004. As a result, WTO negotiators missed the original deadline of January 1, 2005, for concluding a Doha Round agreement. Thus, at the time our last report was issued, in January 2004, the Doha Round’s prospects were uncertain. Despite the Doha Round starting 2004 on an uncertain note, political leadership, intensified dialogue, and a series of conciliatory gestures resulted in adoption by WTO members of a framework agreement on key negotiating issues called “the July framework” or “package.” The framework is credited with putting global trade talks back on track, and participants report that they have finally begun to make progress. Recent high-level meetings have sought to focus and accelerate work that leads up to a December 2005 ministerial conference in Hong Kong. The Hong Kong meeting is now hoped to result in decisions that will help determine how ambitious the Doha Round will be in terms of cuts in subsidies, tariffs, and other barriers. But even if negotiators reach the goal of setting the stage for finalizing a Doha Round agreement in 2006, WTO negotiations are about 2 years behind their original target date. 
Contrary to post-Cancun gloom, 2004 witnessed a resumption of Doha negotiations. Active leadership by the United States and the European Union (EU) proved essential to progress, as did a more interactive process and hard bargaining. Former U.S. Trade Representative Robert Zoellick is widely credited with taking the initiative to resume talks with a January 2004 letter to fellow trade ministers urging them to keep 2004 from being a lost year for the WTO and suggesting various ways to make the agenda more manageable. He followed up on the letter with extensive foreign travel to meet with other WTO members and rally support for resuming talks. WTO Director-General Supachai also traveled extensively as part of an active outreach effort to WTO member country officials. WTO members reactivated Doha negotiating groups in February 2004 with new chairs intent on ensuring more fruitful member-to-member discussions. Summing up the status after his visits with foreign officials, Ambassador Zoellick concluded that a breakthrough on agriculture was “absolutely the key” to progress. WTO members undertook intensive efforts to reach a breakthrough on agriculture both in Geneva and at high-level meetings among key nations. Observers credited EU Trade Commissioner Lamy’s offer in May to eliminate export subsidies with providing a tangible incentive to reach agreement on agriculture. Several conciliatory initiatives were also taken to allay specific developing country concerns. For example, a workshop held in Benin emphasized the importance of cotton reform to growth and poverty reduction in Africa. To alleviate poorer countries’ concerns over adjustment costs that were holding back overall trade liberalization, the EU suggested the WTO’s poorest members in Africa and elsewhere should be offered the “Round for Free”—that is, they would benefit from others’ concessions without having to offer much, if anything, in return.
The offer sparked a debate over this differentiation by making it clear that the EU felt the Doha Round offered, and expected, more of other developing countries. Developing countries also took on leadership roles and actions that contributed to progress. After Cancun, there was skepticism in some quarters as to whether the newly created coalitions of developing countries would be able to maintain cohesion and play constructive roles. However, according to other participants, throughout 2004 these groups articulated their positions clearly and negotiated effectively with other groups, including the industrialized countries. For example, the group of populous developing countries with agricultural interests known as the Group of 20 (G-20) issued a late May paper setting forth principles to govern tariff cuts to help bridge wide differences in agricultural market access. Malaysia played a key role in shaping the novel terms for trade facilitation negotiations. The WTO negotiating process also became more effective, contributing to progress. In our last report, we noted that the WTO’s large number of members made formal gatherings increasingly ineffective and more suitable for speech-making or restating well-known positions than for advancing the negotiations. Moreover, members often focused their efforts toward influencing the negotiating group chairmen rather than other members. In early 2004, a series of mini-ministerials and other smaller, informal group meetings were used to foster direct interaction between members and became the real venues for moving the negotiations forward. Negotiating groups on specific issues also adopted informal meetings that featured more direct member-to-member dialogue rather than the prior chair-driven process. Yet leadership and process improvements alone were not sufficient to attain agreement. Hard work and willingness to compromise were also required.
The wide remaining gaps on agriculture and unrealized demands on other issues were apparent at a late June 2004 meeting of the WTO Trade Negotiations Committee. WTO Director-General Supachai Panitchpakdi urged members then, and at a ministerial among African nations shortly thereafter in Mauritius, to seize the opportunity before them and show the flexibility required to seal a deal. With the July 16 release of a draft text, 2 weeks of day-and-night negotiating—often in intensive small group settings—began. An ad hoc group called the Five Interested Parties (or Group of Five)—composed of five key players in agriculture—was critically important in bridging developed/developing country differences and shaping agreement (even though some members, such as the Group of 10 net agricultural importers, complained about being left out of these deliberations). Finally, on July 31, 2004, WTO members reached a deal on a framework agreement and adopted it formally at a WTO General Council meeting. The main features of the July framework agreement were: establishing key principles for each aspect of global agricultural trade reform, launching negotiations to clarify and improve WTO rules on customs procedures (trade facilitation), identifying the key elements of negotiations to improve industrial (nonagricultural) market access, and stressing the importance of liberalizing access to services markets and addressing outstanding development concerns. It also set a notional December 2005 date for the next WTO ministerial in Hong Kong but did not set a new deadline for concluding the Doha Round. A veteran U.S. negotiator suggested they had pleasantly “surprised themselves” in reaching agreement at the WTO on a long-sought framework. The framework was widely praised by its key architects and many of their stakeholders, though it drew skepticism from some corners.
The July 2004 framework is widely credited with putting the Doha Round “back on track” and renewing political commitment to its ultimate success. Up until then, it had proved impossible to make meaningful progress on any of the other 18 issues of the round because key members linked movement on those issues to satisfactory progress on agriculture. Several participants went so far as to suggest that the July 2004 framework meant WTO members had prevented failure in the Doha Round and the WTO from becoming obsolete as a forum for liberalizing trade. A number of officials and experts we met with maintain that the package represents important progress and provided a sound basis for productive technical work on all issues during an anticipated political hiatus in the fall of 2004, when the European Commission changed and the United States held elections. While GAO’s examination does reveal some progress on all fronts either in the July framework or afterwards, participants and experts widely agree that considerable work remains on all issues if the Doha Round is to be concluded successfully as a package deal. Notably, experts agree that translating political commitment into concrete cuts in agricultural subsidies and tariffs involves grueling negotiations over myriad technical details. Without such commitment, loopholes and exemptions could undermine hoped-for liberalization. Moreover, agriculture is recognized as having achieved greater progress than other issues, such as industrial market access and services, which are essential for attaining an acceptable balance of issue interests among the WTO’s 148 members. While cautioning that each issue will advance at its own rate and urging others not to insist on lock-step progress, U.S. 
negotiators have made it clear they must see evidence of others’ commitment to liberalize barriers to industrial goods and services by the WTO ministerial now officially slated for December 13-18, 2005, in Hong Kong so that member-to-member negotiations can begin in earnest. Such progress is also vital to attaining U.S. TPA objectives—and realizing U.S. economic gains—for the Doha Round. With tough battles on the details of agriculture reform ahead and the need for progress on other issues, the coming 6 months are crucial. U.S. negotiators are hopeful that groups will concentrate on working through the issues and ensure they are sufficiently advanced to obtain needed decisions by the December 2005 Hong Kong ministerial. If so, and if the Hong Kong ministerial results in the needed decisions, there is at least a reasonable prospect for the talks to conclude by the end of 2006 with meaningful results. Early 2005 high-level meetings have sought to focus negotiations ahead of the December 2005 Hong Kong ministerial. At the late January 2005 mini-ministerial in Davos, Switzerland, and the subsequent mid-February Trade Negotiations Committee meeting, WTO members generally agreed to focus on six issues in Hong Kong. These six issues are: (1) agriculture, (2) industrial or nonagricultural market access (NAMA), (3) services, (4) trade facilitation, (5) “rules” such as subsidies and antidumping, and (6) development. They also generally agreed that the Hong Kong ministerial’s goal is to set the stage for final negotiations in 2006. Although there is not yet agreement about what this entails, U.S. negotiators report that it is widely accepted that by the time of the Hong Kong ministerial WTO negotiators should seek to finalize “modalities” on agriculture and NAMA—that is, numerical targets, formulas, industrial sectors for potential sectoral agreements, and technical guidelines for countries’ commitments on cutting tariffs and subsidies.
By the Hong Kong ministerial, negotiators should also have made progress in services, market access, and rules discussions and narrowed the focus, and possibly have begun to outline or draft texts on trade facilitation and development issues. These deliverables will be critical in determining how ambitious the Doha Round will be in terms of cuts in tariffs, subsidies, and other barriers to trade, and what the overall balance will be across various issues. Finalizing modalities is also an important interim step before concrete negotiations can occur among WTO members. WTO members had hoped that by mid-July 2005 they would be able to get a sense of how well their balance of issue interests is being met through such means as producing a “first approximation” of the relevant texts or conducting stocktaking meetings on negotiating progress. However, at a late April 2005 TNC meeting, WTO Director-General Supachai expressed concern about meeting these goals, noting that across-the-board progress has fallen short of what is required. He urged greater unity of purpose and warned that without better progress, WTO members could be facing major problems for Hong Kong. At an early May 2005 meeting of the Organization for Economic Cooperation and Development (OECD) in Paris, ministers called for a heightened sense of urgency in the negotiations and expedited preparations for the Hong Kong conference. After the Paris meeting, trade officials from certain WTO members reached an informal agreement on a technical issue—the method for converting specific tariffs to ad valorem tariffs—that was considered significant because it had been blocking progress in the agriculture negotiations for months. Even with the July framework and a successful Hong Kong ministerial, slow overall progress and the Cancun setback mean the Doha Round now is unlikely to conclude before December 2006, 2 years after the originally established deadline of January 2005.
However, past rounds have taken longer than originally planned, and the last two rounds—which involved fewer countries—each took 6 or more years to complete. Experts offer mixed views as to whether this lag is cause for concern. A number of experts we spoke with stressed that the real question is not how long the round is taking, but how ambitious—in terms of liberalization and reform—the Doha Round’s result will be. Some were fairly pessimistic. For example, one USTR and WTO Secretariat veteran termed the progress to date not only pitiful but worrying. Another expert said he did not believe the round was on track for achieving its ambitious liberalization and development objectives and expressed concern because the hardest issues still have not been tackled. As a result, this expert felt that the round would only conclude by December 2006 if work accelerates and political engagement increases. However, other experts said it is too early to give up on the round’s success. One expert stressed that ups and downs—such as build-ups before deadlines and letdowns after missing milestones—are typical in trade negotiations. Another expert noted that failures can often be vital to achieving worthwhile agreements and suggested Cancun was such an event. Both he and another expert indicated that there is still time for the Doha Round to conclude with meaningful results in all key negotiating issues. However, they said there is no more time to spare if a balanced, ambitious package is to be attained because even past rounds have required at least a year and a half of very hard bargaining to conclude. That time is upon us if one works backwards from the July 1, 2007, expiration of any renewed U.S. Trade Promotion Authority. 
Negotiating progress has varied markedly in the six issues designated as key work areas at the upcoming Hong Kong ministerial—(1) agriculture, (2) trade facilitation, (3) industrial (nonagricultural) market access, (4) services, (5) development issues, and (6) rules. Some advances have been clear in two issues advocated by the United States, agriculture and trade facilitation, although negotiations in the latter have just begun. As detailed in appendixes III and IV, very limited progress has occurred so far in two other issues being advocated by the United States—industrial market access and services. Progress has also been limited on two other issues being advocated by other WTO members—development-related issues and rules. Reform of WTO rules remains an area of controversy, with the United States and other users of the trade remedy laws pitted against many other countries over whether to maintain and even strengthen current rules. As detailed in appendix II, negotiators pressed hard in 2004 to make some progress on all three pillars for agricultural reform: (1) export competition, (2) domestic supports, and (3) market access. The centerpiece of WTO member countries’ efforts was the July 2004 framework agreement to remove all export subsidies at a future date. This commitment had long been sought by the United States and other nations, but involved a trade-off: the agreement to negotiate disciplines in other agricultural export competition programs, including U.S. export credit and food aid programs, and state trading enterprises. The framework also set ceilings on certain trade-distorting domestic supports (subsidies), though negotiators will need to further define and set comprehensive reduction schedules for such trade-distorting domestic supports. The framework also establishes the principle that countries with higher trade-distorting domestic supports and tariffs reduce them comparatively more. 
Market access, the third area of reform, proved the most difficult to negotiate. As further explained in appendix II, the July framework established a principle of tiered and harmonized reductions in tariffs, but did not resolve the differences on how this would be accomplished. Negotiators still need to agree on numerous outstanding details if WTO members are to achieve modalities at the December 2005 Hong Kong ministerial. Technical work on issues including tariff rate quota administration, export credit repayment terms, and converting tariffs into ad valorem equivalents has begun. Yet, the months-long stalemate on the last issue frustrated progress until May 2005. Moreover, according to many experts, the big battles that will determine how ambitious the Doha Round will be—over whether and how trade-distorting domestic support categories will be redefined, setting domestic support and tariff reduction formulas, and defining the sensitive and special products that can be insulated from tariff cuts—remain to be fought. WTO members finally agreed in the July framework to formally launch negotiations on trade facilitation (customs reforms). Trade facilitation, together with three other issues—investment, government procurement, and competition policy—had been under consideration and intense debate by WTO members for the past 7 years (since the Singapore ministerial). Trade facilitation is an issue that the United States is very interested in bringing into the trading system in order to establish the transparent and swift customs procedures that are vital to realizing the benefits of market access concessions. The July framework contained agreement by explicit consensus to begin negotiations on trade facilitation and contained an annex specifying the goals, scope, and other understandings associated with their launch. 
Notably, WTO members agreed that “the extent and timing of entering into commitments shall be related to the implementation capacities of developing and least-developed (m)embers.…” WTO members also decided to halt work toward negotiations on the remaining three “Singapore issues” of investment, government procurement, and competition policy for the remainder of the Doha Round. Since the July framework, WTO members created a negotiating group and selected a chair. The group has met several times, and various countries, including the United States, have tabled proposals. According to a U.S. trade official, two potentially difficult issues are dispute settlement and technical assistance to help developing countries defray implementation costs. While WTO members did not set specific goals on trade facilitation for Hong Kong, the United States is hopeful that negotiators can make meaningful progress in evaluating proposals. Some experts we spoke with said that progress on this issue is increasingly seen as a “win-win” proposition for developed and developing countries alike. As detailed in appendix III, thus far WTO members have made little progress in negotiations aimed at securing improved industrial market access, a key U.S. objective in the Doha Round. The July framework for industrial market access established an agenda for discussion and, since July, negotiators have addressed some technical issues. However, disagreement persists over the two main methods being considered for liberalization of trade in industrial goods: the tariff reduction formula and sectoral initiatives that would further reduce tariffs in agreed-upon sectors. Such disagreement is reflected in the lack of consensus over the tariff reduction guidance in the July framework. 
As of late April 2005, disagreement continued over the type of tariff reduction formula to use, the extent of exceptions to the formula that would be available to developing countries, and whether or not sectoral agreements should be included and on what terms. Nevertheless, achieving a meaningful agreement in industrial market access will be essential for the United States. Services liberalization is also a key U.S. objective in which progress is lagging, as discussed further in appendix IV. Initially thought to be a lynchpin of the Doha Round, services talks have taken a back seat relative to other issues. Although several economists and trade experts argue that both developed and developing countries would greatly benefit from services trade liberalization, certain developing countries perceive this goal as a developed country priority. Nevertheless, the inclusion of services in the July framework, on an equal footing with agriculture and industrial market access, represented a victory of sorts and resulted from efforts on the part of both developed and developing country members. Since the July framework, talks on the domestic regulation of services have shown signs of progress. Technical negotiations on market access are also underway but have yet to translate into many new or improved offers in the lead-up to May 31, 2005, the deadline set by the July framework. As a result, WTO members and officials remain disappointed with the number and quality of offers. For example, many developing countries have a keen interest in liberalizing the temporary movement of service professionals, but developed countries have so far shown few signs of movement towards more responsive offers. On development, WTO members are grappling with developing country concerns in the areas of special and differential treatment (S&DT) and implementation of their past WTO commitments in light of the July framework’s calls for decisions by July 2005. 
Conceptual divisions between developed and developing countries, and among developing countries, remain unresolved. They involve such basic issues as whether participating in trade liberalization and abiding by the agreed-upon trade rules is good or bad for development and whether S&DT is an across-the-board right for all developing countries, or an ad hoc privilege available only on a case-by-case basis to meet justified needs, particularly of the WTO’s poorest members. The chair has had only limited success to date in getting members to move to a practical, problem-solving stage. However, as negotiations on agriculture and other market access areas move forward, specific S&DT language is being included. Certain negotiators told us that future progress on S&DT seems increasingly likely to come out of technical negotiations within specific negotiating committees, more so than the Committee on Trade and Development, which examines it as a systemic issue. Review and possible reform of WTO “rules” for trade remedies such as antidumping against unfairly priced imports is prominent and controversial in the Doha agenda, though not in the July framework. Other WTO members, notably a coalition of 15 developed and developing nations known as Friends of Antidumping Negotiations, have advanced numerous proposals for extensive reform of existing trade remedy rules. Some of the proposed reforms target U.S. practices that have also been challenged under WTO dispute settlement procedures. In 2004, WTO members participated in an active schedule of meetings to discuss these proposals in depth. Proponents are pushing to intensify negotiations with a view to having rules be a major component of a Hong Kong package. According to U.S. government officials, the United States remains committed to preserving the effectiveness of trade remedies but wants increased transparency abroad. 
Seven interrelated factors may influence the Doha Round’s progress in resolving substantive differences in the lead-up to the Hong Kong ministerial. First, achieving internal consensus on a balanced package for trade liberalization and successfully negotiating a result that is acceptable to 148 members is an enormously complicated task. Second, formation of coalitions may facilitate consensus building, but developing countries show no signs of taking a less assertive role in pressing their sometimes-competing vision for the WTO’s Doha Development Agenda. Third, U.S. and EU cooperation remains pivotal, but leadership transitions may change relationships. Fourth, analysts agree that action on high-profile WTO dispute settlement cases such as trade remedies and cotton could prove important to ongoing negotiations. Fifth, trade negotiations pursued outside the WTO are widely seen as affecting the Doha Round, though opinion differs on how. Sixth, there are timing considerations, with the mid-2007 expiration of any renewed U.S. Trade Promotion Authority acting as an implicit deadline. Finally, preparation strategy has proved critical to past WTO ministerial success, but there is mixed news on preparations for the Hong Kong ministerial. The complexity of the task itself could make it hard for Doha negotiators to achieve consensus. Several experts and negotiating participants told us that the scope of work remaining is considerable and that the current round is more complex than past rounds because the number of countries actually participating is larger and the issues are, in some sense, unfinished work from prior negotiations. The fact that agriculture had not been addressed for most of the trading system’s first half century was cited frequently as evidence of its thorny nature. The last (Uruguay) round succeeded in the complex challenge of adding agriculture, services, and intellectual property rights to the trading system for the first time. 
The Doha Round is ambitious because it aims to cut subsidies and trade barriers from the Uruguay Round’s high levels. In industrial goods, the Doha goal of having all members conform to specific methods for liberalizing tariffs on all products differs from past practice of relying primarily on member-to-member bargaining to secure tariff cuts. (Past practice did result in substantial liberalization, but left in place high barriers on some goods and in some countries.) The diversity of economic costs and benefits also makes the task complex. Studies emphasize that both developed and developing countries are positioned to benefit from the Doha Round, but individual countries face varying economic incentives that could affect their willingness to compromise on issues at the Hong Kong ministerial. The Doha talks have been fueled by the premise that international trade can positively benefit a country’s overall growth and development. As discussed more fully in appendix V, a number of expert studies have emerged in response to the negotiations that estimate potential worldwide economic gains exceeding $100 billion under an ambitious liberalization scenario. However, the distribution of economic gains may vary within and between countries, creating perceived winners and losers. For example, several studies estimate economic losses from agricultural liberalization for regions that are large net importers of food, such as North Africa and the Middle East, because the removal of developed country subsidies may increase world food prices. Other experts point out that for countries receiving preferential trade access the estimated economic benefits from worldwide trade liberalization may not reflect export losses from erosion of those preferences. Potential losses in tariff revenue may also be a concern to certain developing countries that heavily rely on trade taxes for government financing. 
In April 2004, to assist developing countries with potential adjustment costs to trade liberalization, the IMF introduced a new lending program called the Trade Integration Mechanism (TIM). Coalitions of WTO members have been a factor in both leading and preventing movement forward in the Doha negotiations. At Cancun, the large number of participants proved unwieldy and the unexpected emergence of developing country coalitions challenged traditional ways of negotiating. Since then, country coalitions have matured and now advance common priorities of many types. See appendix VI for a depiction of some major groups of countries and their negotiating interests. Developing countries in particular have become more active and influential, according to various participants. A number of ad hoc groups have arisen around other issues. For example, the Colorado Group has led discussion on trade facilitation issues; a variety of “friends” groups have formed to advocate positions in the services negotiations; and the Friends of Antidumping Negotiations group has pressed for changes in the antidumping agreement. This mode of operation has been particularly valuable to developing country members, which sometimes cannot afford to maintain enough staff in Geneva to attend all negotiating sessions that interest them. By reaching an agreement on negotiating proposals within groups, coalitions also help to overcome the difficulty of creating consensus in an organization as large as the WTO. By the same token, they may strengthen opposition to proposals that some members might not otherwise care about. Country coalitions also have other drawbacks, according to several participants—they cannot be relied on exclusively as interlocutors because country interests vary and not every country is included; internal communication is critical, but sometimes breaks down; and coalitions’ efforts to forge common positions may leave little room for negotiating maneuver. 
Developing countries are not monolithic in their interests, but there is still some evidence that developing and developed countries have competing visions of the Doha Development Agenda’s promise and that satisfying developing countries’ expectations may be difficult—factors we identified as challenges in prior reports. While developed countries tend to stress the development benefits projected to accrue from agriculture reform and trade liberalization, developing country coalitions, in various formations, have continued to emphasize the need for special and differential treatment. The largest group of developing countries, the Group of 90 (G-90), has advanced specific special and differential treatment proposals, protection against erosion of trade preferences, and trade facilitation approaches that address implementation costs and capacity building issues. However, satisfying these demands—without prejudicing the interests of other developing countries—has proven difficult. In addition, four least-developed African cotton-producing countries successfully lobbied in July 2004 for a special focus on cotton within agricultural negotiations, but have expressed dissatisfaction with progress attained since then and called for decisive action by Hong Kong. Despite more active and positive participation by developing countries, 2004 also demonstrated that leadership and cooperation by the United States and the EU remains essential. A special relationship between U.S. and EU leaders contributed to the Doha ministerial’s success and to the July 2004 package. But the U.S.-EU trade principals have changed since then. Two very important participants in the negotiations, who played pivotal roles in launching the round in 2001 and reviving the Doha negotiations in 2004 after Cancun, U.S. Trade Representative Robert Zoellick, and the EU’s Trade Commissioner, Pascal Lamy, are both out of those offices. In mid-March 2005, the President named a new USTR, who assumed office on April 29, 2005. 
In the interim, continued direction by the Acting USTR kept the United States engaged in negotiations. However, the relationship that develops among new U.S.-EU leaders could influence Hong Kong’s success. Their will to lead is also vital. Over the coming months, the United States will face important tests of its trade leadership, such as potentially divisive domestic debates over the Central American Free Trade Agreement (CAFTA), competition from China, TPA renewal, and continued U.S. WTO membership. The EU, meanwhile, has made some statements that suggest it “gave most” in 2004 and thus is expecting others to reciprocate with ambitious services and industrial market access offers. The WTO has been in the midst of selecting a replacement for the position of Director-General (DG). Three WTO committee chairs are personally conducting the vetting process whereby those candidates with the least support from the members are expected to withdraw voluntarily. The last DG selection became so contentious along North-South lines that the job ultimately had to be shared by dividing the DG’s six-year term between two candidates—Mike Moore of New Zealand and the current DG, Supachai Panitchpakdi of Thailand. To avoid a similar situation, WTO members agreed to a selection process and timetable. Mr. Supachai’s term ends on August 31, 2005; by May 31, WTO members aim to select a new Director General who will assume the DG’s position in September, just three months before the Hong Kong ministerial. A smooth transition is necessary to ensure members can concentrate on the difficult negotiations needed to achieve results at Hong Kong. (It appears that a new DG has been selected—France’s Pascal Lamy—and that the process worked well, avoiding a contentious North-South divide. Specifically, on May 13, 2005, the General Council Chair informed WTO delegations that Mr. 
Lamy had received the broadest support from the WTO members and that therefore she would recommend that WTO members appoint Mr. Lamy as the next Director General of the WTO starting September 1, 2005. On May 26, 2005, the WTO General Council officially named Mr. Lamy the next Director General. Welcoming the move, current WTO Director General Supachai pledged to “make every effort to move the Doha Development Agenda negotiations as far as possible to ensure that we are well positioned for our Hong Kong ministerial conference in December.”) WTO disputes often have little day-to-day impact on negotiations, but several ongoing disputes may affect the negotiating atmosphere leading to Hong Kong. In recent months, Brazil won two high-profile cases against the United States and the European Union. Both rulings are expected to influence the Doha agriculture negotiations. In March 2005, the WTO Appellate Body upheld a panel finding against U.S. cotton subsidies, stating that certain types of current U.S. domestic supports result in significant price suppression in world markets. The United States has informed the WTO that it intends to come into compliance and is now consulting with Congress and stakeholders about possible reforms. The European Union has vowed to reform its sugar sector in the wake of an adverse WTO ruling, but is facing challenges to its proposals to reform its banana regime to conform with another adverse ruling. The United States is also facing calls to bring its trade remedy laws and actions into conformity with adverse WTO rulings. With the EU and Canada both imposing millions of dollars in retaliation starting in May because the United States has not repealed the Continued Dumping Subsidy Offset Act (also known as the Byrd Amendment), there is a risk of a negative spillover into the Doha negotiations. In part to avoid a similar situation, the United States and the EU have been trying to resolve their dispute over aircraft subsidies. 
Although WTO members and experts have divergent views on the effects of the numerous free trade negotiations that take place outside of the WTO, they widely agree that the negotiation of preferential trade agreements (PTAs) has an impact on multilateral trade talks such as the Doha Round. The Bush administration has actively pursued PTAs as part of its trade liberalization strategy, and more generally, these extra-WTO agreements have flourished worldwide since the mid-1990s. Proponents of PTAs claim that they offer opportunities for achieving deeper and faster liberalization than is possible in the WTO by allowing members to negotiate with subgroups of like-minded countries. Once in place, they argue, PTAs can demonstrate the benefits of freer trade to nonmembers, thereby encouraging greater multilateral liberalization. In contrast, opponents claim that the rising number of PTAs increases the administrative and legal complexity of international trade and adds to the difficulty of building an open, rules-based trading system. After weighing many of the arguments in its report on the future of the WTO, a Consultative Board to the Director General recently stated that there is “real reason to doubt that the pursuit of multiple PTAs will enhance, rather than undermine, the attractiveness of multilateral trade liberalization—at least in the short and medium term.” Among other objections, the Board expressed concern that such agreements are diverting skilled and experienced negotiating resources and reducing enthusiasm for the Doha Round. Timing considerations are also relevant. WTO negotiators are keenly aware that the United States will consider revamping comprehensive farm legislation slated to expire in 2007 and want to make sure it includes WTO-agreed reforms. Moreover, the duration of U.S. Trade Promotion Authority is, in effect, operating as an implicit deadline for concluding the Doha Round, according to numerous participants and experts. 
If Congress renews TPA in mid-2005, the Doha Round agreement would be eligible for approval under TPA provided it was signed by the President by June 30, 2007. However, the President must fulfill a number of procedural requirements and meet certain time frames established by TPA. Thus, the WTO Doha negotiations would need to conclude by the end of December 2006 to meet TPA’s statutory requirements. If the Doha Round agreement required no changes to trade remedy laws, the effective deadline could change to the end of March 2007. A preparation strategy has proved to be critical to WTO ministerial success (Doha) and failure (Cancun and Seattle) in the past, but there is mixed news on preparations for Hong Kong. Ministerials are important because unlike political summits or annual meetings of other international organizations, actual negotiations occur and decisions are made to enable future work. Indeed, ministerials are the only occasion when trade ministers of all WTO members gather to provide high-level political direction. As noted above, the December 2005 Hong Kong ministerial is pivotal so that final bargaining on cuts in subsidies and tariffs can occur and a Doha package can be finalized by the end of 2006. On the positive side, although Ministers at Hong Kong will face a complex and full agenda, WTO members are trying to narrow differences and clarify options prior to the ministerial. Moreover, there is general agreement on which issues will be discussed and on the concrete deliverables desired. In late January and mid-February 2005, WTO members agreed that they would aim to make concrete progress by July on a Hong Kong package. In March 2005, WTO members agreed on a work plan. On the negative side, April 2005 meetings and our issue-by-issue analysis suggest that wide substantive differences persist and progress in bridging them is lagging. Moreover, WTO ministerials have inherent limits and drawbacks in resolving such differences. 
First, ministerials can get out of hand if too many unresolved issues are presented or if politically charged issues dominate. Second, the glare of the public spotlight can make compromise difficult. WTO ministerials are large, public events that can involve high-profile confrontations over politically sensitive issues (e.g., labor at Seattle, Trade Related Aspects of Intellectual Property Rights (TRIPS) and Public Health at Doha, cotton at Cancun). The atmosphere surrounding the July 2004 framework was markedly different, in part because WTO negotiators operated outside public view. Third, there has been no change in the process for conducting ministerials, which is, by all accounts, unclear and sometimes chaotic. Past experiences at Cancun and Seattle have shown the risk associated with this situation. Taking into consideration that two of the last three WTO ministerials ended in failure, we have noted some positive developments in the current WTO negotiating environment compared to that just before Cancun. For instance, the July framework represented progress, and since the July 2004 framework, there has been significant activity and positive engagement by all member countries, including developing countries. Members are very aware of the tight deadlines and work remaining prior to the Hong Kong ministerial. If they are successful in meeting their goals for interim progress, the risk of arriving in Hong Kong with an overly full agenda will be reduced. However, as we pointed out in the report, the ministerial faces a number of potential challenges—and some risk of falling short of its ambitious goals without a greater sense of purpose, according to WTO Director-General Supachai’s latest assessment. Furthermore, issue progress requires compromise, but substantive movement toward convergence is still not evident in most areas. Agriculture remains central to the round. 
Despite some progress, developed country commitments to undertake painful agricultural reform are at least partly contingent on movement on market access. Yet, technical talks on market access are bogged down, and meetings have only recently broken the impasse. Moreover, even with recent proposals, there is scant evidence that key countries are willing to make commitments to liberalize access to their markets for industrial goods and services. But cutting barriers from today’s high levels will be the source of any projected gains from the Doha Round to rich and poor countries alike—and is deemed vital to achieving balanced results. Deadlines for deciding development issues loom in July 2005, but discussions on outstanding proposals have yet to become fruitful. The United States, meanwhile, is facing tests of its trade leadership at home and calls by other WTO nations for urgent action on cotton, as well as greater receptivity to difficult demands in services and antidumping. With an effective deadline of December 2006, the question is whether the rest of 2005 will see sufficient progress to enable final agreement on a package that offers gains to all WTO members. Some experts remain optimistic that the Doha Round can deliver its promised benefits. Others say tough decisions are necessary for progress and warn time is short given the substantial work remaining. We requested comments on a draft of this report from the U.S. Trade Representative, the Secretary of Agriculture, the Secretary of Commerce, and the Secretary of State, or their designees. The Assistant U.S. Trade Representative for WTO and Multilateral Affairs and other USTR staff indicated general agreement with the report, but provided us with several technical comments, which we incorporated as appropriate. 
The Department of Agriculture’s Foreign Agriculture Service agreed with our report’s factual findings and analysis, but provided several technical comments, including data on non-ad valorem tariffs, which we incorporated as appropriate. The Department of State’s Director of Multilateral Trade, Bureau of Economic and Business Affairs, indicated agreement with GAO’s findings and analysis, and provided a technical comment, which we incorporated. The Department of Commerce provided written comments, indicating that “GAO analysts have focused on the essential pieces of the negotiating puzzle” and “accurately portrayed the broad state of progress and existing negotiating tensions in the key areas” (see app. IV). In addition, the Deputy Assistant Secretary for Agreements Compliance and other Commerce staff provided us with oral technical comments on the draft, which we incorporated as appropriate. We are sending copies of this report to interested congressional committees, the U.S. Trade Representative, the Secretary of Agriculture, the Secretary of Commerce, and the Secretary of State. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4347. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. The Chairman of the Senate Committee on Finance and the Chairman of the House Committee on Ways and Means asked us to assess (1) overall progress in the WTO Doha Round of negotiations, (2) progress in specific negotiating areas, and (3) factors affecting progress. We followed the same overall methodology to complete all three of our objectives. We obtained, reviewed, and analyzed documents from a variety of sources. 
From the WTO, we analyzed the 2001 Doha Ministerial Declaration, the Doha Work Programme Draft General Council Decision of 31 July 2004, known as the “July framework,” as well as numerous negotiating proposals from WTO member countries and other documents. From U.S. government agencies and foreign government officials, we obtained background information and documentation regarding negotiating proposals and positions. We also obtained information on day-to-day developments from reputable trade publications. We met with officials from key U.S. government agencies, including the Department of Agriculture, the Department of Commerce, the Office of the U.S. Trade Representative, the State Department, and the Department of the Treasury, to obtain perspectives on progress in the negotiations overall and in individual issue areas, as well as on factors affecting the negotiations. The State Department arranged meetings with several of its country desk officers to provide us with perspectives on key WTO member nations. We also met with trade representatives from developed and developing countries located in Washington, D.C., including Australia, Brazil, Canada, Chile, Costa Rica, the European Union, Guyana, Japan, Malaysia, New Zealand, Norway, Singapore, South Korea, and Switzerland. Further, we met with private-sector representatives from specific business sectors, including the American Sugar Alliance, the National Association of Wheat Growers, the National Corn Growers Association, the Coalition of Services Industries, the National Association of Manufacturers, and the Zero Tariff Coalition.
We met with nongovernmental organizations (NGOs), including Oxfam America and the Carnegie Endowment, and with trade experts from institutions including the United Nations Conference on Trade and Development (UNCTAD); Georgetown University; the Cato Institute; the Institute for International Economics; the American Enterprise Institute; the World Bank; Columbia University; the University of Toronto; the Manufacturers Alliance; the Institute for International Business, Economics, and Law at the University of Adelaide, Australia; White and Case; and C&M International. To illustrate tariff profiles for examples of developed and developing countries, we reviewed international tariff and trade data from the World Bank’s World Integrated Trade Solution (WITS) database, which contains member-supplied data from the WTO and the United Nations. Though these organizations are limited in their ability to verify official country data, we concluded that the data are sufficiently reliable for the purposes of our analysis, based on accuracy checks regularly performed on the database and its wide usage in the negotiations. Prior to the July 2004 mini-ministerial, with the assistance of USTR and the State Department, we traveled to WTO headquarters in Geneva to obtain foreign government, private sector, and nongovernmental organization views on progress. We followed this initial visit to Geneva with another trip in late June and early July 2004 to meet with U.S. and WTO officials and observe the Trade Negotiations Committee negotiations; a visit in September 2004 to obtain official reactions to the July framework; and a mid-April 2005 update. The series of visits to Geneva resulted in interviews with WTO member country officials from developed and developing countries, including Australia, Brazil, the European Union, India, Jamaica, Japan, and Singapore.
We also met with WTO officials, including the agriculture, industrial (nonagricultural) market access, services, development, and trade facilitation negotiating group chairs. In total, we conducted more than 130 interviews with negotiators and trade experts. We performed our work from March 2004 through April 2005 in accordance with generally accepted government auditing standards. Given the importance of agriculture in the Doha Round negotiations, coalitions of countries regrouped in 2004 and focused on making progress on the three pillars of agricultural reform: export subsidies, domestic supports, and market access. The most notable achievement thus far has been agreement in the July 2004 framework to remove all export subsidies at some future date. The framework also set ceilings on certain trade-distorting domestic support categories. However, disagreement persists over how to define such categories and set reduction schedules, as well as how to improve market access through a tariff reduction formula and the definition of sensitive and special products that can be insulated from tariff cuts. The May 9, 2004, EU letter from Pascal Lamy and Franz Fischler to WTO member countries offered to eliminate all export subsidies--with no products excluded--if suitable agreements were reached on market access and domestic support. This offer was warmly welcomed by member countries; for decades, the United States and other countries have advocated completely eliminating export subsidies. Lamy and Fischler conditioned their offer on what they termed “full parallelism,” meaning the commitment to eliminate all export subsidies is linked to establishing new disciplines in other export competition programs, including U.S. export credit and food aid programs, as well as export state trading enterprises.
Country officials told us, and we agree, that the move reinvigorated negotiations because the European Union had previously offered only the substantial reduction and elimination of export subsidies for certain products, not total elimination. The EU’s offer, valued at about US $9 billion, meant other countries with substantial export competition programs, such as the United States, would need to agree to undertake disciplines on them. The July framework envisions new disciplines on export credits, food aid, and state trading enterprises. The framework is likely to force the substantial restructuring of U.S. export credit programs, trade officials say, and our analysis supports this conclusion. For example, the July framework language stipulates that export credit programs may not have financing repayment periods of longer than 180 days. The main U.S. export credit programs, General Sales Manager (GSM)-102 and GSM-103, have repayment periods of 6 months to 3 years and of up to 10 years, respectively. All food aid programs are subject to scrutiny and could be subject to new disciplines, with certain U.S. programs the focus of international attention, country officials and trade experts told us. The European Union and many African nations advocate that food aid be made only in grant form. They also want to make sure food aid is not a mechanism for surplus disposal when commodity prices are low and commodity stocks are high, because this can trigger commercial displacement. Such an agreement would have implications for the United States’ Title I P.L. 480 food aid program, which provides long-term, low-interest loans to developing countries for their purchase of U.S. agricultural commodities, and the Section 416(b) food aid program, which authorizes USDA to donate surplus agricultural commodities overseas. As a result, the U.S.
successfully sought changes in a July 16 draft text for the framework agreement, which had called for disciplines to “ensure that food aid is not used as a mechanism for surplus disposal and to prevent commercial displacement.” However, the framework text agreed upon in late July makes no such mention of surplus disposal. Instead, it indicates that there will be future discussions on “providing food aid exclusively in fully grant form.” Finally, the framework calls for disciplines to remove the export subsidy components of state trading enterprises, including the government financing of such programs and the underwriting of their losses. U.S. goals for the negotiations reflect long-held concerns about the exercise of monopoly power over imports and exports through these institutions. As a result, Canada and Australia are likely to face tighter disciplines on their wheat state trading enterprises, trade officials and experts told us. Many developing and developed countries are seeking substantial reductions in developed country trade-distorting domestic support programs because these programs can reduce world prices and displace otherwise competitive producers from world markets. The European Union and the United States in 2001 together accounted for the majority of global spending on trade-distorting domestic supports. The U.S. has publicly stated it would significantly reduce its trade-distorting domestic support spending if other WTO member nations agree to ambitious outcomes in other areas, such as market access. In July, WTO members agreed that the eventual Doha Round agreement would contain a strong element of harmonization in reductions of trade-distorting domestic support programs by developed countries, with those countries with larger subsidy programs cutting more. This dovetails with U.S. aims in the domestic supports pillar, since the European Union still outspends the United States.
The framework sets ceilings on certain kinds of trade-distorting domestic supports and calls for the capping and future reduction of others. The July framework also called for a substantial reduction in the overall level of trade-distorting support from bound levels. To examine how these broad guidelines could affect existing European Union and United States programs, we reviewed the various categories of domestic supports, which the WTO classifies into “boxes”: amber, blue, green, and de minimis supports. Figure 3 describes the categories of WTO-recognized domestic support programs. The WTO classifies agricultural domestic support into main categories identified by traffic-light color-coded “boxes” that range from most to least trade-distorting: red, spending that is not permitted; amber, domestic supports that are production- and trade-distorting, the total value of which was capped and then reduced; blue, production-limiting subsidies that have marginal trade-distorting effects; and green, supports that are non- or minimally trade-distorting and thus permitted. De minimis is a category that captures other domestic supports, including market price support measures, direct production subsidies, or input subsidies. There is no requirement to reduce de minimis trade-distorting domestic support for any year in which the aggregate value of the product-specific support does not exceed 5 percent of the total value of production of the agricultural product in question. In addition, non-product-specific de minimis support that is less than 5 percent of the value of total agricultural production is also exempt from reduction. Trade-distorting support comprises a country’s expenditures on Amber Box, Blue Box, and de minimis supports; in other words, it does not include Green Box measures.
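The de minimis exemption just described is a simple percentage test against the value of production. The sketch below illustrates the test for product-specific support ("does not exceed 5 percent"); the 5 percent threshold comes from the text above, while the dollar figures are purely hypothetical.

```python
def de_minimis_exempt(support, production_value, threshold=0.05):
    """Return True when a support amount does not exceed the
    de minimis share (5 percent in the cases described above)
    of the relevant value of production."""
    return support <= threshold * production_value

# Hypothetical figures: $40 million of product-specific support
# against $1 billion of production for that commodity falls
# under the 5 percent line and is exempt from reduction;
# $60 million against the same base is not.
print(de_minimis_exempt(40e6, 1e9))  # True
print(de_minimis_exempt(60e6, 1e9))  # False
```

Note that for non-product-specific support the text uses "less than 5 percent," a strict inequality, so a faithful implementation of that case would use `<` rather than `<=`.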
For Amber Box supports, the most trade-distorting category, the July framework calls for final bound thresholds to be reduced substantially, using a tiered approach whereby members with more substantial support programs will be placed in higher tiers and forced to cut more. As illustrated in figure 4 below, this provision will narrow the difference between the levels the United States and European Union are authorized to spend and the amounts they actually spend. In absolute terms, the European Union spends substantially more on Amber Box programs than the United States and accounts for more than half of the total amount notified by the 30 WTO members that use such domestic supports. The U.S. is permitted to spend less than one-third of what the EU is permitted. In recent years the European Union spent just over half of what it is permitted to spend on these trade-distorting domestic supports, and its actual spending has declined. By contrast, trade experts and officials told us that other countries are concerned about U.S. domestic subsidy programs due to the United States’ trend of increased spending. The United States has supplied official WTO notifications through 2001 that indicate its Amber Box program spending was within established WTO limits, but its actual spending on Amber Box supports grew from $6.2 billion in 1995 to $15.6 billion in 2001, the most recent year for which data are available. Furthermore, as we reported in our January 2004 report, the 2002 Farm Bill could increase U.S. agricultural support spending and shift its composition. Specifically, the 2002 Farm Bill created a new category of domestic support programs, dubbed “countercyclical payments,” which are income support payments to farmers when the market price for a covered commodity falls below a legislatively set target price.
As a result, the United States has pushed in the WTO Doha Round for a redefinition of the WTO Blue Box--a category in which it currently does not spend--so that, as long as the Blue Box exists, it has greater flexibility to count other, less trade-distorting programs against its current, unused ceiling for this category of domestic supports. The July framework language regarding the Blue Box was favorable to the United States, trade officials and experts told us. It redefines the Blue Box to allow direct payments that do not require production limitations if based on certain criteria. This has met with sharp resistance from the G-20 and other WTO members that seek significant reductions in all forms of trade-distorting domestic supports. These members are concerned that by allowing the United States to place its countercyclical payments in the redefined Blue Box, the United States will not be forced to reduce its trade-distorting domestic support programs and could in fact increase its total sum of trade-distorting domestic support. As recently as March 2005, the G-20 called for further disciplines on price-linked supports in the provisionally redefined Blue Box to allow compensation for some, but not all, of the difference between market and target prices, among other proposals. The July framework calls for a cap on the Blue Box of 5 percent of the value of agricultural production, with a base period of historical spending patterns to be determined. This could affect the European Union, trade officials and experts told us, which in 2001 spent 23.7 billion euros in Blue Box supports, or 9.6 percent of the value of its total agricultural production.
To ensure ambitious cuts in domestic support, the July framework also calls for a substantial reduction in overall trade-distorting support, specifically the sum of Amber Box spending as measured by “Final Bound Total AMS,” Blue Box payments, and de minimis programs—with a 20 percent cut to be made in that total in the first year of implementation. However, the specific extent of reductions was left to future negotiations. Finally, on non- or minimally trade-distorting “Green Box” domestic supports, the framework called for a review, but not a capping or cut of these supports. The G-20 has charged that certain current Green Box direct payments to producers contradict the Green Box criteria of being non- or minimally trade-distorting. The United States and the European Union have resisted caps and cuts, but agreed to examine concerns about abuse. The United States spent about $51 billion in these types of supports, according to its 2001 notification to the WTO, the most recent year that data are available. Market access remains the most difficult pillar of the negotiations, country officials and experts told us. Major agricultural exporters including Canada, Australia, and Brazil want to expand their overseas markets. The United States is the world’s largest exporter of agricultural products, is a highly competitive producer of many products, and has significant offensive interests in this area. The United States has conditioned domestic support cuts on gains in market access. However, many developing countries have resisted liberalization, arguing they do not have the means to subsidize exports or domestic production, and that tariffs are their only source of leverage and protection in the agricultural negotiations. Though the July 2004 framework states that a numerical formula will be used to cut tariffs from current bound rates, countries differ strongly over the type of formula they prefer. 
The methodology for converting specific tariffs into ad valorem equivalents, upon which the tariff reduction formula would be applied, also frustrated progress in the market access negotiations for months. Such differences are based on the widely divergent tariff profiles among WTO members. Specifically, several studies by the Organization for Economic Cooperation and Development and the World Bank find that for agricultural goods, developed countries tend to have lower average bound and applied tariffs. However, developed countries have a greater percentage of specific (non-ad valorem) tariffs and tariff peaks. The products where developed countries have specific tariffs tend to be those with high levels of protection, and the products where they have tariff peaks tend to be those of export interest to developing countries. In contrast, developing countries have uniformly higher bound tariffs, though currently applied tariff rates tend to be far lower than bound tariff rates and specific tariffs are rare. To illustrate these different tariff profiles, table 1 provides weighted average tariff rates on agricultural goods for a selected set of countries and products. Due to member differences over the methodology for calculating ad valorem equivalents, the data exclude specific tariffs. In line with these general patterns, the table shows that developed country members such as the United States, the European Union, and Japan have relatively low average bound and applied ad valorem tariff rates that range from around 2 percent to 7 percent. However, by excluding specific tariff rates, the table does not show the full extent to which these countries protect their agricultural sectors. According to the World Bank, the European Union, for example, has specific tariff rates on 44 percent of its agricultural product lines. A 2001 study by the U.S.
Department of Agriculture employed one methodology for converting specific tariffs into ad valorem equivalents and estimated that the non-trade-weighted average tariff rate for agricultural goods in the United States, the EU, and Japan was 12 percent, 30 percent, and 58 percent, respectively. Additionally, for the example products of dairy, fruits and nuts, and tobacco, the United States, the European Union, and Japan have relatively high tariffs and a large share of international peaks. The United States’ average tariffs in the tobacco sector are extremely high, at around 71 percent. Developing countries such as India, Indonesia, Kenya, and Venezuela have much higher average bound tariffs, ranging from 54 percent in Indonesia to 126 percent in India. However, in each of these cases, there are substantial gaps between the bound and applied tariff rates. The contrast between developed and developing country tariff profiles has fueled a sharp debate over what formula to use for tariff reduction, country officials and trade experts told us. Some developed countries, including the United States, have advocated a harmonizing formula, called a Swiss formula, that would reduce high tariffs by a larger percentage than low tariffs. Developing countries, particularly net exporters with high bound tariffs such as Brazil, want more flexibility than the Swiss formula would offer. As an alternative, they advocate a banded approach, which divides tariffs into a series of bands and applies an average tariff reduction within each band. The banded approach would apply larger reductions to higher tariff bands--thereby addressing developed country tariff peaks--but would be less harmonizing than a Swiss formula. A blended approach combines elements of both Swiss formula reductions and linear reductions. The United States was among a handful of nations that in June 2004 penned and circulated a draft market access white paper attempting to strike a compromise.
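The harmonizing property of a Swiss formula can be illustrated numerically. In the commonly cited form, the new bound rate is a*t/(a+t) for initial tariff t and coefficient a, so no tariff ends up above the coefficient. The coefficient of 25 used below is purely illustrative, not a negotiated figure.

```python
def swiss_cut(t, a):
    """Swiss formula: new bound tariff from initial tariff t and
    coefficient a (both in percent). Every result stays below a."""
    return a * t / (a + t)

# Illustrative coefficient of 25: high tariffs are cut by a far
# larger percentage than low ones, the harmonizing effect that
# developed members sought and high-tariff members resisted.
for t in (10, 50, 100):
    new = swiss_cut(t, 25)
    print(f"{t:3d}% -> {new:5.1f}%  (a {100 * (1 - new / t):.0f}% cut)")
```

With this coefficient, a 10 percent tariff falls to about 7.1 percent (a 29 percent cut), while a 100 percent tariff falls to 20 percent (an 80 percent cut), which is why members with high bound rates sought flexibilities such as banded approaches instead.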
The paper called for a different type of tiered formula approach, where within each band a certain percentage of bound tariffs would be cut by a Swiss (harmonizing) formula and a certain percentage of bound tariffs would be cut by a linear percentage. A certain number of tariff lines would be exempt from either a Swiss or linear cut; instead, liberalization would be handled through tariff rate quota increases. This in effect allows member countries to shield themselves from substantial reduction commitments for certain products by self-designating them as “sensitive products.” Continued disagreement on the tariff reduction formula is significant because variations in the type of formula could produce widely different outcomes. Recent studies indicate that for developed countries, the banded approach reduces applied tariff rates in some instances more than the blended approach. These studies further indicate that the blended approach could have a greater impact in reducing bound rates in developing countries due to the homogeneity of their bound rates at relatively high levels. However, irrespective of the type of tariff reduction formula chosen, the degree of liberalization will be strongly affected by the degree of ambition within the formula, as determined by the coefficients, and by the exceptions to the formula through sensitive product designation. Negotiators had hoped to agree on the formula to cut tariffs in July 2004 but were unable to do so. Instead, they agreed that (1) the future formula will be a single approach for developed and developing countries; (2) the future formula will be tiered, with progressive reductions achieved through deeper cuts in higher tariffs; and (3) all WTO members will have some flexibilities in applying cuts to sensitive products. Under the framework, increased market access on sensitive products will be achieved through expanded tariff rate quotas and tariff reductions.
Sensitive products and special products--whereby developing countries are allowed to declare additional products exempt from standard reductions under certain criteria, such as rural development or food security needs--are likely to be among the most contentious battles going forward, trade officials and experts told us. The G-20 and other negotiating groups have stressed that the exceptions for sensitive products--whereby countries are permitted to declare certain key commodities as sensitive and exempt them from standard tariff reduction schedules--are at odds with the liberalizing mission of the Doha Round. Sensitive product exceptions could be used to protect developed country tariff peaks, these countries say, and greatly undermine the ambitious nature of any agreement. Negotiators report that during the period between Cancun and the July framework, members avoided discussing differences in industrial market access (nonagricultural market access, or NAMA) so that they could focus on the agricultural negotiations. As a result, while the negotiating atmosphere has improved, the July framework represents a lack of movement on key issues in the industrial market access negotiations relative to Cancun. In fact, the framework consists simply of the text that was circulated in Cancun with the addition of a paragraph stating that agreement on substantive elements of the text had not yet been reached. While negotiators are using the framework as an agenda for discussion, the framework lacks both consensus and specificity on the two main methods being considered for liberalization of trade in industrial goods--a tariff reduction formula and sectoral initiatives--as well as the flexibilities that developing countries will be offered in applying these methods. As of spring 2005, consensus on these substantive issues had not yet been reached. WTO members remain divided over the tariff reduction formula and its application.
The July framework suggests a nonlinear formula, to be applied line by line to bound tariff rates, with the aim of reducing or eliminating tariff peaks and tariff escalation. Despite the framework’s disclaimer that agreement on the formula had not been reached, negotiators we spoke with indicated that members have generally accepted the idea of a nonlinear formula. Nonetheless, there remain strong differences over countries’ preferences for the type of nonlinear formula chosen and the formula coefficients. The July framework also suggests a variety of ways in which special and differential treatment could be provided. Negotiators we spoke with suggest that members agree that least developed countries (LDCs), as well as countries with a low percentage of bound tariffs, can be exempted from reducing their tariffs through a formula, but the degree to which other developing countries can exempt products from the formula and qualify for longer implementation periods remains controversial. Country preferences for the formula and the application of special and differential treatment provisions continue to reflect those advocated prior to Cancun and are largely based on the varying tariff profiles among WTO members. Similar to conditions in agriculture, tariff profiles for nonagricultural goods suggest that (1) developed countries have bound almost all of their tariffs at relatively low levels, though certain products are characterized by tariff peaks; (2) products where developed countries have high tariffs tend to be among those of export interest to developing countries, such as textiles and apparel or leather and footwear; and (3) developing countries, in many but not all cases, have limited tariff bindings and relatively high bound tariffs, though currently applied tariff rates tend to be far lower than bound tariff rates.
Developed country members that have relatively low tariffs want significant tariff liberalization in order to access new markets in developing countries that have relatively high tariffs. The United States, for example, is strongly pressing for an industrial market access agreement that would effectively lower tariffs in key developing countries, which assess an estimated 71 percent of foreign duties on U.S. manufactured exports, according to the National Association of Manufacturers. To achieve this result, the United States, the EU, and other developed country members, as well as some developing country members that have autonomously liberalized in the past, continue to support a Swiss-type formula--a harmonizing nonlinear formula that would reduce high tariffs by a larger percentage than low tariffs. Such a formula would also address tariff peaks and escalation. To account for special and differential treatment, the United States has proposed that developing countries could apply a different coefficient within the Swiss formula than developed countries, implying more moderate liberalization. The EU and Norway have proposed a “credit-based approach” whereby the flexibility in the formula coefficient for developing countries would be determined uniquely for each country based on credits for, for example, a commitment to apply the formula without exception or participation in sector agreements. In contrast, some developing countries emphasize that due to their higher average tariff rates, harmonizing formulas that reduce higher tariffs more than lower tariffs would result in greater percentage cuts for developing countries than for developed countries--a result that they argue contradicts the principle of special and differential treatment.
As such, they continue to support a Girard-type formula--a nonlinear formula, proposed by the former chair of the industrial market access negotiating group, that is based on each country’s average tariff rate and allows countries with higher initial tariffs to reduce those tariffs at a lesser rate than countries with lower initial tariffs. They also support a more extensive application of special and differential treatment exceptions such that developing countries can maintain the flexibility to pursue industrial policies to promote the growth of new industries and protect themselves against some of the adjustment costs of ambitious liberalization commitments. Continued disagreement on the tariff reduction formula is significant because variations in the type of nonlinear formula chosen, the formula coefficients, the treatment of unbound tariffs, and the exceptions to the formula could produce widely different outcomes. For example, both the World Bank and UNCTAD have analyzed the Swiss and Girard nonlinear formulas using hypothetical coefficients and have found that (1) Swiss formula reductions tend to be larger than Girard formula reductions, particularly for the high tariff rates found in developing countries; (2) while effective at reducing developed country tariff peaks, the Girard formula may also entail greater tariff cuts than the Swiss formula for developing countries that have lower average tariffs resulting from autonomous liberalization; and (3) the wide wedge between bound and applied tariff rates in developing countries limits the amount of trade liberalization achieved through any formula.
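The contrast between the two approaches can be sketched numerically using the commonly cited forms of the formulas. In a Girard-type formula the coefficient is tied to each country's own average bound tariff, so a high-average country retains more protection than under a common Swiss coefficient. The coefficients and country averages below are purely hypothetical, chosen only to make the mechanics visible.

```python
def swiss_cut(t, a=25.0):
    """Swiss formula: a common coefficient a applies to all members."""
    return a * t / (a + t)

def girard_cut(t, t_avg, b=1.0):
    """Girard-type formula: the effective coefficient is b times the
    country's own average bound tariff t_avg, so members with higher
    averages face proportionally smaller cuts."""
    return b * t_avg * t / (b * t_avg + t)

# Two hypothetical countries cutting the same 100% bound tariff line:
high_avg = girard_cut(100, t_avg=30)  # country with a 30% average keeps ~23.1%
low_avg = girard_cut(100, t_avg=5)    # country with a 5% average keeps ~4.8%
common = swiss_cut(100)               # 20.0% for either country under Swiss
```

The numbers track the study findings cited above: for the high-average country the Swiss cut (to 20 percent) is deeper than the Girard cut (to about 23 percent), while for the country that has already liberalized to a low average, the Girard formula cuts more deeply than the Swiss formula would.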
Nonetheless, echoing our analysis of the market access negotiations in agriculture, the actual degree of liberalization achieved through these or any other formulas will be strongly affected by the degree of ambition within the formula, as determined by the coefficients, and by the exceptions to standard tariff reduction schedules that will be offered through special and differential treatment. WTO members also remain divided over sectoral initiatives. The July framework states that sectoral agreements should supplement the tariff reduction formula with an aim to eliminate or harmonize tariffs in key sectors of interest to developing countries. The United States and other members have proposed that participation be based on a principle of “critical mass,” meaning that countries that account for the majority of trade in a sector should participate such that mutual gains are obtained without problems of free-ridership by nonparticipants. However, we were told that key developed and developing country members disagree strongly over whether sector agreements should be included in an industrial market access agreement. The United States has specific objectives for industrial market access as set out by its Trade Promotion Authority legislation: to focus on improving market access for U.S. exports and to increase global participation in sectoral agreements that reduce or eliminate barriers in key sectors, such as textiles and apparel and civil aircraft. Developed country members such as the United States, New Zealand, and Japan strongly support the inclusion of sector agreements because they can result in greater liberalization than even ambitious formula cuts. Specifically, they argue that only cuts that bring bound rates below currently applied rates would actually liberalize trade.
Such members have conducted education and outreach with developing countries regarding potential requirements and flexibilities for sectoral agreements, as well as the likely economic benefits they could receive from ambitious trade liberalization. Nonetheless, certain developing countries, such as Brazil, do not support this method of liberalization and remain concerned about potentially mandatory participation. They argue that sector agreements could create an overly ambitious pace of reform. Accounting for 78 percent of private sector GDP and 80 percent of private sector employment in the United States, services constitute a core priority for U.S. negotiators. Initially thought to be a linchpin of the Doha Round, services talks have taken a back seat to other issues. Nevertheless, the inclusion of services in the July 2004 text on an equal footing with the key market access pillars of agriculture and industrial market access resulted from efforts by both developed and developing country members and industry coalitions. Although some developing countries are reticent about services negotiations, generally perceiving them as a developed country interest, many developing countries have a particular interest in obtaining commitments under mode 4, which governs the temporary movement of service-delivery professionals. Notwithstanding these points of contention, since July, talks on the domestic regulation of services have shown signs of progress, as have technical negotiations on market access. However, these have yet to translate into improved offers. An opportunity for significant services liberalization could be foregone if negotiations do not intensify.
Services negotiations aim to reduce barriers to international trade by improving the General Agreement on Trade in Services (GATS), which (1) ensures the increased transparency and predictability of international trade rules and domestic regulations governing services industries (rulemaking); and (2) promotes progressive liberalization of services markets through bilateral negotiations (market access). The Doha Declaration states that members shall submit initial services offers by March 31, 2003, a deadline that many members missed. Following the Cancun ministerial, and in the run-up to the July framework, services negotiations made slow progress. Rule-making talks were stalled, and although the 2003 deadline had long passed, pending market access offers outnumbered those submitted. Observers said there was no “critical mass” of offers for market access talks to make substantial progress. Those offers that were tabled were characterized as being of poor quality. Movement had become contingent upon advances in other areas, particularly agriculture. Nevertheless, the final version of the July framework placed services on an equal footing with agriculture, industrial market access, and the other areas considered essential to a final Doha Round package. Initially, services were absent from the text. However, a specific section and annex on this sector were added after several developed and developing countries, as well as industry coalitions from the U.S., the EU, Australia, India, Hong Kong, China, Japan, Brazil, and Canada, argued for their inclusion. Specifically, the July framework reasserts the importance of achieving services liberalization and urges members to intensify their efforts to conclude the negotiations on rulemaking. With a view to providing market access to all members, the text calls upon members to submit high-quality offers, particularly in the sectors and modes of supply of export interest to developing countries.
It specifically names mode 4 among these, and sets May 2005 as the deadline for members to table new offers. After they agreed to the July framework, members held several multilateral and bilateral meetings and discussed rules, domestic regulation, and market access with renewed momentum. Technical talks were ongoing on all fronts. On the rule-making side, members initiated new discussions on emergency safeguards, subsidies, and government procurement, but none of these issues came close to resolution. Certain East Asian developing countries continued to advocate creation of an emergency safeguard mechanism for services, reflecting concerns over their experience with the 1997 financial crisis. However, many WTO members reportedly see an emergency safeguard for services as being technically unfeasible and/or, in the case of the United States and most other developed countries, undesirable. Discussions on domestic regulation were more promising. Several proposals triggered constructive debates on regulatory disciplines and transparency. On the market access side, talks were said to be progressing on a technical level. After July 2004, a few more developing countries tabled initial offers, bringing their number up to 52, and bilateral talks seemed to have regained momentum. One gauge of movement was the intensification of informal meetings held by so-called Friends groups, which assemble subsets of member countries around issue-specific concerns such as financial services, energy services, or mode 4. However, WTO officials said that sufficiently detailed negotiations on specific services sectors had yet to begin. Moreover, a general concern with the current offers is that they do not fully bind, let alone deepen, the level of liberalization that members have already, de facto, achieved outside of the WTO. Another potential problem is that these offers do not systematically schedule commitments in every service sector.
Some have signaled notable absences and weaknesses in financial, insurance, communication, audio-visual, and professional services—sectors of interest to the United States—but also in maritime services and others of interest to different members. In response, a number of countries are pushing for the universal adoption of minimum requirements, or “benchmarks,” in given industries such as financial services. Approximately 40 developing countries, not counting LDCs, have not submitted initial services offers at all. According to one WTO official, their failure to table services offers does not strictly reflect a lack of means, though in some cases it may. WTO officials felt that the outcome of intensified bilateral and informal talks would only become clear after May 2005, the deadline for tabling new and revised services offers. Forward movement in the months leading up to the Hong Kong ministerial and beyond will depend on members overcoming four challenges. First, several officials we spoke with stated that insufficient technical capacity could prevent a number of developing countries from tabling initial or revised services offers before the Hong Kong ministerial. Second, resolving the contentious, mainly North-South disagreement over the extent of liberalization under mode 4 may be crucial to achieving progress in market access. The temporary movement of service-delivering professionals is a politically sensitive issue for many developed countries, and their offers under mode 4 are generally unsatisfactory to most developing countries with ambitions in this area, such as India. Despite their demands, the U.S. government has clearly expressed its reluctance to grant other members more extensive market access under mode 4 than is reflected in its existing commitments. According to U.S.
negotiators, certain commitments under this mode could involve modifying domestic immigration law, and certain countries are simply not prepared to make this move within the WTO framework. Given the priority placed on obtaining mode 4 concessions, this discord may become increasingly problematic. A third factor that could affect progress in services negotiations is the question of balance. Malaysia, Thailand, and the Philippines allegedly want a concession on emergency safeguards before fully engaging in market access bargaining. Brazil has tied its willingness to press forward in services talks to obtaining satisfaction in agriculture. Continuing to tie progress in services talks to progress in other areas could be problematic, as the request-offer approach to negotiating services liberalization is inherently and comparatively slow. Moreover, the greater complexity of identifying and dismantling often opaque barriers to trade in services further slows the pace of services talks. The head of the WTO Secretariat’s services division believes that members will need 18 months to reach a meaningful agreement once they start negotiating at a more detailed level than they are currently. Finally, there is wide agreement that negotiators need to summon more political and technical resources from their capitals to conclude a meaningful services agreement. More than in other areas of trade, barriers to trade in services often occur behind borders, such that dismantling these measures requires involvement on the part of national ministries, subfederal-level regulators, and various authorities not normally involved in trade policy. This poses a problem for many small developing countries. The Doha trade negotiations aim to increase international trade in order to improve member countries’ economic growth and development.
Economists have used trade models to generate numerous studies that estimate potential economic gains from trade liberalization for developed and developing countries alike. These estimates vary significantly, depending upon the extent of trade liberalization assumed and other key characteristics of the models. Several studies find that estimated worldwide economic gains accrue to both developed and developing countries. However, the distribution of economic gains may vary within and between countries, creating perceived winners and losers. As such, the individual economic incentives that countries face may differ, thereby affecting each country’s negotiating goals within Doha. A primary rationale driving the Doha liberalization agenda is the belief that international trade can benefit a country’s overall growth and development. Potential benefits occur as international trade increases competition and specialization, provides greater access to technology, and expands export markets. Over time, a more liberal trading regime may reduce costs on both imported manufacturing inputs and exported final products, creating incentives for foreign producers to invest in new production – benefits typically referred to as dynamic gains from trade. While the role of international trade in fostering growth and development has become more widely accepted, economists have also argued that trade liberalization can involve significant adjustment costs. Adjustment costs may include unemployment in sectors that are not internationally competitive or the costs of fiscal reform as governments heavily dependent on trade taxes shift toward income or production taxes. Additionally, international trade may yield an uneven distribution of economic gains, creating temporary winners and losers between countries as well as within them.
As each country participates in the Doha negotiations, it is working to achieve a balanced package of commitments that will be politically acceptable to its various domestic constituencies. Nevertheless, without considering distributional issues, several studies predict that both developed and developing countries stand to benefit economically from multilateral liberalization. Developed countries are positioned to receive gains from trade liberalization since they are large traders and currently face relatively high tariffs for exports into developing countries, particularly for industrialized goods. Developing countries stand to receive gains from trade liberalization because developed countries often have pockets of high average tariffs on products that developing countries tend to export. High developed country tariffs tend to apply to agricultural and processed agricultural goods as well as to light manufactures such as textiles and clothing. When weighted by the amount of trade occurring under them, these tariffs translate into significant trade barriers for developing countries. Developing countries also stand to gain significantly from liberalization by other developing countries. The share of developing countries’ agricultural exports going to other developing countries rose from 28 percent in the 1980s to 37 percent in 2001. However, in many cases, barriers imposed by developing countries on goods from other developing countries are even higher than those they face from developed countries, impeding potential South-South trade. Economists often estimate the benefits and costs of easing trade restrictions by examining a recent period and estimating how trade and economic welfare would have been different under a scenario where certain trade restrictions were eased.
Concurrent with the WTO and other trade negotiations, numerous trade models have been used to simulate liberalization of trade policies and calculate the likely range of effects on variables such as exports and imports, tariff revenues, production, prices, and income. Many of these studies use a computable general equilibrium (CGE) model called the Global Trade Analysis Project (GTAP) model. GTAP is a global general equilibrium model that describes the relationship between all sectors within an economy and all economies worldwide. In its general form, GTAP is a static model, which means that it simulates the economy’s response to the trade policy change being examined at a single point in time, without capturing effects that accumulate as economies grow. Results generated from GTAP should be interpreted as order-of-magnitude results rather than single-point best estimates because the results are driven by assumptions regarding how responsive economic variables are to policy changes. Extensions of GTAP and other CGE models have been made to take into account how economies will grow over time. These dynamic versions of GTAP may include information on growth rates of capital, investment, and productivity. Additionally, while the general form of GTAP includes an assumption of perfect competition and constant returns to scale, extensions of GTAP have incorporated characteristics readily observed in manufacturing, such as imperfect competition and increasing returns to scale. In these cases, trade liberalization can lead to greater specialization and increased economic gains over time. However, information on how firms respond to market changes in the long run is inherently more difficult to measure with certainty and, as such, results yielded from these models should be viewed with this limitation in mind. Table 2 provides a listing of various estimates of the economic gains from trade liberalization under selected trade liberalization scenarios.
The table is not comprehensive but is intended to illustrate the wide range of results estimated through trade models – economic benefits ranging from $22 billion to $574 billion worldwide. Results vary depending upon the type of model (static vs. dynamic), key assumptions in the model (perfect competition or imperfect competition), and the ambition of the liberalization scenario. For example, the level of tariff cuts and sectors included for liberalization determine the ambition of the liberalization scenario and are one important factor accounting for variation in the results in table 2. Anderson et al. estimate gains of $254 billion with a full removal of tariffs on agricultural and industrial goods, while Cernat estimates gains of $40 billion with a 50 percent reduction. An OECD model on liberalization in agriculture, manufacturing, and services shows that as tariff reductions are increased from a 50 percent linear tariff cut to a more ambitious Swiss formula tariff cut, to a 100 percent tariff cut, economic gains rise from $117 billion to $159 billion to $174 billion, respectively. Several studies suggest that liberalization of agriculture will provide significant benefits to developing countries, despite the small size of agriculture in global output. Models by Anderson et al. and the World Bank estimate that roughly two-thirds of global economic gains from the liberalization of agricultural and industrial goods come from agricultural liberalization. The study by Brown et al., however, estimates that the largest economic benefits, $414 billion, come from liberalization of services and that there is an actual global net loss of income from agricultural liberalization of $3 billion. Several studies in table 2 also find that the distribution of economic benefits between developed and developing countries may be relatively even (ranging from 40 percent to 60 percent for each). 
Such benefits as a share of GDP, however, would be much larger for developing countries. For example, according to estimates by the World Bank, liberalization of both agricultural and industrial tariffs would provide $385 billion in economic benefits that would be equally divided between developed and developing countries. However, relative to their income levels, developing countries would gain 1.5 percent of GDP compared to 0.5 percent of GDP for developed countries. Several studies emphasize that the majority of gains for developed countries derive from lowered tariffs by other developed countries – a finding that is true for developing countries as well. In addition to caveats previously discussed, three limitations of trade models should be acknowledged: Difficulty in measuring current levels of protection. Many trade model estimates are based on analysis of current levels of trade protection that are difficult to measure due to the presence of nontariff barriers, non-ad valorem tariffs, and gaps between bound tariffs and applied tariffs. In some cases, data on tariffs may not be current enough to include information relating to preferential tariff rates or country accessions to the WTO. As a result, economic benefit estimates yielded from these data may be overstated because they account for tariff reductions that have already taken place. Economic benefit estimates may also be overstated if the analysis is focused on reductions in bound tariffs rather than reductions in applied tariffs – wrongly assuming that any reduction in the bound rate would translate into an equal reduction in the applied rate. Costs of adjustment. Many trade model estimates do not take into account adjustment costs to trade liberalization, such as a rise in unemployment or consumer prices during a transition period to the new trade policies. 
The more ambitious the liberalization scenario, the greater the long-term economic gains—as well as the short-term economic costs—are likely to be. Development institutions such as the World Bank, IMF, and United Nations have acknowledged these costs, though their extent is presently not well understood. Structural features of some economies. Many trade model estimates use general assumptions regarding industry characteristics, which may not account for positive effects due to industrial policies. Some economists have noted that under certain conditions there are potential benefits in using tariffs to support growth in new industries. While many studies estimate that trade liberalization is likely to result in economic benefits worldwide, there is likely to be differentiation in economic gains between and within individual countries. In the short run, when adjustment costs are present, liberalization is likely to create winners and losers. For example: Net food exporters vs. net food importers. Regions that are significant agricultural exporters are expected to gain significantly from the agricultural liberalization measures being negotiated in Doha. However, the estimated gains are smaller and sometimes negative in regions that are large net importers of food because the potential removal of developed country subsidies may increase world food prices. The IMF estimates that major net exporters of food in Latin America and sub-Saharan Africa could gain between 0.3 percent and 0.6 percent of GDP from agricultural liberalization, while major net food importers in North Africa and the Middle East could lose 0.3 percent of GDP. Other large net food importing countries include South Korea, Russia, and Venezuela. Countries that do not receive trade preferences vs. those that do. Certain developing countries are offered nonreciprocal trade preferences into developed country markets.
Under multilateral trade liberalization, those preferences may be eroded as overall tariff rates are reduced. As such, countries that do not receive trade preferences may gain a competitive advantage over developing countries that currently participate in preference programs. Potential economic costs associated with erosion of preferential access are difficult to determine, however, given the mixed empirical evidence on program benefits. The IMF notes that erosion of sugar and banana preferences could be a concern. Mauritius, for example, benefits substantially from preferential access for its sugar exports, and Caribbean nations benefit from preferential access for banana exports. Traders vs. non-traders in tariff revenue dependent countries. For countries that are dependent on tariff revenues to finance government operations, the tax burden on importers who pay those tariffs may be relatively high compared with the burden on consumers or domestic industries that pay consumption or production taxes. As tariffs are reduced through trade liberalization, tariff revenues may also be reduced if there is not a sufficient increase in the quantity of imports in response to lower tariff rates. In such cases, the burden of financing government operations may shift away from traders and toward non-traders within an economy. For African least-developed countries that, on average, rely on tariffs for 34 percent of government revenue, the potential distributional consequences of lower trade taxes are likely to be an important adjustment cost of trade liberalization. [Table: Exports by Sector (%), top 50 merchandise exporters in 2003, in rank order of dollars exported. Non-WTO members are shaded. X* indicates that the country participated in the group or the action as a member of the European Union.]
Other members of the Cairns Group are Bolivia, Colombia, Costa Rica, Guatemala, New Zealand, Paraguay, and Uruguay. Other members of the G-20 are Bolivia, Cuba, Egypt, Pakistan, Paraguay, Tanzania, and Zimbabwe. Other members of the G-10 are Bulgaria, Iceland, Liechtenstein, and Mauritius. Other members of the G-33 are Antigua and Barbuda, Barbados, Belize, Benin, Botswana, Congo, Cote D'Ivoire, Cuba, Dominican Republic, Grenada, Guyana, Haiti, Honduras, Jamaica, Kenya, Mauritius, Madagascar, Mongolia, Mozambique, Nicaragua, Pakistan, Panama, Peru, Senegal, St. Kitts and Nevis, St. Lucia, St. Vincent and the Grenadines, Sri Lanka, Suriname, Tanzania, Trinidad and Tobago, Uganda, Zambia, and Zimbabwe. C indicates that Turkey is a candidate for membership in the European Union. In addition to the individuals named above, Michelle Munn, Kendall Schaefer, Emilie Cassou, Ann Baker, Mark Keenan, Jose Martinez-Fabre, Jonathan Rose, Jamie McDonald, and Ernie Jackson made key contributions to this report.
The outcome of ongoing World Trade Organization (WTO) negotiations is vital to the U.S. economy, because trade with WTO members accounts for about one-fifth of the U.S. gross domestic product. The current round of trade negotiations--called the Doha Round--was supposed to end by January 2005 with agreement on the key issues of agriculture, industrial market access, and services, and on strengthening the trading system's contribution to economic development. Failure to reach any agreement at the last WTO ministerial meeting in Cancun, Mexico, in September 2003, put the talks behind schedule and threatened the outcome; however, talks resumed in 2004, and a new ministerial conference will convene in Hong Kong in December 2005. In light of these events, and with the impending renewal decision on U.S. Trade Promotion Authority, which streamlines the process by which Congress approves trade agreements, GAO was asked to assess (1) the overall status of the Doha Round negotiations, (2) progress on key negotiating issues, and (3) factors affecting progress toward concluding the negotiations. During 2004, Doha Round negotiations got back on track as trade ministers signed a framework agreement known as the "July package." Its main achievement, a commitment to eliminate agricultural export subsidies, recognized the importance of agriculture in the round and thus reopened talks on other issues. Since this breakthrough, negotiations have been picking up momentum, as WTO members work toward deadlines for more detailed agreements at the December 2005 Hong Kong ministerial conference. Yet despite the improved negotiating atmosphere, the talks are behind schedule, and considerable work remains on the numerous issues that must constitute a final agreement.
Progress has been uneven on the six negotiating issues identified as central to the Hong Kong meeting--agriculture, trade facilitation (customs reforms), industrial market access, services, WTO rules, and development issues. The United States has particular reform interests in the first four of these issues. Progress has occurred on two of them: in agriculture, based on agreements in the July framework, and trade facilitation, for which talks have finally been started. However, little progress has been made on industrial market access and services, two other issues of interest to the United States. Several factors could affect progress in the critical period leading up to the December 2005 Hong Kong ministerial. Achieving consensus among the WTO's 148 members is a challenging task, and diverse economic incentives and competing visions add complexity to the negotiations. Cooperation by the United States, the European Union, and some of the developing countries is also seen as key to a successful conclusion before U.S. Trade Promotion Authority expires in mid-2007, an implicit deadline for the talks.
IRS defines the tax gap as the amount of tax that taxpayers owe but have not paid. IRS estimates the individual income tax gap to be $95.3 billion for 1992. Unreported income accounts for a major portion of this tax gap—$58.6 billion, or over 60 percent. In the early 1990s, IRS became concerned that its auditors were not fully probing for income that should have been, but was not, reported on tax returns. This concern, as well as others, led IRS to reemphasize the need for its auditors to consider a taxpayer’s financial status and to probe for unreported income. This reemphasis came to be known as the financial status audit program. IRS initiated the financial status audit program in late 1994 with a training course for auditors. In the training course, IRS stressed the importance of identifying unreported income by determining whether the taxpayer’s reported income roughly conforms to his or her spending. Such an evaluation requires consideration of the taxpayer’s spending patterns in addition to verification of items reported on tax returns. If reported income and spending patterns differ, the auditor is supposed to decide whether the difference is significant enough to warrant asking the taxpayer for an explanation. The training course stressed the importance of meeting with taxpayers, checking nontraditional data sources (such as state and local governments), and using four indirect audit techniques.
These four techniques, the cornerstones of financial status audits, are as follows:

- Bank deposit analysis, in which the auditor uses the taxpayer’s bank statements to ensure that total deposits are accounted for on the tax return or as nontaxable receipts.
- Net worth method, in which the auditor analyzes changes in the taxpayer’s assets to determine any potential for unreported income.
- Normal markup/unit of sales method, in which the auditor uses the taxpayer’s cost of goods sold and average markups within the industry to estimate business gross receipts.
- Cash transaction (Cash-T) method, in which the auditor compares the taxpayer’s expenditures to income sources. Under this method, if a taxpayer’s expenditures exceed reported income and the source for such expenditures cannot be explained, the excess represents potential unreported income.

The Cash-T method also includes a preliminary Cash-T in which the auditors use only the information available on the tax return to determine whether the expenditures exceeded reported income. The preliminary Cash-T can be completed without contacting the taxpayer for information. The consideration of a taxpayer’s financial status and the use of these techniques to probe for unreported income are not new concepts. Historically, the techniques have been used in fraud and criminal investigation cases, but they have also been available for use by other IRS auditors. IRS officials noted that the use of financial status techniques has been mentioned in the Internal Revenue Manual at least as far back as 1961. According to IRS officials, the 1994 financial status initiative was intended primarily to reemphasize instructions that auditors receive in other IRS training courses. By early 1995, IRS was receiving considerable criticism about audits using these financial status techniques.
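The preliminary Cash-T comparison described above is, at bottom, simple arithmetic on amounts taken from the return. A minimal sketch follows; the line items and dollar figures are hypothetical illustrations, not drawn from any actual IRS form or case.

```python
def preliminary_cash_t(reported_income, nontaxable_receipts, expenditures):
    """Compare known funds to known spending, using only return data.

    Returns the excess of expenditures over identified sources, i.e.,
    the potential unreported income that would warrant asking the
    taxpayer for an explanation.
    """
    available = reported_income + nontaxable_receipts
    total_spent = sum(expenditures.values())
    return max(0.0, total_spent - available)

# Hypothetical figures for illustration only.
gap = preliminary_cash_t(
    reported_income=42_000,
    nontaxable_receipts=3_000,          # e.g., a gift or loan proceeds
    expenditures={"mortgage": 18_000,
                  "schedule C costs": 22_000,
                  "estimated living": 15_000},
)
print(f"Potential unreported income: ${gap:,.0f}")  # $10,000 in this example
```

A zero or negative gap gives the auditor no arithmetic indication of unreported income from the return alone; a positive gap is only a signal to probe further, not proof of underreporting.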
The American Institute of Certified Public Accountants (AICPA), Members of Congress, and various taxpayer groups were concerned that these audits were more time-consuming and intrusive than audits using other techniques. AICPA officials had several concerns about the taxpayer burden and intrusiveness that they associated with IRS’ use of financial status techniques. Specifically, they were concerned about IRS’ practice of asking financial status questions at the initial interview before having any evidence of underreported income. Similarly, AICPA officials were concerned about IRS sending a request for personal living expense (PLE) information with the letter notifying the taxpayer of the audit, before finding any evidence of unreported income. In response to these criticisms, IRS provided additional instructions to its auditors to clarify the intent of financial status audits. Between August 1995 and March 1996, three memoranda were issued by the Office of the Assistant Commissioner (Examination) to Regional Chief Compliance Officers to provide the clarifications. The August memorandum supported the use of financial status techniques but urged auditors to use sound judgment in asking financial status questions at the initial interview, particularly when no indication of underreported income existed. The December 1995 and March 1996 memoranda provided similar instructions, including guidance indicating that PLE forms should not automatically accompany notification letters. AICPA officials acknowledged to us that these instructions helped to reduce some of their concerns, but they said they were still concerned about the added time and intrusiveness associated with IRS’ use of financial status audit techniques. To determine the extent to which IRS’ use of financial status techniques has changed, we selected random samples of audits of individual returns completed before and after IRS began reemphasizing the techniques in 1994.
We selected these samples from IRS’ Audit Information Management System (AIMS) database. For the “before” sample, we selected audits that were opened on individuals from October 1991 through October 1992 and closed during fiscal years 1992 and 1993. For the “after” sample, we selected audits that were opened from October 1994 through October 1995 and closed during fiscal years 1995 and 1996. Each sample audit included one or more individual income tax returns. Our sample contained 838 valid audits selected from an estimated population of 977,000 audits. All the numbers used in this report are estimates developed on the basis of weights assigned to the sampled audits so that they represent the population from which we sampled. See appendix I for a more detailed description of our sampling methodology and the procedures used to develop our estimates. We used the IRS workpapers associated with each audit to determine whether and how auditors used the financial status techniques and which type of techniques were used. For each sample audit, using a data collection instrument that we developed, we gathered specific information from the case files about the types of techniques used, amounts of any adjustments to taxable income and tax liability, types of questions asked the taxpayers, and information about both the auditor and taxpayer. We also met with National Office officials responsible for implementing the financial status program to discuss our sampling methodology and results. We did not determine whether IRS’ auditors made appropriate choices in deciding when to use financial status techniques and what techniques to use because IRS had no specific criteria against which to make this judgment. To obtain information on how the use of financial status techniques increased the need for taxpayer contact and might have affected the taxpayer, we again used data from the audit workpapers. 
We collected information from the case files on the types of techniques being used, whether Cash-Ts were preliminary or comprehensive, the nature of the taxpayer contacts, the types of questions asked at initial interviews, and whether IRS requested PLE information when first notifying the taxpayer of the audit. Additionally, we met with IRS’ National and Field office officials to learn how each technique was used. As part of our work on this issue, we discussed the financial status program with officials at AICPA. These officials raised several concerns about IRS’ use of financial status techniques and the whole approach to audits resulting from the emphasis on the techniques. To the extent possible, we used our sample data to evaluate these concerns. To determine the results of audits using financial status techniques, we used the samples and workpapers previously discussed. For each audit, we recorded the adjustments to income and additional taxes found on all returns. We also recorded the amount of the changes to income attributable to the use of one or more of the financial status techniques. To determine how IRS applied its audit standards, quality controls, and quality measurement to the use of financial status techniques, we met with officials in the Examination Division, including the Quality Measurement staff, at the National Office and four district offices. We also discussed quality review procedures with group managers at the district offices. We obtained copies of the audit standards and reviewed their applicability to the financial status program. Otherwise, we did not evaluate the adequacy of the standards. We reviewed IRS’ Examination Quality Measurement System (EQMS) to determine how IRS measures audit quality and what the measures show. At three of the four district offices, we examined several EQMS cases, selected by IRS personnel, to see how EQMS reviews were done.
We did not examine any cases at the Philadelphia district office because all EQMS reviews in that region are done at another district office. We did not try to assess the accuracy of EQMS reviews. (See appendix II for a summary of IRS’ audit standards.) We requested comments on a draft of this report from the Commissioner of Internal Revenue. On November 20, 1997, we received written comments from IRS, which are summarized at the end of this letter and are reproduced in appendix IV. These comments have been incorporated into the report where appropriate. We performed our audit at IRS headquarters offices in Washington, D.C., and at district offices and service centers in Fresno and Oakland, CA; Baltimore, MD; Philadelphia, PA; and Richmond, VA. Our work was done between October 1996 and August 1997 in accordance with generally accepted government auditing standards. IRS’ renewed emphasis on financial status audit techniques produced little, if any, change in how often these techniques were used. Comparing audits done before and after IRS’ emphasis on financial status, we estimated that the use of one or more of the financial status techniques was 24 percent for the 1992 and 1993 period and 22 percent for the 1995 and 1996 period. The difference in these percentages is not statistically significant. During both periods, financial status techniques were used predominantly on returns involving business or farm income. IRS research has found that taxpayers with these types of income are more likely to underreport income than taxpayers whose income is reported by third parties on information returns. Table 1 compares the two periods we reviewed. IRS managers were concerned that auditors were not making use of techniques to identify unreported income. The financial status program and the associated training were designed to correct this problem. IRS officials could not tell us why the percentage of financial status audits had not changed after the reemphasis and training.
However, they noted that one reason may have been the limited amount of follow-up training provided by the districts and the limited amount of National Office oversight due to IRS’ reorganization activities after the initial training. In commenting on our draft report, IRS officials indicated that the financial status training focused less on increasing the use of a specific technique and more on improving the auditors’ ability to identify unreported income. We also analyzed whether IRS changed the types of techniques being used. We found no significant change in usage by type of financial status technique since the reemphasis. Generally, only two techniques were used, often in combination, during our two sample periods. Table 2 describes the results of this analysis. To put the data presented in tables 1 and 2 in perspective, in 1995, about 116 million taxpayers filed their 1994 individual income tax returns. On the basis of historical data and information from our sample, we estimate that between 126,000 and 183,000 of these taxpayers will receive an audit that uses at least one of the four financial status techniques during the 3 years before the statute of limitations expires. Financial status audit techniques vary in the extent of additional taxpayer contact needed and the amount of information being sought from taxpayers. IRS has no data showing how much additional taxpayer contact is associated with each technique or how intrusive the additional information needed might be. However, we were able to make some general observations based on our review of the workpapers. For example, the Cash-T method can be separated into two types—preliminary and comprehensive. In the preliminary Cash-T, the auditor uses only information available on the tax return to identify any indications of unreported income. This technique, therefore, requires no additional contact with the taxpayer.
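The preliminary Cash-T described above amounts to balancing funds available against funds applied, using only amounts visible on the filed return. A minimal sketch follows; the line items and dollar figures are hypothetical illustrations, and an actual Cash-T draws on many more return entries.

```python
# Minimal sketch of a preliminary Cash-T, using only amounts visible
# on the filed return (so no taxpayer contact is needed). All line
# items and figures here are hypothetical illustrations.

def preliminary_cash_t(return_items):
    # Sources of funds reported on the return.
    sources = (return_items["wages"]
               + return_items["interest"]
               + return_items["schedule_c_net"])
    # Applications of funds visible on the return (e.g., itemized
    # deductions that imply cash outlays).
    applications = (return_items["mortgage_interest_paid"]
                    + return_items["taxes_paid"]
                    + return_items["charitable_contributions"])
    # A negative balance (outlays exceed reported sources) is an
    # indication of possibly unreported income.
    return sources - applications

balance = preliminary_cash_t({
    "wages": 18_000, "interest": 400, "schedule_c_net": 6_500,
    "mortgage_interest_paid": 9_800, "taxes_paid": 3_200,
    "charitable_contributions": 1_000,
})
print(balance)  # 10900: reported sources cover the return-visible outlays
```

Only when this return-only balance suggests that outlays exceed reported income would the more contact-intensive techniques come into play.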
Of the estimated 126,000 to 183,000 audits of tax year 1994 individual returns in which IRS used a financial status technique, we estimated that between 29,000 and 42,000 of these audits (23 percent) used only a preliminary Cash-T, requiring no response from the taxpayer. The comprehensive Cash-T and each of the other techniques require some additional taxpayer contact. The amount of contact required and information sought can vary with each taxpayer and the type of financial status technique used. In a comprehensive Cash-T, the auditor needs information from the taxpayer on nonreturn items such as cash on hand, savings, and PLE. For a bank deposit analysis, the auditor requires access to the taxpayer’s bank account records and may require considerable taxpayer contact to ask the taxpayer to explain significant discrepancies between total deposits and the income shown on the tax return. The net worth and normal markup methods require taxpayer contact primarily to explain any identified discrepancies. AICPA has been among the critics of IRS’ reemphasis on financial status audits since the program began in late 1994, claiming that IRS auditors use the techniques without having any evidence that taxpayers have underreported income. Such intrusions into taxpayers’ spending patterns could occur at two points—(1) before the initial interview and (2) during the initial interview. Critics suggested that such intrusions increased after the 1994 initiative. Using the data gathered from our reviews of IRS’ audit workpapers, we looked at how frequently each of these two concerns occurred. We gathered information on how often IRS used the initial notification letter to request that the taxpayer provide PLE information. We found no significant difference between the 1992 and 1993 period (before the reemphasis on financial status) and the 1995 and 1996 period (after the reemphasis).
During both periods, less than 5 percent of the initial notification letters to the taxpayers also requested that they provide information on their PLE. Recognizing the potential for intrusiveness, the Acting Assistant Commissioner (Examination), in a March 1996 memo, clarified the PLE instructions. The memo indicated that while auditors had the responsibility to secure an overall financial picture of the taxpayer, they were not expected to automatically request PLE information with the notification letter. According to AICPA officials, the sending of PLE forms with the notification letters has decreased since the distribution of this memo. We also gathered information on the types of questions IRS auditors asked taxpayers at opening interviews. Financial status critics believe that questions designed to determine the taxpayer’s financial status were inappropriate unless IRS had evidence that the taxpayer had underreported income. AICPA officials provided a list of the questions, which focused on personal spending habits such as how often a taxpayer eats at restaurants and where a taxpayer vacations. Based on our analysis of the documents in the case files, most of these interview questions were asked in fewer than 5 percent of the audits. For the 1995 and 1996 sample period, only four of the questions were asked during the initial interview in over 10 percent of the audits. In addition, the frequency with which the questions were asked was about the same in our samples of audits for 1992 and 1993 and for 1995 and 1996. Appendix III provides information about the specific questions and how often they were asked. The results of using financial status techniques have been mixed. The use of the techniques resulted in IRS auditors identifying large amounts of unreported income in some cases. At the same time, a high percentage of audits resulted in no adjustments to reported income attributable to the use of financial status techniques. Table 3 summarizes these results.
IRS reemphasized the use of financial status techniques to address its concerns with finding unreported income. In the audits we reviewed in our 1995 and 1996 sample, we estimated that auditors used financial status techniques to identify unreported income totaling over $300 million. Our review of the IRS workpapers indicated that the auditors were unlikely to have identified unreported income without using the techniques. The workpapers did not show that this income was reported on an information return or identified by the taxpayer, the other two primary techniques used to verify the accuracy of reported income. However, table 3 shows that the use of financial status techniques has resulted in no adjustments to income in a significant number of cases. For example, in our 1992 and 1993 sample, 81 percent of the audits using financial status techniques resulted in no adjustments to reported income attributable specifically to the techniques. Similarly, for the 1995 and 1996 sample, 83 percent resulted in no adjustment to reported income attributable to the use of the techniques. Audits having no change attributable to the use of financial status audit techniques may have had changes attributable to other audit techniques. These no-change audits were closed with either (1) no changes to any tax issue or (2) changes such as reducing claims for a tax deduction, exemption, or credit after the auditor reviewed the taxpayer’s documentation. For the 1992 and 1993 audits having an 81 percent no-change rate, 23 percent had no change for any reason and 58 percent had changes to taxable income that were not attributable to the use of financial status techniques. For 1995 and 1996, the 83 percent no-change rate breaks out as 28 percent with no change for any reason and 55 percent with changes to taxable income that were not attributable to the use of financial status techniques. 
This high percentage of no change attributable to the use of financial status techniques raises issues about whether IRS can further help auditors in judging when and how to use these techniques. Given the complexity of the tax code and the fact that tax return forms provide for limited, if any, explanation of the numbers entered by the taxpayer, it is not reasonable to expect an adjustment every time a financial status technique is used, nor is it desirable that all auditor judgment be removed from the decision about when to use the techniques. It is important, however, that IRS make the most effective and efficient use of its limited resources while striking an appropriate balance between collecting information and evidence to assist the auditor in identifying the correct tax and avoiding unnecessary burden and intrusiveness for the taxpayers. Thus, the best interest of both IRS and the taxpayers is achieved when the no-change rate is at some acceptably low point. To this end, we believe that more specific criteria on when to use financial status techniques would provide auditors with additional context within which to exercise their professional judgment on a case-by-case basis and would likely result in a reduced no-change rate. IRS has three primary tools to oversee use of financial status audit techniques: (1) audit standards to guide auditors, (2) supervisory review of auditors’ adherence to the standards, and (3) a system to measure adherence to the standards. Our analyses focused on how IRS applied these tools to the use of financial status audit techniques. While these tools offered important controls over the use of the financial status techniques, they each have limitations. For example, the audit standards do not guide auditors on when and when not to use financial status techniques. IRS’ managers at the group level review only a small portion of the audits because of a lack of time caused by other duties.
IRS’ measurement system, like the standards, focused on whether financial status techniques, when used, were used correctly from a technical perspective, not on when to use the techniques and to what degree. IRS uses its audit standards, which have evolved since the 1960s, to define audit quality. However, the standards do not offer specific criteria to guide auditors on when and when not to use financial status techniques and to what degree. Instead, the standards focus on whether actions were taken and, if so, whether they were taken correctly from a technical perspective. IRS uses nine audit standards to address the scope, audit techniques, technical conclusions, workpaper preparation, reports, and time management of an audit. Each standard is composed of key elements that operationally define a quality examination. IRS guidance stipulates that for a standard to be rated as being “met,” each of the key elements must be rated as “met” or “not applicable.” The standards and the associated key elements are summarized in appendix II. Of the nine audit standards, Standard 2, Probes for Unreported Income, has four key elements that address whether the auditor (1) considered the adequacy of internal controls, (2) considered the types of books and records maintained, (3) considered the taxpayer’s financial status, and (4) appropriately used indirect audit techniques to probe for unreported income. These last two elements directly address financial status analyses and audit techniques. Under Standard 2, auditors are instructed to consider financial status in all audits and only use a financial status audit technique when they suspect unreported income. However, IRS did not provide specific criteria in the standards to help auditors decide when unreported income is likely. The key element for evaluating appropriate use of these techniques addressed whether the auditor considered using a technique, selected the appropriate technique, and applied it correctly. 
Nothing in the standard provides the auditor with specific criteria to determine when to use or not use a given technique or to what degree to use it. For example, IRS has not instructed auditors on how extensively to consider a taxpayer’s financial status and when that consideration should prompt the use of a technique to probe for unreported income. Nor has IRS instructed auditors on how large a discrepancy between reported income and expenses should be to justify more in-depth probing. On the basis of our review of the audit workpapers, we believe that this lack of criteria has probably contributed to the large percentage of audits in which the use of financial status techniques resulted in no adjustments to income. During the course of our work, IRS agreed with us that it needs specific criteria to better guide its auditors on using the financial status techniques. According to an IRS official, sections of the Internal Revenue Manual are being revised to better instruct tax auditors and revenue agents about when and when not to use financial status techniques and to what degree to use them in probing for unreported income. In September 1997, we received a draft of the revised manual sections. Our initial review of these revised instructions indicated that they offered some guidance on when to use financial status techniques but did not provide specific criteria. For example, the revisions indicate that if a preliminary analysis yields a Cash-T that is materially out of balance, the auditor should use subsequent interviews and information gathering to resolve the imbalance. The instructions define “material imbalance” as the significance of an item in determining the correct tax liability. The instructions require auditors to use their judgment on the return as a whole and the items that comprise that return. 
In using their judgment on whether the imbalance is material, the auditors must consider such factors as the comparative and absolute size of the imbalance as well as the relationship between the size of the imbalance and the tax liability. However, IRS has not provided instructions to guide the auditor when analyzing the comparative or absolute size of the imbalance or when comparing the relationship of the imbalance to the tax liability. In commenting on a draft of this report, IRS officials said it would be impractical to develop specific quantitative criteria to define materiality. We acknowledge that developing quantitative criteria to cover every situation is difficult and that auditors’ judgment is still an important element of any audit. However, we believe that the concept of “material imbalance” could be made more specific by developing some quantitative criteria that would use the preliminary Cash-T and establish thresholds for the factors associated with an imbalance between reported income and estimates of PLE, such as the comparative size of any imbalance. If the preliminary Cash-T indicated that the income reported on the tax return that was available for PLE was below the threshold—that is, apparently not sufficient to support the living expenses indicated—the auditor would be expected to conduct a more detailed probe for unreported income, potentially using one or more of the other financial status techniques. If the preliminary Cash-T showed the taxpayer’s reported income to be above the threshold—that is, apparently sufficient to support the estimated PLE—using the other financial status techniques would not be expected. In either case, the auditor could decide to go against the criteria but would be expected to explain the reasons in the workpapers. Developing such criteria would be an ongoing task, as changes would likely occur as IRS gained experience with how well the criteria were working.
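One way the kind of quantitative threshold criteria discussed above could operate is sketched below. The dollar and percentage cutoffs are invented for illustration and are not IRS criteria; under the approach described, an auditor could still override the rule but would document the reasons in the workpapers.

```python
# Illustrative decision rule for a "material imbalance" between
# estimated personal living expenses (PLE) and the income reported as
# available to pay them. Both thresholds are hypothetical, not IRS
# criteria.

ABSOLUTE_THRESHOLD = 10_000  # dollars of imbalance
RELATIVE_THRESHOLD = 0.25    # imbalance as a share of reported income

def warrants_deeper_probe(reported_income, estimated_ple):
    imbalance = estimated_ple - reported_income
    if imbalance <= 0:
        # Reported income appears sufficient to support the PLE, so
        # the other financial status techniques would not be expected.
        return False
    if reported_income <= 0:
        return True  # any positive PLE is entirely unexplained
    return (imbalance >= ABSOLUTE_THRESHOLD
            or imbalance / reported_income >= RELATIVE_THRESHOLD)

print(warrants_deeper_probe(30_000, 28_000))  # False: income covers PLE
print(warrants_deeper_probe(30_000, 42_000))  # True: 40-percent imbalance
```

Refining such cutoffs as experience accumulated is the "ongoing task" the report contemplates.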
The primary tool used by IRS to control quality is the review of audit files by managers of audit groups. The Internal Revenue Manual requires supervisory review of cases but is vague on exactly when review is necessary and how it should be documented. According to IRS Examination officials, IRS managers cannot review all audits. Rather, the managers must rely on the experience and judgment of the auditors because the managers’ audit workload and other duties limit the time available for review. Further, these officials said budget constraints will likely cause the managers’ span of control to increase rather than decrease in the future, resulting in more audits to oversee. The analysis of our sample supports IRS’ assertions that not all audits are reviewed by managers. We found evidence of supervisory review in about 9 percent of the audits for 1992 and 1993 and about 6 percent of the audits for 1995 and 1996. In the districts we visited, the managers acknowledged that they can review only a small portion of all ongoing and closed audits for each auditor annually because of the reasons cited. Managers told us they try to spend more time reviewing the work of the least experienced auditors. At a minimum, they said, they try to maintain an ongoing discussion with all auditors about their audit inventories. IRS conducts post-audit quality measurement through EQMS reviews. EQMS is IRS’ mechanism for collecting information about the audit process, changes to that process, the level of audit quality, and the success of any efforts to improve the process and quality. The Office of Compliance Specialization, within IRS’ Examination Division, has responsibility for this program. This office compiles and maintains a national database of the quality reviews done at the district level. This database can be used to identify trends by district and nationally.
Of the 800,000 face-to-face audits done by IRS in fiscal year 1996, EQMS staff reviewed a sample of 12,170 audits to measure quality against the nine audit standards. According to IRS officials, this sample provided a statistically valid basis for measuring audit quality. EQMS staff reviewed the 12,170 audits to determine whether the auditors met the criteria for each of the auditing standards. For fiscal year 1996, the percentages of audits that were rated as having met the standards ranged from 38 percent for Standard 9, Time Span/Time Charged, to 95 percent for Standard 5, Findings Supported by Law. (App. II summarizes EQMS results since fiscal year 1992.) Before fiscal year 1997, IRS did not collect data on the reasons key elements were not met. Starting in fiscal year 1997, however, IRS began collecting these data. For the first 2 quarters of fiscal year 1997, reviewers looked at 2,904 office audits and 2,859 field audits. Of these audits, IRS rated 84 percent and 78 percent of the office and field audits, respectively, as having met (i.e., passed) the key element under Standard 2 that involves the consideration of financial status. Further, 74 percent and 82 percent of these office and field audits, respectively, were rated as having met the key element under Standard 2 that involves the appropriate use of financial status audit techniques. Table 4 summarizes the EQMS-determined reasons auditors did not meet these key elements of Standard 2. For example, the most frequent reasons cited were that auditors did not (1) provide evidence that they had evaluated financial status, (2) recognize the need to use one of the financial status techniques, and (3) correctly compute the financial status technique. Knowing the reasons for not meeting the key element or the standard can provide insights on when the use of the financial status techniques would and would not be necessary to identify unreported income. 
However, the reasons identified by IRS, like the criteria in the audit standard on probing for unreported income, have not addressed the issue of when and when not to use financial status techniques and to what degree they should be used. Without this information, IRS cannot fully measure the quality of audits involving financial status techniques. IRS auditors have used financial status audit techniques for years to help identify unreported income. IRS’ renewed emphasis on the use of these techniques appears to have had little impact on how frequently auditors used them. Also, neither the type of technique nor the type of return on which the techniques are used has changed to any statistically significant degree. IRS has not measured how the use of financial status techniques may add to the burden and intrusiveness of audits. Use of the preliminary Cash-T technique added no burden because this technique does not require additional taxpayer contact. Use of the other financial status techniques requires some degree of taxpayer contact. The amount of contact and the amount of additional information sought from the taxpayer, however, can vary with each situation. The results of using financial status techniques were mixed. In a large majority of such audits, no adjustments to income could be attributed specifically to the techniques. While it is not reasonable to expect unreported income to be found every time these techniques are used, the current rate of no adjustments seems high. However, in the remaining audits, the use of the techniques helped auditors to find unreported income that probably would not otherwise have been detected. This detection capability and the high frequency of no adjustments to reported income raise the issue of how to decide when and when not to use financial status techniques and to what degree they should be used.
Currently, auditors’ judgment primarily dictates these decisions because IRS does not provide the auditors with specific guidance for determining whether to use financial status audit techniques. While an auditor’s judgment is likely to continue to constitute a significant portion of the decisionmaking process, guidance, in the form of specific criteria, might help reduce the frequency with which these techniques are used without resulting in adjustments to income. Similarly, supervisory review of audits to guide the auditors’ performance, a key piece of IRS’ quality control system, was limited by workload constraints and, when done, seldom addressed the use of financial status techniques. Finally, IRS staff reviewed some closed audits for quality through EQMS, but like the audit standards, these reviews did not focus on when and when not to use financial status techniques and to what degree to use them. Without establishing specific criteria to guide the usage of financial status audit techniques, IRS does not have a good basis for evaluating the auditors’ judgment in choosing to use or not use the techniques. We believe that such criteria would help IRS auditors make their decisions. Given that our tax system is based on voluntary compliance, an appropriate balance must be maintained between collecting information to assist the auditor in identifying the correct tax and avoiding unnecessary burden and intrusiveness for the large majority of taxpayers. More specific criteria to use in making case-by-case decisions about when and to what extent to use financial status audit techniques would be helpful to auditors in achieving that balance. Developing such criteria, however, would have to be considered a work in progress, with changes and updates occurring as needed as auditors and managers become more experienced with their use.
During the course of our work, IRS agreed that it needs more specific criteria to guide its auditors in exercising their judgment to use the financial status techniques and began developing instructions that include such criteria to be included in the Internal Revenue Manual. To provide better assurance that financial status techniques are not overly burdensome and intrusive to taxpayers and that the most productive use is made of limited audit resources, we recommend that the Commissioner of IRS further pursue efforts to develop more specific criteria on when and to what extent to use financial status techniques. To help develop and refine these criteria, we recommend that the IRS Commissioner

- ensure that these specific criteria on using the techniques are reflected in the instructions for interpreting the audit standards and in the evaluations, through EQMS and its reason codes, of how well audits meet these standards;
- monitor the use of financial status techniques under the new criteria to identify factors associated with successful and unsuccessful usage in terms of when and to what extent to use the techniques as well as whether the usage identified unreported income and, if so, in what amounts; and
- use these monitoring results to evaluate whether to make further revisions to the criteria on using the techniques or in the system by which IRS monitors their use.

We obtained comments on a draft of this report at a meeting on November 12, 1997, with officials who represented IRS. These officials included the Chief Compliance Officer; the Assistant Commissioner for Examination and members of his staff; the National Director of Compliance Specialization and members of his staff; and a representative from IRS’ Office of Legislative Affairs. The Deputy Commissioner also documented these comments in a letter dated November 20, 1997 (see app. IV). In general, IRS agreed with the substance of our report.
It provided technical comments to clarify specific sections of the report. These comments dealt with issues such as the status and nature of the instructions being developed on using financial status techniques and IRS’ position on intrusiveness of the techniques and on training. We have incorporated these comments into the report where appropriate. Concerning the recommendations in our report, IRS agreed with our overall recommendation on developing more specific criteria to guide auditors in using financial status techniques and generally agreed with the three recommendations we made to help with this development. IRS officials fully agreed to implement all of our recommendations by October 1998, as reflected in IRS’ letter of November 20, 1997. We are sending copies of this report to the Committee’s Ranking Minority Member, the Chairman and Ranking Minority Member of the Senate Committee on Finance, various other congressional committees, the Director of the Office of Management and Budget, the Secretary of the Treasury, and other interested parties. We will also make copies available to others upon request. Major contributors to this report are listed in appendix V. If you have any questions concerning this report, please contact me at (202) 512-9110. This appendix describes the methodology we used to sample Internal Revenue Service (IRS) audits from 1992 and 1993 and from 1995 and 1996. We used these samples to quantify the differences in audit practices before and after IRS began its reemphasis on using the financial status techniques and to estimate the results of these audits. IRS reemphasized its financial status program late in fiscal year 1994. To determine whether financial audit practices and results had changed, we compared audits within IRS’s Audit Information Management System (AIMS) database that were completed before and after the reemphasis in 1994. 
We restricted our study population to audits of books and records that IRS conducted at district offices. This meant that we excluded limited-scope audits initiated solely to assess an additional tax, resulting from an audit of a partnership or corporation, audits opened as part of IRS’ nonfiler compliance initiative, audits of taxpayer claims, and substitutes for returns in which IRS prepares a return for a nonfiler. We expected that financial status techniques would have the potential to be used on the audits we included. To identify audits that were completed before auditors were exposed to the emphasis on financial status, we restricted the pre-1994 study population to the estimated 566,268 audits that had begun in the period from October 1, 1991, to October 31, 1992, and were completed by September 30, 1993. To identify the most current audits subsequent to the emphasis on financial status, we restricted the post-fiscal year 1994 study population to the estimated 421,039 audits that had begun in the period from October 1, 1994, to October 31, 1995, and were completed by September 30, 1996. We selected a probability sample of audited tax returns from each of the two time periods. We then obtained information about the audits by reviewing IRS’s workpapers. To obtain the sample of audits of books and records, we selected a stratified, probability sample of 1,232 tax returns from among all returns audited in district offices by revenue agents and tax auditors within the fiscal years 1992, 1993, 1995, and 1996 study periods. The samples were drawn for 1992 and 1993 and for 1995 and 1996. The audit associated with each selected tax return included all returns of a taxpayer that had been completed during the study periods. As two of the sampled returns were associated with the same audit, the initial sample of 1,232 returns resulted in a sample of 1,231 audits. These returns were stratified by year, income, and type of return as shown in table I.1. 
The division of the population and sample of audits among different types of returns is shown in table I.2. The low income, high income, and business columns contain audits associated with one or more returns from a single sample stratum. The mixed category contains the audits that included returns from more than one of the tax-return strata. Table I.2 also indicates that IRS could not locate audit workpapers for 187 audits and that, of the 1,044 audits for which workpapers were located, 838 were eligible for our study because they were books and records audits. The final sample for our analyses in this report consists of the 838 audits identified in the next-to-last row of table I.2. The items in the AIMS database that served as our sampling frame are individual tax returns, not audits. Because an audit can include multiple tax returns, the effect of multiple returns has been incorporated in the weighting of the sampled audits in the analysis. The weights and sampling errors have been calculated using a multiplicity estimator in which each sampled audit is weighted to account for the total number of associated returns in the AIMS sampling frame. The results shown in this report are estimates because they are based on the sample of audits drawn from the total population of all eligible audits. The accuracy of these estimates is quantified by their sampling errors, expressed as 95-percent confidence intervals. In table I.3, for example, the estimate that 24 percent of the 1992 audits used a financial status audit technique is surrounded by a confidence interval of ±5 percentage points, indicating that we are 95 percent confident that the actual percentage in the population of all audits lies between 19 and 29 percent.
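The multiplicity weighting and the interval arithmetic just described can be sketched as follows. The stratum sizes, sample counts, and effective sample size below are hypothetical; the actual weights and sampling errors reflect the full stratified design in table I.1.

```python
import math

# Sketch of the multiplicity weighting and 95-percent confidence
# interval arithmetic described above. Stratum sizes, sample counts,
# and the effective sample size are hypothetical illustrations.

def audit_weight(stratum_population, stratum_sample, linked_returns):
    # Base weight is the inverse sampling fraction of the return's
    # stratum; dividing by the number of frame returns linked to the
    # audit (the multiplicity) corrects for the audit's multiple
    # chances of selection.
    return (stratum_population / stratum_sample) / linked_returns

def proportion_ci(p_hat, n_effective, z=1.96):
    # Normal-approximation 95-percent interval for an estimated
    # proportion; n_effective stands in for the design effect of the
    # stratified, weighted sample.
    half = z * math.sqrt(p_hat * (1 - p_hat) / n_effective)
    return p_hat - half, p_hat + half

# An audit sampled from a stratum of 100,000 returns (200 sampled),
# linked to 2 frame returns:
print(audit_weight(100_000, 200, 2))  # 250.0

# A 24-percent estimate with an effective sample of about 280 yields
# roughly the 5-point half-width reported for the 1992 audits:
low, high = proportion_ci(0.24, 280)
print(round(low, 2), round(high, 2))  # 0.19 0.29
```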
The comparison column of the same table indicates that the difference of 3 percent between the 1992 and 1995 samples is surrounded by a 95-percent confidence interval of ±6 percentage points, indicating that we are 95 percent confident that the difference between the 1992 and 1995 audits lies between -3 and +9 percentage points. Since, in this instance, the 95-percent confidence interval included the possibility that there is no difference, we conclude that the estimated difference of 3 percent is not statistically significant. In addition to the reported sampling errors, various obstacles can occur when conducting this type of review and may cause other types of errors, commonly referred to as nonsampling errors. For example, differences in how questions are interpreted and errors in entering data could affect the results. We included steps in both the data collection and data analysis stages to minimize such nonsampling errors. These steps included a 100-percent review of completed data collection instruments (DCI) and of the data entry of those DCIs, as well as checking all computer analyses with a second analyst. Tables I.3 through I.5 describe our point estimates for the analysis of financial status audits and the related sampling errors. These tables include initial interview questions the AICPA considers inappropriate, shown as a percent of all audits with initial interviews documented; some of the reported percentages are less than 0.5 percent. The Office of Compliance Specialization, within IRS' Examination Division, has responsibility for Quality Measurement Staff operations and the Examination Quality Measurement System (EQMS). Among other uses, IRS uses EQMS to measure the quality of closed audits against nine IRS audit standards. The standards address the scope, audit techniques, technical conclusions, workpaper preparation, reports, and time management of an audit.
Each standard includes additional key elements describing specific components of a quality audit. Table II.1 summarizes the standards and the associated key elements.

Table II.1: Summary of IRS' Examination Quality Measurement System Auditing Standards (as of October 1996)

Standard 1: Measures whether consideration was given to the large, unusual, or questionable items in both the precontact stage and during the course of the examination. This standard encompasses, but is not limited to, the following fundamental considerations: absolute dollar value, relative dollar value, multiyear comparisons, intent to mislead, industry/business practices, compliance impact, and so forth.

Standard 2: Measures whether the steps taken verified that the proper amount of income was reported. Gross receipts were probed during the course of the examination, regardless of whether the taxpayer maintained a double-entry set of books. Consideration was given to responses to interview questions, the financial status analysis, tax return information, and the books and records in probing for unreported income.

Standard 3: Measures whether consideration was given to the filing and examination potential of all returns required of the taxpayer, including those entities in the taxpayer's sphere of influence/responsibility. Required filing checks consist of the analysis of return information and, when warranted, the pick-up of related, prior-, and subsequent-year returns. In accordance with Internal Revenue Manual 4034, examinations should include checks for the filing of information returns.

Standard 4: Measures whether the issues examined were completed to the extent necessary to provide sufficient information to determine substantially correct tax. The depth of the examination was determined through inspection, inquiry, interviews, observation, and analysis of appropriate documents, ledgers, journals, oral testimony, third-party records, etc., to ensure full development of relevant facts concerning the issues of merit. Interviews provided information not available from documents to obtain an understanding of the taxpayer's financial history, business operations, and accounting records in order to evaluate the accuracy of books and records. Specialists provided expertise to ensure proper development of unique or complex issues.

Standard 5: Measures whether the conclusions reached were based on a correct application of tax law. This standard includes consideration of applicable law, regulations, court cases, revenue rulings, etc., to support technical/factual conclusions.

Standard 6: Measures whether applicable penalties were considered and applied correctly. Consideration of the application of appropriate penalties during all examinations is required.

Standard 7: Measures the documentation of the examination's audit trail and the techniques used. Workpapers provided the principal support for the examiner's report and documented the procedures applied, tests performed, information obtained, and the conclusions reached in the examination.

Standard 8: Measures the written presentation of audit findings in terms of content, format, and accuracy. All necessary information is contained in the report, so that there is a clear understanding of the adjustments made and the reasons for those adjustments.

Standard 9: Measures the utilization of time as it relates to the complete audit process. Time is an essential element of the auditing standards and is a proper consideration in analyses of the examination process. The process is considered as a whole and at the examination initiation, examination activities, and case closing stages.

EQMS quality reviewers use the key element definitions to determine whether an audit adhered to a standard. Thus, adherence to audit quality is measured by the presence or absence of the associated key elements. For a standard to be rated as having been met, each of the associated key elements must also be rated as met or not applicable.
If the audit does not demonstrate the characteristics described by one of the key elements, then the standard is rated as not met. One measure that IRS uses to evaluate the audit quality is the standard success rate. It measures the percentage of cases for which all the underlying key elements of each standard are rated as having been met. According to IRS, this measure is useful for determining whether a case is flawed and in what area. Figures II.1 and II.2 show the standard success rates for each of the standards for fiscal years 1992 through 1996 for office and field audits, respectively. IRS also uses the key element pass rate as a measure of audit quality. This measure computes the percentage of audits demonstrating the characteristics defined by the key element. According to IRS, the key element pass rate is the most sensitive measurement and is useful when describing how an audit is flawed, establishing a baseline for improvement, and identifying systemic changes. Figures II.3 and II.4 show the pass rates for the key elements of Standard 2 for fiscal years 1992 through 1996 for office and field audits, respectively. The American Institute of Certified Public Accountants (AICPA) has been among the critics of IRS’ reemphasis on financial status audits since the program began in late 1994. During 1995 and 1996, officials from IRS and AICPA met several times to discuss these concerns and, to some extent, IRS mitigated the problems with memos clarifying the use of financial status techniques. 
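The key-element rule and the two EQMS quality measures just described can be sketched as follows. The audit records and element labels below are invented for illustration (loosely paraphrased from Standard 2); actual EQMS ratings come from quality reviewers' case reviews.

```python
# EQMS-style quality measures: a standard is "met" only if every associated
# key element is rated met or not applicable. The standard success rate is
# the percent of audits meeting all key elements; the key element pass rate
# is the percent of audits (where applicable) demonstrating one element.
# Review records below are hypothetical.

def standard_met(key_elements):
    """key_elements: dict mapping element -> 'met' | 'not met' | 'n/a'."""
    return all(v in ('met', 'n/a') for v in key_elements.values())

def standard_success_rate(audits):
    return 100 * sum(standard_met(a) for a in audits) / len(audits)

def key_element_pass_rate(audits, element):
    rated = [a[element] for a in audits if a[element] != 'n/a']
    return 100 * sum(v == 'met' for v in rated) / len(rated)

audits = [
    {'gross receipts probed': 'met',     'interview responses considered': 'met'},
    {'gross receipts probed': 'met',     'interview responses considered': 'not met'},
    {'gross receipts probed': 'not met', 'interview responses considered': 'n/a'},
    {'gross receipts probed': 'met',     'interview responses considered': 'n/a'},
]
print(standard_success_rate(audits))                           # 50.0
print(key_element_pass_rate(audits, 'gross receipts probed'))  # 75.0
```

This illustrates why IRS describes the key element pass rate as the more sensitive measure: a single failed element lowers the standard success rate, while the pass rate shows which element failed.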
AICPA has had a long list of concerns about actions taken by IRS auditors, including
- sending a personal living expense (PLE) form with the letter notifying taxpayers of the audit, before finding any evidence of underreported income;
- asking financial status questions at the initial interview, before having any evidence of underreported income;
- arriving unannounced to inspect a personal residence;
- bypassing a valid power of attorney and requesting information or records directly from taxpayers;
- interviewing taxpayers without the presence of their representative; and
- requiring taxpayers' representatives to submit a freedom of information request to obtain third-party documents on their clients.
Neither AICPA nor IRS had any objective data on these concerns. Using our sample, however, we were able to collect data on the first two concerns, involving PLE forms and financial status questions. As for the PLE forms, AICPA indicated that some audit notification letters asked taxpayers to complete this form even though IRS had no evidence of underreported income. AICPA officials believed this request was intrusive, burdensome, and costly to taxpayers. The officials said PLE information should be requested only after IRS had some objective evidence that taxpayers had underreported income on their tax returns. In reviewing the workpapers for our two samples, we looked for copies of notification letters. We found very few examples in which the letters asked taxpayers to complete a PLE form. On the basis of our sample, we estimate that IRS used the notification letter to request PLE forms in no more than 5 percent of the audits for both the 1992 and 1993 and the 1995 and 1996 samples. In a March 1996 memorandum, the Acting Assistant Commissioner (Examination) clarified the PLE instructions.
The memorandum indicated that while auditors had the responsibility to secure an overall financial picture of the taxpayer, they were not expected to automatically request PLE information with the notification letter. According to AICPA officials, the practice of sending PLE forms with the notification letters has decreased since the distribution of this memorandum. AICPA officials were also concerned that auditors were asking personal questions about the taxpayer's financial status at the initial interview before having any evidence of underreported income. Auditors use the initial interview to explain the audit process and the taxpayer's rights and to gain an understanding of the taxpayer's situation. Generally, auditors prepare workpapers to summarize these interviews. We reviewed these interview write-ups and collected data on the types of questions asked by auditors at this meeting. AICPA officials identified the questions that caused them concern. We collected information on whether auditors asked these questions both before and after IRS began reemphasizing financial status. We compared these two periods because AICPA had associated the questions with IRS' renewed emphasis on financial status audits, and the 1992 and 1993 period was just prior to this renewed emphasis. Table III.1 shows how often auditors asked these questions at initial interviews in the 1992 and 1993 audits and in the 1995 and 1996 audits. As shown in table III.1, with few exceptions, little difference exists in how often these questions were asked at initial interviews in the two periods. In his March 1996 memorandum to Regional Chief Compliance Officers, the Acting Assistant Commissioner (Examination) provided general guidance on how far to probe for unreported income at the initial interview. He emphasized that auditors must evaluate the facts and use judgment.
The memo further stated that performing in-depth income probes and asking questions about personal assets and expenditures were not effective uses of resources without a reasonable indication of unreported income. The following are GAO’s comments on the Internal Revenue Service’s letter dated November 20, 1997. 1. IRS suggested that we change the title of the report to respond to the first objective of our work and suggested a title that would point out that IRS has not increased the use of financial status techniques. IRS believed that by focusing on the need for more criteria, readers of the report would infer that IRS was being unnecessarily intrusive. We considered changing the title but decided against it for various reasons. First, our report already discussed the issue of intrusiveness, pointing out that use of the techniques did not necessarily mean intrusions into taxpayers’ affairs, particularly when such usage identified changes to reported income. Second, such a title would ignore the other three objectives of our report. We concluded that the focus on the need for more criteria not only could be associated with all four objectives but also with the actions needed to prompt improvements. 2. IRS said that the report cited no evidence of any increased intrusiveness and that the fact that use of the techniques led to no tax change does not diminish Examination’s responsibility to determine the correct tax liability. We believe that IRS misinterpreted our discussion of intrusiveness. In the draft report, we noted that the reason for no evidence of intrusiveness was that it was not available from IRS or others. We observed, however, that only the preliminary Cash-T results in no additional burden on the taxpayer, while the burden imposed through the use of other techniques varies depending on the amount of additional taxpayer contact. 
Also, our draft did not say that there is any relationship between the no change rate and IRS' responsibility to determine the correct tax liability. Accordingly, we made no changes to the report to reflect these comments.
Louis G. Roberts, Evaluator-in-Charge
Kathleen E. Seymour, Senior Evaluator
Samuel H. Scrutchins, Senior Data Analyst
Pursuant to a congressional request, GAO reviewed the Internal Revenue Service's (IRS) use of financial status audit techniques to: (1) estimate how frequently IRS used financial status audit techniques in audits closed in tax years prior to the 1994 initiative (1992 and 1993) and in tax years following the 1994 initiative (1995 and 1996); (2) consider how IRS' need to contact taxpayers for additional taxpayer information when using financial status techniques might intrude on taxpayers; (3) estimate the audit results from using financial status audit techniques in terms of the amount of adjustments to reported income; and (4) determine how IRS applied its audit standards, quality controls, and measurement of audit quality to the use of financial status techniques. GAO noted that: (1) on the basis of its review of samples of IRS audits completed before and after IRS reemphasized the use of financial status techniques, GAO found no statistically significant change in the frequency with which these techniques were used or in the types of returns for which the techniques were used; (2) during both periods, over 75 percent of the audits using financial status techniques involved individual returns with business or farm income--the types of taxpayers that IRS has historically found to be the most likely to underreport income; (3) financial status audit techniques vary in the need for taxpayer contact and how much additional burden or intrusiveness may be perceived by the taxpayer; (4) financial status audits have been criticized by tax professionals and others for, among other things, seeking information about financial status without having evidence of unreported income; (5) such intrusions into taxpayers' spending patterns could occur before the initial interview and during the initial interview; (6) IRS used the Personal Living Expense (PLE) form to inquire about expenses at the time of the notification letter in fewer than 5 percent of the audits for both the 1992 
and 1993 and 1995 and 1996 periods; (7) the case files showed that auditors infrequently asked intrusive, financial status type questions at the initial interview; (8) concerning the results, auditors made no adjustments to the individual's reported income attributable to the use of financial status audit techniques in 83 percent of the audits in which these techniques were used; (9) IRS has three tools to oversee the use of financial status audit techniques: (a) audit standards to guide auditors; (b) supervisory review of auditors' adherence to the standards; and (c) a system to measure adherence to the standards; (10) while these tools offered important controls over the use of the financial status techniques, they each have limitations; and (11) on the basis of GAO's review of IRS audit workpapers, the lack of specific criteria may have contributed to the relatively large percentage of audits in which the use of financial status audit techniques resulted in no adjustments to income.
Sulfur dioxide and nitrogen oxides have been linked to a variety of health and environmental concerns, and carbon dioxide has been linked to global warming. For example, sulfur dioxide and nitrogen oxides contribute to the formation of fine particles, and nitrogen oxides contribute to the formation of ozone. Both fine particles and ozone have been linked to respiratory illnesses: fine particles have been linked to premature death, aggravated asthma, and chronic bronchitis, while ozone can inflame lung tissue and increase susceptibility to bronchitis and pneumonia. In addition to affecting health, sulfur dioxide and nitrogen oxides reduce visibility and contribute to acid rain, which harms aquatic life and degrades forests. Carbon dioxide has been linked to increases in air and ocean temperatures. According to the National Academy of Sciences, such climate changes could, by the end of the century, cause rising sea levels, droughts, and wind and flood damage. Electricity generating units that burn fossil fuels, along with other stationary sources (such as chemical manufacturers and petroleum refineries) and transportation sources (such as cars), emit one or more of these substances. Figure 1 compares emissions of sulfur dioxide, nitrogen oxides, and carbon dioxide from fossil-fuel units to those from other sources in 1999, the most recent year for which data for all three substances were available. While the overall proportion of each substance emitted by fossil-fuel units varied—from 67 percent of all sulfur dioxide to 23 percent of all nitrogen oxides—these units emitted more of each substance than any other industrial source in 1999. Under the Clean Air Act, EPA establishes air quality standards and regulates emissions from a number of sources, including electricity generating units that burn fossil fuels.
The act required EPA to issue regulations establishing federal performance standards for new sources of air pollution within certain categories of stationary sources. Accordingly, EPA issued new source standards for certain generating units with a capacity greater than 73 megawatts that were built or modified after August 17, 1971. Over time, EPA has made the standards more stringent, subjecting other types of units and those with a lower generating capacity to the standards. The standards do not apply to older units built before that date that have not been modified, although some older units do meet the standards. In addition, under a program called New Source Review, older units must install modern pollution controls when they make “major modifications” that significantly increase their emissions. The level of control required depends on the air quality in the area where the unit is located—a unit in an area that does not meet federal air quality standards must install more stringent controls. Although older units are generally excluded from the new source standards, they are subject to the acid rain provisions of the Clean Air Act Amendments of 1990. The 1990 amendments directed EPA to reduce emissions of sulfur dioxide from electricity generating units by setting a limit, known as a “cap,” on emissions from all units and establishing an emissions trading program. Under the trading program, each unit received emissions “allowances” that represent the right to emit one ton of sulfur dioxide. The allowances may be bought, sold, or banked for use in later years, but generating unit owners or operators must own enough allowances at the end of each year to cover their annual emissions. Although the program did not start until 1995, some units affected by the program complied earlier, according to EPA, thereby reducing sulfur dioxide emissions by about 2.2 million tons between 1990 and the end of 1994. 
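The year-end allowance reconciliation under the trading program amounts to a simple ledger check: each allowance covers one ton of sulfur dioxide, and a unit complies only if its holdings cover its actual annual emissions, with any surplus bankable for later years. A minimal sketch, using illustrative figures rather than actual program data:

```python
# SO2 allowance trading: one allowance = the right to emit one ton.
# Allowances may be bought, sold, or banked; at year's end a unit's
# holdings must cover its actual emissions. Figures are hypothetical.

def allowances_held(allocated, purchased=0, sold=0, banked=0):
    return allocated + purchased + banked - sold

def in_compliance(emissions_tons, allocated, **flows):
    return allowances_held(allocated, **flows) >= emissions_tons

def bankable(emissions_tons, allocated, **flows):
    """Allowances left over at year's end, available for later years."""
    return max(0, allowances_held(allocated, **flows) - emissions_tons)

# A unit allocated 5,000 allowances that bought 500 more and emitted 5,200 tons:
print(in_compliance(5200, 5000, purchased=500))  # True
print(bankable(5200, 5000, purchased=500))       # 300
```

The cap comes from the fixed total of allocated allowances, while trading and banking let individual units choose how to comply.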
Between 1995 and the end of 2000, the affected units reduced their sulfur dioxide emissions by 2.5 million tons (from 13.7 million tons in 1994 to 11.2 million tons in 2000)—a decline of about 18 percent. EPA expects the program to result in further reductions in sulfur dioxide emissions between 2000 and 2010. To reduce emissions of nitrogen oxides, the acid rain provisions of the 1990 amendments limited the annual rate of emissions for individual units, rather than imposing an annual aggregate tonnage of emissions. To achieve emissions reductions while minimizing the burden on generators, the legislation allowed companies with multiple units to comply with the prescribed rate by averaging their emissions rates across two or more units and ensuring that the average did not exceed the prescribed rate. Thus, individual older units may continue to emit at levels above the prescribed annual emissions rate. Although the program started in 1996, some of the affected units complied earlier, according to EPA, thereby reducing emissions of nitrogen oxides by 700,000 tons between 1990 and the end of 1995. Between 1996 and the end of 2000, the affected units reduced their emissions of nitrogen oxides by 900,000 tons (from 6.0 million tons in 1995 to 5.1 million tons in 2000)—a decline of 15 percent. In 2000, older units emitted more sulfur dioxide and nitrogen oxides—and about the same amount of carbon dioxide—per unit of electricity produced than newer units. For each megawatt-hour of electricity generated, older units, in the aggregate, emitted about twice as much sulfur dioxide as newer units—12.7 pounds at older units, compared with 6.4 pounds at newer units. Older units also emitted about 25 percent more nitrogen oxides than newer units—4.7 pounds versus 3.8 pounds—for every megawatt-hour of electricity generated. Older and newer units both emitted about 1 ton of carbon dioxide for each megawatt-hour of electricity generated. (See fig. 2.) 
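The percentage declines and per-megawatt-hour comparisons cited in this section are straightforward ratios of the reported figures; a short sketch reproducing them:

```python
# Reproducing the ratios cited in the text (tonnages in millions of tons,
# emission rates in pounds per megawatt-hour, all taken from the report).

def pct_decline(start, end):
    return 100 * (start - end) / start

# SO2 under the trading program: 13.7 million tons (1994) -> 11.2 (2000)
so2_decline = pct_decline(13.7, 11.2)   # about 18 percent
# NOx: 6.0 million tons (1995) -> 5.1 (2000)
nox_decline = pct_decline(6.0, 5.1)     # 15 percent
# SO2 rates: older units 12.7 lb/MWh vs. newer units 6.4 lb/MWh
so2_ratio = 12.7 / 6.4                  # "about twice as much"

print(round(so2_decline, 1), round(nox_decline, 1), round(so2_ratio, 2))  # 18.2 15.0 1.98
```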
Overall, while generating 42 percent of the electricity, older units emitted 59 percent of the sulfur dioxide, 47 percent of the nitrogen oxides, and 42 percent of the carbon dioxide from fossil-fuel units. Units that began operating in 1972 or after were responsible for the remainder of the emissions and electricity production. Of the older units, those in the Mid-Atlantic, Midwest, and Southeast released most of the emissions, and in disproportionate quantities for the amount of electricity they produced. Specifically, older units in these regions accounted for 87 percent of the sulfur dioxide, 75 percent of the nitrogen oxides, and 70 percent of the carbon dioxide emitted from older units nationwide in 2000, while generating 67 percent of the electricity from all older units. (App. I presents, by state, data on older units’ electricity generation, emissions per megawatt-hour of electricity generated, and aggregate emissions.) Figures 3, 4, and 5 show the location of older units and the amount of sulfur dioxide, nitrogen oxides, and carbon dioxide they emitted in 2000. Older units that burned coal released a disproportionate share of emissions for the electricity they produced, compared with units burning natural gas and oil. Coal-burning units emitted 99 percent of the sulfur dioxide, 88 percent of the nitrogen oxides, and 85 percent of the carbon dioxide from older units nationwide, while generating 79 percent of the total electricity from older units. Older units generally do not have to meet the standards applicable to newer units, and in 2000, many of the older units emitted sulfur dioxide and nitrogen oxides at levels higher than what is permitted under the standards applicable to newer units for one or both of the pollutants. In that year, 36 percent of older units emitted sulfur dioxide at levels above the new source standard for that pollutant, and 73 percent emitted nitrogen oxides at levels above the new source standard. 
Approximately 31 percent of all older units emitted both pollutants at levels above the new source standards. As shown in figure 6, in 2000, 34 percent of the total sulfur dioxide emissions (2.13 million of 6.34 million tons) and 60 percent of the total nitrogen oxide emissions (1.41 million of 2.35 million tons) from older units were “additional” emissions—that is, emissions at levels above the standards applicable to newer units. The additional sulfur dioxide emissions represented 20 percent of the sulfur dioxide emissions from fossil-fuel units (older and newer), and the additional emissions of nitrogen oxides represented 28 percent of the emissions of nitrogen oxides from fossil-fuel units. Most of the additional emissions—91 percent of the sulfur dioxide and 78 percent of the nitrogen oxides—came from units located in the Mid-Atlantic, Midwest, and Southeast. Figures 7 and 8 show the level of additional emissions at older units in 2000. The majority of these emissions—99 percent of the sulfur dioxide and 91 percent of the nitrogen oxides—were from coal units, while other fossil fuel-burning units accounted for the remainder. As noted, the additional emissions shown in figure 6 represent the emissions by older units above the limits applicable to new sources. If the same older units had generated the same quantity of electricity in 2000 but had met the new source standards, total emissions would have been lowered by an amount equal to the computed additional emissions. However, a requirement that older units meet the standards could have reduced the quantity of electricity generated, raised the price of electricity, and/or shifted generation among units. Among other things, owners might have chosen to retire some older units rather than incur the costs of meeting the standards. 
According to a December 2000 Energy Information Administration study, requiring older coal units to install pollution control equipment would, by 2010, result in retirements that would reduce the nation's coal-based electricity generating capacity by 7 percent more than is otherwise projected (and the total U.S. capacity from all fuels by 3 percent), based on 1999 capacity levels. The study projected that such a requirement would cause operators of coal units to spend $73 billion to install pollution control equipment by 2020. The study also concluded that electricity prices in 2010 would be 4 percent higher with a requirement to install control equipment than they would be without one. If older units had been required to meet new source standards in 2000, to the extent practicable, other units might have increased their operations— for example, by running more hours each day—to meet the demand for electricity that would have otherwise been produced by the units that retired. Because it is not possible to determine exactly which units would have been retired or run more to meet the demand, it is not possible to quantify precisely what the emissions in 2000 would have been if all units had been required to meet the new source standards. In addition, generating units that increased production to meet the demand created by retirements could have purchased sulfur dioxide emissions allowances from the retired units. Thus, the net decrease in sulfur dioxide emissions would not have been as great as the level of additional emissions reported above. Similarly, it is difficult to predict precisely how such requirements would affect future emissions levels. Any new coal, natural gas, or oil units built to replace retired units would, at a minimum, have to meet the new source standards, which would reduce the emissions for each quantity of electricity generated.
To meet the new source standards, older units would need to switch fuels, or add or upgrade pollution control equipment. Some older units already use pollution control equipment or have taken other actions to reduce their emissions of sulfur dioxide or nitrogen oxides. For example, we found that 681 older units met the sulfur dioxide standard by burning coal with low sulfur content. We also found that the use of emissions controls did not necessarily indicate that the units met the new source standards. For example, 399 older units with equipment to control their nitrogen oxide emissions still exceeded the emissions standard applicable to newer units. We provided EPA with a draft of this report for review and comment. We subsequently received comments from the Office of Air Quality Planning and Standards, and the Office of Atmospheric Programs. EPA generally agreed with the information presented. Both offices suggested technical changes to the report, which we have incorporated as appropriate. To respond to the first objective, we reviewed information from the Energy Information Administration and EPA on air emissions, electricity generation, and the age of electricity generating units. While both agencies maintain such information, the data we needed for this analysis were not readily available in a user-friendly format. For example, EPA has reliable and timely emissions data, but the 2000 data were not available with information on electricity generation and the age of each unit. Because of these limitations, we obtained alternative data from Platts/RDI, a private vendor that integrates EPA’s emissions data with the Energy Information Administration’s data on electricity generation and the age of generating units. Specifically, we obtained and analyzed air emissions and electricity generation data for each active fossil-fuel unit above 15 megawatts in generating capacity that started operating before 1972. 
For newer units, we obtained data on aggregate national emissions and electricity generation at units with a capacity above 15 megawatts. We chose 15 megawatts as the threshold capacity because units above that capacity accounted for almost all (about 99 percent) of the electricity generation from all fossil-fuel units in 2000. Because data on air emissions and the use of control equipment were available for only 1,157 of the 1,396 active units (83 percent), the data may not fully represent the total level of emissions and the number of units using control equipment. However, the units that did not report emissions data generated less than 1 percent of the electricity from older units and therefore are not likely to have produced large quantities of emissions. To respond to the second objective, we identified the applicable new source standard for each type of unit, as listed in the Code of Federal Regulations, Title 40, part 60. We then determined the difference between the actual rate of emissions at each unit, in pounds of pollutant per unit of fuel consumed, and the rate allowed under the standard that applies to newer units with the same capacity that burn the same fuel. We then multiplied the difference by the amount of fuel burned in 2000 to determine the annual level of “additional” emissions. In cases where EPA has not issued a standard for a particular type of unit, we excluded such units from our analysis of additional emissions. Regulations for some types of generating units were promulgated after 1971, but for purposes of this report we have not distinguished these units and have classified them as newer or older units based on their age. For example, EPA promulgated a regulation in 1978 requiring certain electric utility steam-generating units to meet new source standards. 
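The additional-emissions computation described above reduces to the excess of each unit's actual emissions rate over the applicable new source standard, multiplied by the fuel burned, with units lacking a standard excluded. A minimal sketch, with invented unit records rather than actual data:

```python
# "Additional emissions" per the methodology: for each older unit,
# max(0, actual rate - standard rate) x fuel burned, summed over units;
# units with no applicable standard are excluded. Records are illustrative.

def additional_emissions_tons(actual_rate, standard_rate, fuel_burned):
    """Emissions above the new source standard, in tons (2,000 lb each)."""
    if standard_rate is None:          # no standard issued: excluded
        return None
    excess_rate = max(0.0, actual_rate - standard_rate)
    return excess_rate * fuel_burned / 2000.0

units = [
    # (actual lb per unit of fuel, standard rate, fuel burned in 2000)
    (1.8, 1.2, 50_000_000),   # exceeds the standard
    (0.9, 1.2, 40_000_000),   # already meets it: contributes zero
    (2.0, None, 10_000_000),  # no applicable standard: excluded
]
results = [additional_emissions_tons(*u) for u in units]
total = sum(r for r in results if r is not None)
print(f"{total:,.0f} tons of additional emissions")  # 15,000 tons of additional emissions
```

Note that a unit emitting below its standard contributes zero rather than a negative amount, so the total measures only emissions above the new source limits.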
However, if one of these units was constructed after August 17, 1971, but before September 18, 1978, we classified it as a newer unit even though it would not have to meet the new source standard. We did not attempt to estimate the costs or benefits of requiring older units to meet the new source standards. Therefore our analysis does not allow us to comment on the economic or energy security implications of requiring older units to meet the standards. We conducted our work between October 2001 and May 2002 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chairman and Ranking Minority Member of the House Committee on Energy and Commerce and its Subcommittee on Energy and Air Quality; the House Committee on Government Reform and its Subcommittee on Energy Policy, Natural Resources, and Regulatory Affairs; the Ranking Minority Member of the Senate Committee on Environment and Public Works, and its Subcommittee on Clean Air, Wetlands, and Climate Change; other interested members of Congress; the Administrator, EPA; the Secretary of Energy; the Director of the Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-3841. Key contributors to this report are listed in appendix II. Table 1 presents, by state, data on older units’ electricity generation; emissions of sulfur dioxide, nitrogen oxides, and carbon dioxide; and aggregate emissions of these substances. In addition to the individuals named above, Michael Hix, Chase Huntley, Vincent Price, and Laura Yannayon made key contributions to this report. 
Important contributions were also made by Cynthia Norris, Frank Rusco, and Amy Webbink.
Although fossil fuels--coal, natural gas, and oil--account for more than two-thirds of the nation's electricity, generating units that burn these fuels are major sources of airborne emissions that pose health and environmental risks. To limit emissions and protect air quality, the Environmental Protection Agency regulates emissions of sulfur dioxide and nitrogen oxides from a variety of sources, including electricity generating units that burn fossil fuels, other industrial sources, and automobiles. Older electricity generating units--those that began operating before 1972--emit 59 percent of the sulfur dioxide and 47 percent of the nitrogen oxides attributable to fossil-fuel units, while generating 42 percent of the electricity those units produce. Units that began operating in or after 1972 are responsible for the remainder of the emissions and electricity production. For equal quantities of electricity generated, older units, in the aggregate, emitted twice as much sulfur dioxide and 25 percent more nitrogen oxides than newer units, which must meet the new source standards for these substances. Older and newer units emitted about the same amount of carbon dioxide for equal quantities of electricity generated. Of the older units, those in the Mid-Atlantic, Midwest, and Southeast produced the majority of the emissions, and in disproportionate quantities for the amount of electricity they generated compared with units located in other parts of the country. Older units that burned coal released a disproportionate share of emissions for the electricity they produced compared with units burning natural gas and oil. In 2000, 36 percent of older units emitted sulfur dioxide at levels above the new source standards applicable to newer units, and 73 percent emitted nitrogen oxides at levels above the standards. These "additional" emissions--those above the standards for newer units--accounted for 34 percent of the sulfur dioxide and 60 percent of the nitrogen oxides produced by older units.
Coal-burning units emitted 99 percent of the additional sulfur dioxide and 91 percent of the additional nitrogen oxides, while other fossil fuel-burning units accounted for the remainder.
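The "additional" emissions figures above come from a straightforward per-unit comparison described in the methodology: subtract the emissions rate allowed under the applicable new source standard from a unit's actual rate, then multiply any excess by the fuel the unit burned in 2000. A minimal sketch in Python (the unit data and standard rate below are illustrative values, not actual EPA figures):

```python
def additional_emissions(actual_rate, standard_rate, fuel_burned):
    """Annual emissions above the new source standard for a single unit.

    actual_rate and standard_rate are in pounds of pollutant per unit of
    fuel consumed; fuel_burned is the fuel consumed during the year.
    Units already meeting the standard contribute no additional emissions.
    """
    excess_rate = max(actual_rate - standard_rate, 0.0)
    return excess_rate * fuel_burned

# Hypothetical older coal unit: 1.8 lb of SO2 per MMBtu against a
# 1.2 lb/MMBtu standard, burning 50 million MMBtu of coal in 2000,
# yields roughly 30 million lb of "additional" SO2 for the year.
so2_above_standard = additional_emissions(1.8, 1.2, 50_000_000)
```

Summing this quantity across all older units for which emissions data were reported gives the aggregate additional-emissions figures cited above.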
The electricity industry is the largest industry in the United States. According to the Department of Energy’s (DOE) Energy Information Administration (EIA), the industry had total assets worth about $700 billion in 1993 and has revenues of about $200 billion annually. Nuclear power plants have provided about 20 percent of the nation’s electricity in recent years. Most nuclear power plants are owned and operated by investor-owned utilities. Investor-owned utilities comprise only about 8 percent of the nation’s 3,200 electric utilities but generate and sell over 75 percent of the electricity. One such utility—the Commonwealth Edison Company—received the former Atomic Energy Commission’s first license to operate a civilian nuclear power plant almost 40 years ago. Since then, the Atomic Energy Commission and its successor regulatory agency—the Nuclear Regulatory Commission (NRC)—have issued operating licenses for a total of 125 plants. Twenty-one of the plants licensed to operate have been permanently retired, leaving 104 with operating licenses. The Atomic Energy Act of 1954, as amended, and the Energy Reorganization Act of 1974, as amended, require NRC to, among other things, protect the radiological health and safety of the public. Under this mandate, NRC licenses nuclear power plants to operate for up to 40 years and continually regulates the utility-licensees’ operation of these plants. In addition, NRC permits utilities to seek license extensions of up to 20 years. Decommissioning a nuclear power plant involves dismantling the structures and equipment at the plant, properly disposing of the resulting radioactive and other wastes, and then ensuring that the plant site complies with applicable environmental standards. Decommissioning involves a combination of technical, financial, and regulatory challenges. 
For example, the nuclear reactor vessel, other plant components, and concrete surfaces of various rooms in the plant are radioactive or contaminated with radioactive material. Therefore, the processes of maintaining the plant in a safe condition prior to dismantling it and disposing of the resulting radioactive wastes require constant attention to protecting workers and the public from exposure to radiation. The interval of time between the initial operation of a plant and its eventual dismantling also presents challenges to licensees and NRC. This interval can be as short as a few years if a plant is retired earlier than expected and dismantled shortly thereafter or as long as 40 to 60 years if a plant operates for an extended license period. In lieu of dismantling a plant immediately after its retirement, a utility may instead elect to decommission a plant by placing the plant in safe storage before dismantling it, as long as the entire decommissioning process is completed within 60 years. This feature of NRC’s regulation allows utilities to defer dismantling a retired plant if they (1) are awaiting the retirement of a colocated plant, (2) need to give DOE time to remove all of the spent (used) fuel from the plant, or (3) need to allow the radioactivity in the plant to decay before dismantling the plant, among other things. Finally, the financial aspects of decommissioning also present challenges to utility-licensees. For example, although actual decommissioning experience is limited, decommissioning a single plant is expected to cost hundreds of millions of dollars. NRC does not have the authority to regulate the manner in which licensees recover from their customers the costs of constructing, operating, and decommissioning nuclear power plants. Most licensees are investor-owned utilities that traditionally have been provided a monopoly within their service areas.
In return, these utilities built generating plants, including nuclear, coal, gas, and hydro power plants, and transmission and distribution facilities to provide electricity for all of the existing and future customers within their service areas. Under this traditional “cost-of-service” regulation, state public utility commissions approved electricity rates that reflected the utilities’ costs of building and operating their electricity systems and approved the financial returns on these investments. Similarly, the interstate aspects of the electric utility industry, including financial transactions, wholesale rates, and interconnection and transmission arrangements, are regulated by the Federal Energy Regulatory Commission (FERC). In this context, utilities’ proposed arrangements to finance the decommissioning of their nuclear plants are a part of their financial operations that are subject to review and approval by their respective state public utility commissions and FERC. NRC’s authority to require utilities to accumulate funds to decommission their nuclear power plants is derived from its responsibilities under the Atomic Energy Act of 1954, as amended, to regulate the safety of nuclear power. Until 1988, NRC required licensees to certify that sufficient financial resources would be available when needed to decommission their nuclear power plants but did not require these licensees to make specific financial provisions for decommissioning. On July 26, 1988, NRC’s original regulations on the technical and financial aspects of decommissioning became effective. By then, NRC had licensed 114 plants to operate. NRC’s 1988 regulations provided utilities with the following options for providing decommissioning financial assurance:

- Prepayment of cash or liquid assets into an account segregated from the licensee’s assets and outside the licensee’s administrative control. Prepayment may be made in the form of a trust, escrow account, government fund, certificate of deposit, or deposit of government securities.
- External sinking funds, established and maintained through the periodic setting aside of funds in an account segregated from the licensee’s assets and outside the licensee’s administrative control. An external sinking fund may be in the same forms permitted for prepayment.
- A surety method or insurance. A surety method may be in the form of a surety bond, letter of credit, or line of credit payable to a trust established for decommissioning costs.
- For “federal licensees,” such as the Tennessee Valley Authority, a statement of intent that decommissioning funds will be obtained when necessary.

NRC recognized both the uncertainty over decommissioning costs and the authority of public utility commissions and FERC to regulate the economic affairs of utilities. Therefore, NRC approached the regulation of the financial aspects of decommissioning by requiring utilities to provide “reasonable assurance” that sufficient funds would be available to decommission their nuclear power plants when the plants are permanently shut down. Among other things, NRC required, by July 27, 1990, each holder of an operating license to (1) certify that the licensee would provide the required financial assurance for decommissioning; (2) calculate, using a formula contained in NRC’s regulations, the minimum amount (expressed in current-year dollars) that utilities would accumulate for decommissioning their plants by the time they expect to retire them; and (3) provide a copy of the financial instrument(s) executed to provide the required financial assurance. Essentially all utilities have elected the option of establishing external sinking funds to finance future decommissioning costs.
A portion of the charge that utilities’ customers pay for their electricity is earmarked for deposit in these funds, and the funds are invested to earn income. In its regulations, NRC deferred to utilities and their rate regulators the details of collecting the required decommissioning funds. NRC requires only that the amount actually accumulated by the end of a plant’s operating life equal the projected cost to decommission the plant. About 5 years before the projected end of plant operations, NRC requires a utility to submit a preliminary decommissioning cost estimate that includes an up-to-date assessment of the major factors that could affect the cost to decommission its plant. If necessary, the cost estimate must also include plans for adjusting decommissioning funding to provide reasonable assurance that funds will be available when needed to cover the cost of decommissioning. Finally, not later than 2 years after a plant has been permanently shut down, the utility must submit to NRC a decommissioning report that includes, among other things, a site-specific decommissioning cost estimate. After about 10 years of experience with NRC’s 1988 decommissioning regulations, the electricity industry has begun to change in ways that have prompted NRC to reassess the adequacy of its regulations governing nuclear power plants, including financial assurances for decommissioning retired plants. Over the next 10 years or so, many states are expected to replace their traditional systems of economic regulation of monopolistic electric utilities with more-competitive, less-regulated environments, mainly for the generation of electricity but, to a lesser degree, for the transmission and distribution of electricity as well.
Competition, according to NRC, could result in economic pressures that will affect the availability of adequate funds for decommissioning and how utilities address maintenance and safety in nuclear power plant operations. Currently, the Congress is considering a number of bills to restructure the retail electricity industry to promote a more efficient and market-driven industry. Also, as of September 1997, 49 states had considered reforming their retail electricity markets. As of June 1, 1998, FERC and at least 18 states had either enacted legislation or issued comprehensive regulatory orders implementing plans to restructure the industry. In California, for example, a plan to produce competitive electricity markets and allow consumers to choose their electricity supplier went into effect in March 1998. Also, some of these initiatives would encourage or require the restructuring of the affected electricity industry. Specifically, utilities that have traditionally generated, transmitted, and distributed electricity would be encouraged or required to separate the operation of electricity generation systems from the operation of transmission and distribution systems. Concerned about the potential costs to decommission nuclear plants and the implications of a competitive electricity environment for the ability of plant owners to finance decommissioning projects, the congressional requesters of this report asked us to determine if (1) there is adequate assurance that NRC’s licensees are accumulating enough funds to decommission their nuclear power plants when the plants are retired and (2) NRC is adequately addressing the effects of electricity deregulation on the funds that will eventually be needed for decommissioning. To address both of our objectives, we met with, and obtained documentation from, officials of the following organizations:

- NRC, Rockville, Maryland.
- Nuclear Energy Institute, Washington, D.C. (The Institute represents the nuclear industry, including utilities that operate nuclear power plants.)
- National Association of Regulatory Utility Commissioners. (The Association represents public utility commissions and other state-level rate-setting entities.)
- National Nuclear Safety Network (a public interest organization).
- The public utility commissions of Oregon (Salem), Maryland (Baltimore), and New Hampshire (Concord).
- Portland General Electric (Portland, Oregon); Commonwealth Edison (Chicago, Ill.); the Office of Consumer Advocate (Concord, N.H.); and Moody’s Investors Service (New York, N.Y.).

To address the adequacy of assurance that NRC’s licensees are accumulating enough decommissioning funds, we also met with, and obtained documentation from, TLG Services, Inc., which prepares decommissioning cost estimates for owners/licensees of nuclear power plants, and Dr. Bruce Biewald, a consultant to groups that participate in state public proceedings on setting electricity rates, including charges for decommissioning. We also analyzed whether licensees or their parent companies (1) have accumulated decommissioning funds at a rate consistent with the percentage of their reactors’ operating life already used up (i.e., the fund for each reactor should equal this percentage times the present value of its future decommissioning cost) and (2) were, as of 1997, adding enough money to their decommissioning funds (assuming that contributions in future years will increase at the funds’ after-tax rate of return) to accumulate sufficient funds to decommission their plants when they are retired. The scope and methodology that we used in these two analyses are discussed in appendix I.
To address whether NRC is adequately considering the effects of electricity deregulation on the funds that will eventually be needed for decommissioning, we also obtained and reviewed public comments on NRC’s advance notice of proposed rulemaking for decommissioning financial assurances and on the subsequent proposed rule. We conducted our review from October 1997 through March 1999 in accordance with generally accepted government auditing standards. We analyzed the status of decommissioning funding as of December 31, 1997 (the year of the most recent data available), for 76 licensees that own all or part of 118 operating and retired nuclear power plants. We performed this analysis because NRC had not, for its own regulatory purposes, systematically collected and analyzed information on its licensees’ decommissioning funds. Our analysis showed that, under likely assumptions about future rates of cost escalation, net earnings on the investments of funds, and other factors, 36 of the licensees had not accumulated funds at a rate that is sufficient for eventual decommissioning. Under these conditions, these licensees will have to increase the rates at which they accumulate funds to meet their future decommissioning financial obligations. Under more pessimistic (unfavorable) and more optimistic (favorable) assumptions, 72 and 8 licensees, respectively, had not accumulated funds at a sufficient rate. We also analyzed whether licensees had recently increased the amount of funds that they had collected to make up for under-collections in earlier years. For this analysis, we compared the amounts collected in 1997 with the annual average of the present value of the amount of funds needed to meet licensees’ funding obligations when their plants’ licenses expire. We found that, under likely assumptions, 17 companies collected less in 1997 than they need to collect each year over their plants’ remaining operating life.
The 17 companies included 15 companies that had not collected sufficient funds through 1997. Under more pessimistic and optimistic assumptions, 66 and 4 licensees, respectively, need to increase the amount of funds that they collect in future years. Our funding analysis generally assumes that nuclear power plants would operate for their current licensed operating period—usually 40 years—and that the licensees will remain financially solvent. No plant, however, has yet operated for the full period of its operating license, and electricity deregulation is expected to cause or contribute to more premature plant retirements. Furthermore, 19 of 26 plants that one Wall Street firm considers at risk for early retirement are owned, in whole or in part, by companies that have been slow to accumulate funds to decommission their plants. So far, however, neither early plant retirements nor licensee bankruptcies have adversely affected decommissioning. Economic regulators have allowed utilities to charge their customers rates that included amounts for decommissioning plants that were retired early, and courts have permitted the continued accumulation of decommissioning funds during bankruptcy proceedings. From 1990 through 1997, most licensees’ estimates of the costs to decommission their plants have increased rapidly. Likewise, the utilities’ periodic calculations, using a formula contained in NRC’s regulations, of the minimum amount that they must accumulate in their decommissioning funds generally have been escalating more rapidly (particularly in recent years) than the site-specific cost estimates. Also, there are uncertainties over what the actual decommissioning costs might be. For example, the eventual resolution of a protracted dispute between NRC and the Environmental Protection Agency (EPA) over appropriate radiation standards for decommissioned sites could affect final decommissioning costs. 
NRC requires licensees using external sinking funds for decommissioning financial assurance to deposit funds collected for decommissioning into their funds each year. For two reasons, however, NRC does not know if licensees are accumulating decommissioning funds at rates that will provide enough funds to decommission their plants when the plants have been retired. First, NRC leaves the amounts to be put aside up to licensees and their public utility commissions. Second, until recently, NRC has not required that licensees report on the status of their decommissioning funds. We analyzed the status of decommissioning funds, as of the end of 1997, for 118 operating and retired nuclear plants owned by 76 licensees (or the parent companies of subsidiaries that are the legal owners of the plants). In our first analysis, we compared the total amount of each licensee’s decommissioning funds with the expected amount of funds that should have been accumulated by that date. To determine the expected amount, we assumed that licensees would accumulate increasing (but constant present-value) amounts annually. Once in the fund, each yearly contribution would continue to grow at the fund’s after-tax rate of return. The sum of these annual amounts, plus the income earned on the investments of the funds, would equal the total estimated decommissioning costs when the licensees’ plants’ operating license expires. For example, at the end of 1997, a licensee’s decommissioning fund for a plant that had operated half of a 40-year license period (begun in 1977) should equal one-half of the present value of the estimated cost to decommission the plant beginning after 2017. 
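The expected-funding benchmark in this example can be sketched as follows (the figures are illustrative; the actual analysis in appendix I uses licensee-specific cost estimates and scenario-dependent rates):

```python
def expected_fund_balance(cost_at_retirement, years_operated,
                          license_years, net_rate, years_to_retirement):
    """Benchmark fund balance under a constant-present-value funding stream.

    Each year's contribution has the same present value and, once deposited,
    grows at the fund's after-tax rate of return (net_rate).  After a
    fraction f of the license period, the fund should therefore hold f times
    the present value of the cost due at retirement.
    """
    fraction_elapsed = years_operated / license_years
    pv_of_cost = cost_at_retirement / (1 + net_rate) ** years_to_retirement
    return fraction_elapsed * pv_of_cost

# Illustrative plant: halfway through a 40-year license in 1997, with a
# hypothetical $400 million decommissioning bill due after 2017 and an
# assumed 5 percent after-tax rate of return on the fund's investments.
benchmark_1997 = expected_fund_balance(400e6, 20, 40, 0.05, 20)
```

A licensee whose actual 1997 balance fell below this benchmark was counted, in our first analysis, as having accumulated funds at an insufficient rate.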
This expected level of funding is not the only funding stream that could accrue to equal future decommissioning costs, but it provides us with both a common standard for comparisons among licensees and, from an equity perspective among ratepayers in different years, a financially reasonable growing current-dollar funding stream over time. Appendix I describes our methodology, assumptions, and results for each of the 76 licensees. Performing this analysis required that we make assumptions about future economic and plant-operating conditions. Key assumptions included initial decommissioning cost estimates, rates of cost escalation, net earnings on the investments of funds (discount rate), plant-operating periods, and the use of decommissioning funds for both radiation- and non-radiation-related decommissioning activities. Because of the inherent uncertainty associated with assuming future conditions over many years, we used assumptions of the most likely future conditions to develop a baseline scenario. To bound the results of the baseline scenario, we also developed pessimistic and optimistic scenarios using unfavorable and favorable economic and plant-operating conditions, respectively. For our baseline scenario, 36 of the 76 licensees (47 percent) had not accumulated funds at a rate that is sufficient for eventual decommissioning. Under these conditions, these licensees will have to increase the rates at which they accumulate funds to meet their future decommissioning financial obligations. Changing assumptions to reflect the pessimistic and optimistic scenarios greatly affects the adequacy of the licensees’ funding. Under pessimistic and optimistic assumptions, 72 (95 percent) and 8 (11 percent) licensees, respectively, had not accumulated funds at a sufficient rate for eventual decommissioning.
The fact that a licensee might have collected funds for decommissioning at a lesser rate than the expected rate does not, by itself, mean that the licensee will not meet its financial obligations by the time it retires its plants. By increasing their rates of collection, these licensees can still accumulate the funds that are necessary. Therefore, to obtain insights on whether licensees are now collecting funds at adequate rates, we undertook a second analysis. We compared the available amounts that each licensee collected in 1997 with the average yearly present value of the amounts that the licensees would have to accumulate each year over the remaining life of their plants to have enough decommissioning funds upon the retirement of the plants. This analysis assumes that the licensees will increase their yearly future funding at the after-tax rate of return on the investments of their funds. And, once in the fund, these yearly contributions will grow at this same rate. Our analysis shows these results for the baseline (most likely), pessimistic, and optimistic scenarios. For the baseline, the results show that only 17 of 76 licensees (22 percent) were not yet collecting the amounts that they will need to meet their decommissioning obligations. Thus, while 47 percent of the licensees had less than expected levels of funds at the end of 1997, only 22 percent did not appear to be currently on track, as represented by the funds that they collected in 1997, to eventually meet their decommissioning financial obligations. In other words, while licensees might not have funded sufficiently in the early years of their plants’ operating life, our results suggest that most licensees have recently increased funding to make up the funding shortfalls from earlier years. But if conditions deteriorate from those assumed in our baseline scenario, as represented by the pessimistic scenario, 66 licensees (87 percent) under-collected funds in 1997. 
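Under the stated assumptions, a deposit made in year t of c(1+r)^t grows to c(1+r)^n by retirement, so n such deposits cover the remaining shortfall exactly when the first-year amount c equals the shortfall's present value divided by the number of remaining years. A minimal sketch of that comparison, with hypothetical figures rather than licensee data:

```python
def required_first_year_contribution(cost_at_retirement, fund_balance,
                                     net_rate, years_remaining):
    """First-year contribution implied by the report's funding assumptions.

    Future contributions grow at the fund's after-tax rate of return
    (net_rate), and each deposit then earns that same rate, so every
    deposit is worth first_year * (1 + net_rate)**years_remaining at
    retirement.  The shortfall is covered when years_remaining such
    deposits equal the gap between the eventual cost and the current
    balance grown forward to retirement.
    """
    growth = (1 + net_rate) ** years_remaining
    shortfall_at_retirement = cost_at_retirement - fund_balance * growth
    return max(shortfall_at_retirement, 0.0) / (growth * years_remaining)

# Hypothetical licensee: a $400 million bill due in 20 years, $100 million
# on hand, and an assumed 5 percent net return on invested funds.
needed_in_1997 = required_first_year_contribution(400e6, 100e6, 0.05, 20)
```

Comparing this required amount with what a licensee actually collected in 1997 is, in essence, the second analysis: collecting less than the required amount indicates a licensee is not yet on track.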
Conversely, under the optimistic scenario, only 4 licensees (5 percent) are currently accumulating funds too slowly. If a nuclear power plant is retired prematurely, sufficient funds may not have been collected by the retirement date to pay all decommissioning costs. To date, 21 plants have been retired before their licenses expired. So far, however, public utility commissions have permitted licensees to continue collecting the funds for decommissioning from the licensees’ electricity customers after these plants were retired. To date, no plant has operated for its full licensed operating life, and 21 plants have been retired before their operating license would have expired. (See table 2.1.) Two of the 21 plants operated for as long as 25 years. Fifty-two of the 104 plants that are currently licensed to operate have operated from 20 to 30 years. Nine commercial nuclear power plants were permanently shut down before NRC issued its original decommissioning regulations. Eight of these retired plants are in safe storage. The ninth plant (Pathfinder), which was a small demonstration plant, has been decommissioned. Twelve commercial nuclear power plants have been retired since NRC issued its decommissioning financial assurance regulations. Four of these plants are in safe storage. Two plants—Fort St. Vrain and Shoreham—have been decommissioned. Five plants are currently being dismantled, and the owner of one plant has not yet decided whether to dismantle the plant soon or put it in safe storage. The five plants that are now being dismantled—Big Rock Point, Haddam Neck, Maine Yankee, Trojan, and Yankee Rowe—were retired before their owners had accumulated sufficient funds to decommission them. For example, the Trojan plant was retired in 1992 after 17 years of operation. At that time, the plant’s licensees estimated that decommissioning the plant would cost $198 million (in 1993 dollars).
However, the licensees had accumulated only $43 million, or 22 percent, of that amount. The Maine Yankee plant was permanently shut down in 1997 after 24 years of operation. When the plant was retired, the licensee had accumulated $188 million for decommissioning. That amount was only 53 percent of the $357 million (in 1997 dollars) that the licensee estimated would be needed to decommission the plant. In both of these cases, as well as in other states where retired nuclear plants are located, public utility commissions are permitting the licensees to continue collecting decommissioning funds from their customers even if their plants were retired early. Industry experts, such as major financial institutions, and DOE’s Energy Information Administration anticipate that the deregulation and restructuring of the electricity industry could result in the early retirement of from 9 to 40 percent of the nation’s nuclear power plants because these plants may not be competitive with other sources of electricity. In April 1998, Standard & Poor’s predicted that poor economics would cause the early retirements of six plants by 2001. (See table 2.2.) The company also concluded that another 20 units are “at risk” through 2020 for early retirement on the basis of expected poor operating and economic performance over the remainder of the plants’ license. According to the company, in a competitive market, plant owners will attempt to improve profitability; however, the vulnerability of these plants to unscheduled outages may squeeze operating margins and cause the plants to lose their long-term value. In commenting on our report, NRC pointed out that one plant that Standard & Poor’s listed as “at risk” for premature retirement—Pilgrim—is in the process of being sold. The prospective buyer, NRC added, intends to operate the plant for its full license term and will consider seeking a license extension for the plant. 
This example, NRC said, serves to illustrate both the speculative and controversial nature of projecting the premature retirements of nuclear power plants. Other experts, however, have reached conclusions that are similar to Standard & Poor’s. For example, in January 1999, Synapse Energy Consultants, Inc., a firm that often testifies in electricity rate proceedings conducted by state public utility commissions, concluded in a report that, depending upon the assumptions used, from 20 to 90 nuclear power plants may be retired early. The most likely case, according to the authors of the report, is that 34 plants will be retired early. Nineteen of the 26 plants that Standard & Poor’s predicts may be retired early are also included in Synapse’s list of 34 plants that it believes may be retired early. Compounding the risk that more nuclear power plants may be retired prematurely is the possibility that the licensees that own these plants may have, so far, under-accumulated funds to decommission these plants. For example, 19 of the 26 plants that may be retired early, according to Standard & Poor’s predictions, are owned, in whole or in part, by 14 licensees that have not accumulated sufficient decommissioning funds, according to our analysis. Additional predictions of more early plant retirements have also been made. For example, in December 1997, EIA projected that 24 nuclear plants would retire as early as 10 years before their licenses expire. In 1995, Moody’s concluded that at least 10 nuclear plants may be closed for economic reasons if the generation of electric power is completely deregulated. One year later, Moody’s downgraded the bond ratings of 24 electric utilities that operate nuclear plants. Again, in 1997, Moody’s said that the frequency with which certain nuclear plants require expensive capital additions to comply with their operating licenses increases the likelihood of even more early plant retirements.
The premature retirement of the Zion-1 and Zion-2 nuclear power plants in January 1998 illustrates the effect of deregulation on power plant economics. The Commonwealth Edison Company determined that the plants could not generate electricity at competitive prices in the deregulated environment. Therefore, the utility decided to retire both plants after about 24 years, or 60 percent, of their licensed operating life. When the plants were permanently shut down, the utility had put aside $362 million, or less than 43 percent of the $834 million estimated to be needed to decommission the two units. According to officials of Commonwealth Edison, however, under Illinois law the utility is authorized and directed to include in the rates that it charges its electricity customers amounts for the necessary and prudent decommissioning costs for these plants. In addition to early plant retirements, licensees of nuclear power plants have declared bankruptcy in a few cases. So far, the continuing availability of decommissioning funding has been protected in these cases. For example, the Cajun Electric Cooperative owned 30 percent of the River Bend, Louisiana, plant. The Cooperative went bankrupt in 1994, and a bankruptcy settlement was approved on August 26, 1996. The settlement provided for the transfer of $125 million to an external trust to satisfy Cajun’s share of River Bend’s estimated decommissioning cost of $419 million (in 1996 dollars). But the settlement left open the question of who would succeed to Cajun’s share of the plant. The court order provided that the bankruptcy trustee and parties to the settlement were to take all necessary and appropriate actions to consummate the settlement by June 1, 1997, including finding a buyer for Cajun’s share of River Bend. On November 28, 1997, NRC’s staff approved the transfer of Cajun’s portion of River Bend’s license to Entergy Gulf States, Inc., which is now the sole owner of this plant. 
NRC’s staff concluded that Entergy Gulf States was financially qualified to contribute appropriately to the plant’s decommissioning. Another bankruptcy case involved the El Paso Electric Company, which owns 16 percent of the three-unit Palo Verde Nuclear Generating Station in Arizona. The company filed for bankruptcy protection in 1992, primarily because of excess generating capacity and insufficient rates to cover the costs of power. The settlement of the bankruptcy filing became effective in 1996, at which time the company emerged with reduced debt and a stronger financial position. During the bankruptcy proceeding, according to an NRC official, the company continued to make its required decommissioning payments. For our funding analyses, we assumed, among other things, that current estimates of decommissioning costs are accurate. Because actual decommissioning experience is limited, however, actual costs could be lower or higher. From 1990 through 1997, cost estimates increased rapidly for both site-specific studies by licensees and calculations using NRC’s cost-estimating formula. Moreover, uncertainties about the actual scope of decommissioning affect costs. Utilities, for example, sometimes consider the cost to empty a spent fuel storage pool (to permit dismantling a retired plant) as a decommissioning cost. NRC, however, excludes the cost of emptying the storage pool from the scope of its formula for estimating decommissioning costs. The storage of spent fuel in facilities outside of the plant’s storage pool, and the cost of such storage, are addressed in parts of NRC’s regulations that are not directly related to decommissioning. In addition, the eventual resolution of a protracted dispute between NRC and EPA over appropriate radiation standards for decommissioned sites could affect the scope of decommissioning and, therefore, total decommissioning costs. 
Cost estimates developed since 1990, both through NRC’s formula and in licensees’ site-specific studies, have increased. Although NRC has not routinely monitored the amounts of decommissioning funds that its licensees have been accumulating, its 1988 regulations required licensees to annually calculate the minimum amount of funds that must be accumulated to pay future decommissioning costs. For each plant using NRC’s mathematical formula, the utility must make an initial calculation in 1986 dollars that is based on the size and type of plant. Then, the utility must escalate the initial calculated value to that of the current year on the basis of prescribed escalation factors. Also, to support proposed charges to electricity customers, plant owners periodically develop detailed estimates of the cost to decommission their specific plants and submit the estimates to their public utility commission regulators. In the absence of significant actual experience, site-specific estimates of decommissioning costs provide the best check on the reasonableness of NRC’s formula for calculating potential decommissioning costs. Since 1990, decommissioning cost estimates prepared on a site-specific basis and calculated through NRC’s formula have increased substantially. For example, site-specific cost estimates (excluding costs that licensees may incur during decommissioning, such as spent fuel storage costs, that NRC does not consider to be decommissioning costs) have increased, on average, at a rate of about 6.6 percent per year. One reason for this increase is the expansion of the scope of decommissioning. The estimates made through NRC’s formula are now, on the average, about one-third higher than the site-specific estimates for the same plants. 
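The two estimation tracks described above can be sketched in Python. All of the weights, escalation factors, and dollar figures below are hypothetical placeholders for illustration, not NRC's published values:

```python
def nrc_minimum(base_1986_millions: float,
                labor: float, energy: float, burial: float,
                w_labor: float = 0.65, w_energy: float = 0.13,
                w_burial: float = 0.22) -> float:
    """Escalate a 1986-dollar base amount to current dollars using a
    weighted blend of labor, energy, and waste-burial escalation
    factors. Weights and factors here are illustrative assumptions."""
    escalation = w_labor * labor + w_energy * energy + w_burial * burial
    return base_1986_millions * escalation

# Hypothetical plant: $105 million 1986 base; labor up 40 percent,
# energy up 25 percent, and burial charges quadrupled since 1986.
print(round(nrc_minimum(105, 1.40, 1.25, 4.00)))  # → 205

# Separately, site-specific estimates growing about 6.6 percent per
# year compound to roughly a 56 percent increase over 1990-1997:
print(round(1.066 ** 7, 2))  # → 1.56
```

Compounding at 6.6 percent annually, rather than adding 6.6 points per year, is what drives the rapid growth in the site-specific estimates.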
The main reason for this condition is that the waste disposal part of NRC’s formula was not designed to reflect licensees’ efforts to reduce the volume of waste from decommissioning in response to increasing prices for disposal that have traditionally been based on waste volume. In December 1998, NRC corrected this weakness, which brought calculations through its formula more in line with licensees’ site-specific cost estimates. Largely because DOE is not taking spent fuel from licensees’ nuclear power plants, licensees that intend to immediately dismantle their retired plants must store their spent fuel outside of their plants. For the purpose of estimating and accounting for decommissioning costs, some licensees treat storage costs related to the retirement of their plants as decommissioning costs. The inclusion by licensees of these storage costs in their decommissioning costs is a major reason why licensees’ cost estimates have increased in recent years. A second reason is that licensees may include the cost to dismantle nonradioactive structures, such as administrative buildings, in their estimates of decommissioning costs. In contrast, NRC excludes both spent fuel management costs and non-radioactive-related cleanup costs from its formula for calculating the funds that licensees must accumulate to decommission their nuclear power plants. NRC’s reasons for excluding these types of costs are that it (1) regulates independent spent fuel storage facilities (facilities that are separate from the spent fuel pool, which is an integral part of a nuclear power plant) under regulations that are separate from those applicable to the construction, operation, and decommissioning of nuclear power plants and (2) only regulates the possession, use, and disposal of radioactive materials. Nevertheless, spent fuel management costs have been and will continue to be a real cost for utilities that choose to immediately dismantle their retired plants. 
For example, in 1995 the licensee for the retired Trojan plant in Oregon estimated that it would cost about $102 million (in 1993 dollars) to construct, operate, and maintain a dry storage facility for spent fuel at that plant. Uncertainty over the standards for residual radiation that utilities will have to meet in cleaning up the sites of their retired nuclear power plants affects the accuracy of the current estimates of future decommissioning costs. EPA is responsible for setting acceptable radiation limits outside of the boundaries of nuclear facilities and for developing residual radiation standards to protect the health and safety of the public and to protect the environment. EPA has been responsible since 1970 for establishing radiation standards for all aspects of decommissioning, including acceptable levels of residual contamination. To date, however, EPA has not issued such standards. NRC, meanwhile, has established a residual radiation standard in its own decommissioning regulations: “A site will be considered acceptable for unrestricted use if the residual radioactivity that is distinguishable from background radiation does not exceed 25 [millirem] per year, including that from groundwater sources of drinking water, and that the residual radioactivity has been reduced to levels that are as low as reasonably achievable.” EPA does not agree with NRC’s standard. In fact, the disagreement between the two agencies has been characterized by both its length and its acrimony. EPA started to develop residual radiation standards in 1984 but has not yet finalized these standards. Nevertheless, EPA’s position is that NRC’s licensees should be required to decontaminate nuclear plant sites to a residual radioactivity level of 15 millirems per year and to limit the exposure to an individual from his/her consumption of groundwater to 4 millirems per year. 
Most recently, EPA’s administrator stated that the agency would apply the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 to sites that are being decommissioned if NRC and EPA do not reach an agreement on applicable standards. Also, in April 1998, one of NRC’s commissioners publicly commented that the impasse between EPA and NRC over appropriate radiation protection standards may have to be resolved by the Congress. In fact, to resolve this disagreement, NRC has sought legislation that would eliminate the overlap in the standard-setting authority of NRC and EPA. Currently, NRC’s licensees are using NRC’s regulations and related guidance on decommissioning the sites of retired nuclear facilities to plan and/or implement the decommissioning of their nuclear power plants and related nuclear fuel facilities. If, however, EPA’s residual radiation standards are ultimately used in lieu of NRC’s standards, licensees may have to perform additional cleanup when decommissioning their nuclear plant sites. If this occurred, it would increase decommissioning costs, but by how much is uncertain. According to both NRC and EPA officials, retroactively applying more stringent EPA standards to nuclear plant sites that have already been decommissioned according to NRC’s standards could be very costly. Late in 1998, NRC amended its decommissioning regulations in anticipation of the deregulation and restructuring of the electricity industry. The amended regulations do not allow a licensee to rely exclusively on its external sinking funds to ensure that funds are available for decommissioning if the licensee’s economic regulators no longer guarantee that moneys can be collected from its customers through electricity rates. In such a case, NRC now requires a licensee to provide additional financial assurance for the portion of the licensee’s estimated decommissioning cost that would not be guaranteed. 
There is, however, uncertainty over the availability and affordability of some of these additional options for providing financial assurance. NRC will also now require licensees to periodically report financial information on decommissioning; however, NRC did not specify how it would use this information. Effective November 23, 1998, NRC amended its decommissioning financial assurance regulations out of concern that the deregulation and restructuring of the electricity industry could reduce confidence that the owners of nuclear power plants will be able to accumulate sufficient funds to decommission their plants. The new regulations provide that, to the extent that the collection of estimated decommissioning costs from customers is no longer guaranteed, a licensee may not exclusively rely on external sinking funds to provide adequate financial assurance of decommissioning. For any portions of decommissioning costs for which the collection of funds is not guaranteed, licensees will have to provide one or more additional types of financial assurance. Electric utilities have almost exclusively relied on the collection of fees from their electricity customers, deposited into externally managed sinking funds, to provide decommissioning financial assurance. In anticipation of electricity deregulation initiatives, NRC, in September 1998, amended its regulations (effective in Nov. 1998) to address situations in which a licensee’s continued collection of decommissioning fees from its electricity customers may no longer be guaranteed by the economic regulation of electricity rates. To the extent that the collection of decommissioning funds is no longer guaranteed, a licensee may provide up-front financial assurance. The options available to licensees include the prepayment of the estimated decommissioning cost or purchase of surety bonds or insurance to cover decommissioning costs. 
The assurances may also be in the form of guarantees of payments by the licensees or, as appropriate, their parent company, provided that such guarantees are accompanied by the passing of specified financial tests. Both NRC and the nuclear industry have expressed concern about whether these up-front payment methods would be affordable for licensees. However, in commenting on our draft report, NRC stated that the terms for the recent sales of the Three Mile Island Unit 1, Pilgrim, and Seabrook (partial sale) nuclear power plants have included the prepayment of all estimated decommissioning costs. NRC added that it believes that the prepayment option will likely be the preferred means of assuring decommissioning funds in future sales transactions. When NRC published its proposed amended regulations for public comment in September 1997, it expressed concern that surety instruments and insurance may not be available to some nuclear power plant licensees; therefore, NRC specifically asked for comments on this issue. In response, some commenters said they were concerned about the feasibility of the up-front methods (prepayment, surety instruments, and insurance) for assuring decommissioning funding. For example, the Edison Electric Institute, which represents electric utilities, stated that it could be difficult, if not impossible, for licensees to provide such assurances. Also, seven licensees jointly stated that these funding methods would bar prospective new owners from purchasing interests in nuclear power plants. The seven utilities added that the (then) proposed regulations could impose a financial burden that would likely prevent the sale of a nuclear plant. Finally, the utilities stated that (1) it is uncertain if an insurance product or a surety bond could be procured to secure a nonelectric utility’s share of decommissioning costs, and (2) the cost of procuring such a bond could potentially exceed the cost of prepaying decommissioning expenses. 
The difficulty in obtaining a surety bond or insurance product is illustrated by the experience of one of NRC’s licensees. Great Bay Power Corporation, which owned 12 percent of the Seabrook nuclear power plant in New Hampshire, was formed out of bankruptcy proceedings involving four former part-owners of the Seabrook plant. NRC concluded that Great Bay, as a part owner of the plant, did not appear to meet the definition of an “electric utility” because its ability to collect funds for decommissioning from its electricity customers was not guaranteed by the traditional regulation of electricity prices. Therefore, according to NRC’s regulation, Great Bay could not rely exclusively on external sinking funds to provide decommissioning financial assurance. Although NRC gave Great Bay until July 1998 to obtain a surety bond or other financial guarantee to fulfill its decommissioning obligations, the company was unable to obtain such a guarantee. Out of concern for the possible bankruptcy of Great Bay if NRC were to mandate that the company prepay its decommissioning obligation, the state of New Hampshire, in June 1998, passed legislation that would make the co-owners of Seabrook proportionately responsible for making Great Bay’s decommissioning payments if the company defaults on this obligation. According to NRC, this approach qualifies as an acceptable “other method” of providing decommissioning financial assurance. In addition to the traditional financial assurance methods discussed above, NRC adopted other methods that licensees may use to provide decommissioning financial assurance. These include other guarantee methods, such as parent company guarantees and self-guarantees coupled with financial tests. 
For parent company guarantees, a licensee’s parent company must, among other things, have net working capital, tangible net worth, and assets located in the United States worth at least six times the amount of decommissioning funds being assured by the parent company for all of its nuclear power plants. Tangible net worth must exclude the net book value of the nuclear unit(s). For self-guarantees, tangible net worth and assets located in the United States must be 10 times the amount of the decommissioning funds being assured. Licensees may also rely on contractual obligations of their customers to purchase enough electricity to provide the licensee’s total share of uncollected funds for decommissioning, or on any other method, or combination of methods, that, as determined by NRC upon its evaluation of the specific circumstances, provides assurance of decommissioning funding equivalent to that provided by the other acceptable methods. These methods are similar to financial assurance methods that NRC permitted in its 1988 decommissioning regulations for other types of licensees, such as operators of nuclear fuel facilities. Prior to November 1998, NRC had reserved the right to inspect licensees’ decommissioning fund arrangements and status. Under the 1998 amendments, NRC also explicitly reserved the right to take additional action, either independently or in cooperation with economic regulators. These actions could include modifying a licensee’s schedule for accumulating additional funds. In addition, NRC’s 1998 decommissioning regulations required licensees, beginning by the end of March 1999, to report to NRC, every 2 years, certain financial information that would ensure that licensees are collecting their required decommissioning funds. 
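The parent-company and self-guarantee financial tests described above reduce to simple multiple-of-assured-amount checks. A minimal sketch follows; the function names and all dollar figures are hypothetical:

```python
def parent_guarantee_ok(assured_millions: float,
                        net_working_capital: float,
                        tangible_net_worth_ex_units: float,
                        us_assets: float) -> bool:
    """Parent company guarantee: net working capital, tangible net
    worth (excluding the nuclear units' book value), and U.S. assets
    must each be at least 6x the decommissioning amount assured."""
    threshold = 6 * assured_millions
    return min(net_working_capital, tangible_net_worth_ex_units,
               us_assets) >= threshold

def self_guarantee_ok(assured_millions: float,
                      tangible_net_worth: float,
                      us_assets: float) -> bool:
    """Self-guarantee: tangible net worth and U.S. assets must each
    be at least 10x the decommissioning amount assured."""
    return min(tangible_net_worth, us_assets) >= 10 * assured_millions

# Hypothetical licensee assuring $200 million of decommissioning cost:
print(parent_guarantee_ok(200, 1300, 1500, 2000))  # True: all >= 1200
print(self_guarantee_ok(200, 1500, 2000))  # False: 1500 < 2000
```

The higher 10x multiple for self-guarantees reflects that, without a parent standing behind the licensee, the test leans entirely on the licensee's own balance sheet.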
Information that must be provided in licensees’ financial reports includes (1) the amount of decommissioning funds estimated to be required according to NRC’s formula; (2) the funds accumulated as of the end of the year prior to the report date; (3) the annual amounts remaining to be collected; (4) the assumptions used to escalate decommissioning costs, project rates of earnings on investments of external sinking funds, and discount funding projections; and (5) modifications to external sinking fund agreements. Utility representatives have not opposed financial reporting. For example, the Edison Electric Institute told NRC that periodic reporting on the status of external sinking funds for decommissioning is appropriate. In addition, in commenting on the proposed regulations, a group of seven utilities stated that a comprehensive reporting requirement is long overdue and is particularly appropriate, given that economic regulators have not been actively monitoring the status of licensees’ external sinking funds on an ongoing basis. When NRC published its final regulations, it stated that, after licensees have submitted their initial reports by the end of March 1999, it would review the reports and consider whether to issue additional guidance on the format and content for subsequent licensee reports. Also, in June 1997, when NRC’s commissioners approved the proposed regulations for public comment, the commissioners stated that after NRC’s staff has reviewed licensees’ initial reports, the staff should advise the commissioners on the need for further rulemaking. When NRC issued the 1998 amendments to its decommissioning regulations, however, it did not explain when and how it intends to act on the financial information reported by individual licensees if that information does not clearly demonstrate that an individual licensee is accumulating decommissioning funds at a satisfactory rate. 
The lack of any criteria for acting on licensees’ decommissioning financial reports contrasts with the agency’s ongoing efforts to establish a more objective, understandable, and predictable approach to safety oversight of nuclear power plants. According to NRC, an independent regulatory oversight process is based on unbiased assessments of licensees’ performances; logical, coherent, and predictable actions by NRC; clear ties to NRC’s regulations and goals; and opportunities for public awareness of process results. The new safety oversight process should, according to NRC, allow for the integration of various information sources relevant to a licensee’s safety performance, make objective conclusions regarding the significance of the integrated performance information, take actions based on these conclusions in a predictable manner, and effectively communicate these results to the licensees and to the public. Therefore, NRC is in the process of establishing a new oversight approach in which it will, among other things, use indicators of nuclear power plants’ performance to establish thresholds for clearly identifying acceptable levels of performance. In conjunction with this, NRC plans to establish criteria for identifying and responding to unacceptable licensee performance. A similar approach in the area of providing adequate financial assurances for decommissioning would appear to offer the same benefits of objectivity and predictability that NRC seeks in its safety oversight of nuclear power plants. NRC’s new financial assurance regulations do not address the option of accelerating the rate at which licensees must accumulate decommissioning funds on the basis of the actual longevity of plants. NRC rejected this option because it believes that some plants will probably continue operating for their licensed operating period of up to 40 years and, with license extensions, beyond 40 years. 
Therefore, NRC said, requiring all licensees to accelerate their accumulation of decommissioning funds because of some premature plant retirements would be arbitrary and lead to widely varying effects on licensees. Thus, NRC intends to continue its practice of addressing early plant retirements on a case-by-case basis. NRC’s position, as expressed in the supplementary information accompanying the publication of its amended decommissioning regulations, is that accelerated funding is inequitable. NRC believes that accelerated funding places too much of the financial burden on current utility ratepayers and a lesser burden on ratepayers in the later years of a nuclear power plant’s operation. However, when licensees have retired plants before the plants’ operating license expired, the licensees’ electricity customers have had to pay decommissioning costs for plants from which they no longer receive electricity. The Trojan, Maine Yankee, and Zion cases, discussed earlier, demonstrate this fact. During the years that the Trojan and Zion plants operated, the respective licensees’ customers paid for less than half of the costs to decommission the plants. The customers of the Maine Yankee plant paid for 53 percent of the decommissioning cost. Now, although these retired plants no longer generate electricity, the current and future customers of the licensees will pay the remaining decommissioning costs without receiving comparable benefits from the plants. NRC elaborated on its reasons for opposing accelerated decommissioning funding in its comments on our draft report. NRC said that requiring accelerated funding for decommissioning would cause substantial cost increases to be incurred by either licensees’ stockholders or their ratepayers. Also, there would be a myriad of difficulties in determining the appropriate rate of acceleration; for example, at what rate should the collection of funds be accelerated? 
These issues, NRC added, were considered in its evaluation of accelerated funding as part of its process of amending its decommissioning regulations. NRC concluded that accelerated funding does not provide sufficiently increased decommissioning funding assurance commensurate with its potential cost impacts. State legislatures, state public utility commissions, and FERC appear to be addressing assurances for decommissioning funding in their electricity deregulation initiatives. Utility officials in Illinois, New Hampshire, and Oregon, for example, pointed out that laws in those states provide for the collection of necessary and prudent funds for decommissioning nuclear power plants regardless of whether the plants operate until their current licenses expire or are retired prematurely. Thus, licensees have continued collecting from electricity customers the fees earmarked for decommissioning three prematurely retired plants in Illinois and one in Oregon. Similar examples are occurring in California and Massachusetts. With respect to the bankruptcy of licensees, New York’s Public Service Commission, in commenting on NRC’s proposed amendments to its decommissioning regulations, urged NRC and states to consider proposing legislation that would make decommissioning liabilities a first priority in the event of the bankruptcy of a private nuclear facility owner. Current bankruptcy law does not give nuclear decommissioning costs priority, but NRC has said it does enter bankruptcy proceedings to protect the integrity of decommissioning funding. Moreover, at NRC’s request, the Administration included a provision in its 1999 electricity deregulation bill that would give priority to funding the decommissioning of nuclear power plants in bankruptcy proceedings involving licensees. 
Several factors have come together at this time that make it imperative for NRC to ensure that its licensees accumulate sufficient funds to decommission their plants regardless of when they are permanently shut down. Specifically, some licensees have not set aside sufficient amounts of funds for decommissioning, and there is uncertainty over the availability and affordability of the up-front payment methods of providing financial assurance. With electricity deregulation emerging, the possibility exists that a licensee may, in the future, prematurely retire a plant and be faced with paying the remaining decommissioning funds from its own resources. The ability of the licensee to do so might then depend upon its overall financial condition. Thus, self-guarantees that decommissioning funds will be available are only as good as the financial condition of the licensee. (We recognize that to date, early plant retirements have not resulted in a shortfall in decommissioning funds because regulators have allowed licensees to continue collecting funds after plants have been retired.) To NRC’s credit, it recognized its need to increase its oversight of decommissioning financial assurance when it modified its decommissioning regulations by requiring licensees to provide financial reports every 2 years. NRC did not, however, explain what it intends to do with these reports. For example, NRC did not establish the thresholds for clearly identifying acceptable levels of financial assurances or establish criteria for identifying and responding to unacceptable levels of assurances. In the absence of such explanations, there is no logical, coherent, and predictable oversight of licensees’ financial assurance for decommissioning their nuclear power plants. 
After NRC reviews licensees’ initial reports on decommissioning financial assurances, we recommend that the Chairman, NRC, provide licensees and the interested public with information on the (1) objectives, scope, and methodologies of NRC’s reviews of the reports; (2) thresholds for identifying, on the basis of these reviews, acceptable, questionable, and unacceptable indications of financial assurances; and (3) criteria for the actions to be taken on the results of these reviews. We provided NRC with a draft of our report for review and comment. NRC said that our recommendation merits serious consideration with respect to its future uses of licensees’ biennial reports on decommissioning funds. NRC added, however, that it is premature to expend significant staff resources on establishing thresholds for identifying problems with licensees’ financial assurances for decommissioning until NRC knows, on the basis of its reviews of the initial status reports from licensees, that such problems exist. Thus, NRC differs with us not on the substance of our recommendation but on the timing of its implementation. NRC’s position is that it does not need to establish performance thresholds unless actual performance problems exist. In our opinion, a proactive, rather than reactive, approach would more appropriately provide licensees and the public with a more complete understanding of NRC’s expectations in the area of financial assurance for decommissioning. NRC also stated that our report does not adequately represent the complex changes that are occurring in the electric utility industry and the interactions among NRC, state public utility commissions and FERC, and the nuclear power industry. According to NRC, a host of complex, interrelated variables must be analyzed before any threshold for determining funding shortfalls can be established. 
These variables include, NRC added, (1) the actual rates at which licensees are accumulating decommissioning funds, (2) the stated intents of rate regulators (such as state public utility commissions) on allowing the ultimate collection of decommissioning funds, (3) the provisions for decommissioning funding in state deregulation initiatives, and (4) for licensees no longer subject to the traditional regulation of their electricity rates, the extent to which the future collection of decommissioning funds may be based on non-bypassable wire charges. Where appropriate, we have either added NRC’s comments to, or revised the text of, our report. The full text of NRC’s written comments and our response appear in appendix II.
Pursuant to a congressional request, GAO provided information on the potential cost to decommission nuclear power plants and the implications of competition within the electricity industry, focusing on whether: (1) there is adequate assurance that the Nuclear Regulatory Commission's (NRC) licensees are accumulating sufficient funds for decommissioning; and (2) NRC is adequately addressing the effects of electricity deregulation on the funds that will eventually be needed for decommissioning. GAO noted that: (1) although the estimated cost to decommission a nuclear power plant is on the order of $300 million to $400 million in today's dollars, NRC does not know if licensees are accumulating sufficient funds for this future expense; (2) GAO's analysis showed that, under likely assumptions, 36 of 76 licensees had not accumulated sufficient decommissioning funds through 1997; (3) however, all but 15 of these 36 licensees appeared to be making up their funding shortfalls with recent increases in the rates at which they are accumulating decommissioning funds; (4) using more pessimistic and optimistic assumptions would increase or decrease the number of underfunded licensees, respectively; (5) although utility commissions have permitted licensees to continue charging their customers for the costs of decommissioning prematurely-retired plants, this financial safeguard could be affected by states' efforts to deregulate the electricity industry; (6) to address the movement toward deregulating the electricity industry, in November 1998 NRC began requiring its licensees to provide additional financial assurances if the Federal Energy Regulatory Commission or state utility commissions will no longer guarantee, through the regulation of electricity rates, the collection of sufficient funds for decommissioning; (7) however, one additional form of financial assurance--the early payment of decommissioning costs--may not be practicable or affordable; (8) also, NRC considered requiring 
licensees to accelerate decommissioning funding as a hedge against the premature retirement of plants but rejected the concept because of possible adverse effects on licensees' finances; (9) on the other hand, NRC's alternative methods to the earlier collection of decommissioning funds essentially rely on the continued financial health of the licensee or its parent company; (10) thus, the effectiveness of NRC's 1998 regulatory changes will likely depend on how vigorously NRC monitors the financial health of its licensees; (11) in this regard, licensees must now provide financial reports every 2 years to NRC so it can monitor financial assurances for decommissioning; and (12) however, NRC did not establish thresholds for clearly identifying acceptable levels of financial assurances or establish criteria for identifying and responding to unacceptable levels of assurances.
In 1975 Congress created the FEC to administer and enforce the Federal Election Campaign Act. To carry out this role, FEC discloses campaign finance information, enforces provisions of the law such as limits and prohibitions on contributions, and oversees the public funding of presidential elections. Within FEC, the Office of Election Administration (OEA) serves as a national clearinghouse for information regarding the administration of federal elections. As such, OEA assists state and local election officials by developing voluntary voting equipment standards, responding to inquiries, publishing research on election issues, and conducting workshops on all matters related to election administration. In addition, it answers questions from the public and briefs foreign delegations on the U.S. election process, including voter registration and voting statistics.

FEC consists of six voting members, appointed by the President and confirmed by the Senate. To encourage nonpartisan decisions, no more than three commissioners can be members of the same political party, and at least four votes are required for most official Commission actions. FEC's budget for fiscal year 2001 is $40.4 million, and of that amount, $804,000 is allocated to support OEA functions. FEC has 357 full-time staff, of which 5 are allocated to OEA functions.

The voting methods used in the United States can be placed into five categories: paper ballots, mechanical lever machines, punch cards, optical scan, and direct recording electronic. The last three methods use computer-based equipment. Three of the five—paper ballots, punch cards, and optical scan—use some kind of paper ballot to record voters' choices.

Paper Ballot. Voters use a paper ballot listing the names of the candidates and issues and record their choice by placing a mark in a box next to the candidate's name or issue. After making their choices, the ballots are dropped into a sealed ballot box to be manually tabulated.

Mechanical Lever. Voters pull a lever next to the candidate's name or issue and the machine records and tallies the votes using a counting mechanism. Write-in votes must be recorded on a separate document. Election officials tally the votes by reading the counting mechanism totals on each lever voting machine.

Punch Card. Voters can use one of two basic types of punch cards—Votomatic or Datavote. In both instances, voters use a computer-readable card to cast their vote. The Votomatic uses a computer-readable card with numbered boxes that correspond to a particular ballot choice. The choices corresponding to those numbered boxes are indicated to the voter in a booklet attached to a vote recording machine, with the appropriate places to punch indicated for each candidate and ballot choice. The voter uses a simple stylus to punch out the box corresponding to each candidate and ballot choice. In the Datavote, the names of the candidates and issues are printed on the card itself—there is no ballot booklet. The voter uses a stapler-like punching device to punch out the box corresponding to each candidate and ballot choice. To tally the votes in both instances, the ballots are fed into a computerized tabulation machine that records the vote by reading the holes in the ballots.

Optical Scan. Voters use a computer-readable paper ballot listing the names of the candidates and issues. The voters record their choices by using an appropriate writing instrument to fill in a box or oval, or complete an arrow next to the candidate's name or issue. The ballot is then fed into a computerized tabulation machine, which senses or reads the marks on the ballot, and records the vote.

Direct Recording Electronic. Voters use a ballot that is printed and posted on the voting machine or displayed on a computer screen listing the names of the candidates and issues. Voters record their choices by pushing a button or touching the screen next to the candidate's name or issue.
When a voter is finished, the vote is submitted by pressing a vote button, which stores the vote in a computer memory chip. Election officials tally the votes by reading the votes totaled on each machine's computer chip.

While neither FEC nor any other federal agency has explicit statutory responsibility to develop voting equipment standards, the Congress has appropriated funds for FEC to develop and update the standards. FEC first issued voting equipment standards in 1990. These standards identify minimum functional and performance requirements for punch card, optical scan, and direct recording electronic voting equipment, and specify test procedures to ensure that the equipment meet these requirements. The functional and performance requirements address what voting equipment should do and delineate minimum performance thresholds, documentation provisions, and security and quality assurance requirements. The test procedures describe three stages of testing: qualification, certification, and acceptance. According to FEC's standards document: Qualification testing is the process by which voting equipment are shown to comply with the requirements of their own design specifications and with the requirements of FEC standards. Certification testing, generally conducted by individual states, determines how well voting equipment conform to individual state laws and requirements. Acceptance testing is generally performed by the local jurisdictions procuring voting equipment and demonstrates that the equipment, as delivered and installed, satisfies all the jurisdiction's functional and performance requirements. The standards are voluntary; states are free to adopt them in whole or in part, or to reject them entirely. To date, 38 states require that voting equipment used in the state meet FEC standards either in total or in part. Figure 1 shows these states.
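The recording and tallying logic common to the computer-based methods described above, reading marks or punches from each ballot and keeping running totals per candidate, can be illustrated with a minimal sketch. The ballot layout, candidate names, and overvote/undervote handling shown here are invented for illustration and are not drawn from FEC standards:

```python
from collections import Counter

def tally_ballots(ballots, valid_choices):
    """Tally a batch of computer-readable single-race ballots.

    Each ballot is the set of positions the voter marked (or punched).
    A ballot marking more than one choice (an overvote) or no choice
    (an undervote) is set aside rather than counted, mirroring how
    tabulation equipment must handle unexpected conditions.
    """
    totals = Counter()
    overvotes = undervotes = 0
    for marks in ballots:
        choices = marks & valid_choices   # ignore stray marks
        if len(choices) == 1:
            totals[choices.pop()] += 1
        elif len(choices) > 1:
            overvotes += 1
        else:
            undervotes += 1
    return totals, overvotes, undervotes

# Hypothetical ballots: each set holds the positions the voter marked.
valid = {"Candidate A", "Candidate B"}
ballots = [
    {"Candidate A"},
    {"Candidate B"},
    {"Candidate A", "Candidate B"},  # overvote: set aside
    set(),                           # undervote: set aside
    {"Candidate A"},
]
totals, over, under = tally_ballots(ballots, valid)
print(totals["Candidate A"], totals["Candidate B"], over, under)  # prints: 2 1 1 1
```

The same separation, per-ballot interpretation followed by aggregation, underlies both precinct-count equipment and the central tabulation machines described above.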
In September 1997, FEC initiated efforts to evaluate its voting equipment standards and identify areas to be updated, and in July 1999, FEC initiated efforts to revise the standards. As part of this revision, FEC has been working closely with state and local election officials and vendors to incorporate industry comments on the draft standards. FEC plans to issue the revised standards in multiple volumes: volume I is to include the functional and performance requirements for voting equipment; volume II is to provide the detailed test procedures, including information to be submitted by the vendor, tests to be conducted to ensure compliance with the standards, and the criteria to be applied to pass the individual tests. Figure 2 depicts FEC's time frames for revising the standards.

Organizations such as the Department of Defense and the Institute of Electrical and Electronics Engineers have developed guidelines for various types of systems requirements and for the processes that are important to managing the development of any system throughout its life cycle. These types of systems requirements and processes include, for example:

Security and Privacy Protection. Requirements defining the security/privacy environment, types of security needed (e.g., data confidentiality and fraud prevention), risks the system must withstand, safeguards required to withstand those risks, security/privacy policies that must be met, accountability (i.e., audit trails), and criteria for security certification.

Human Factors. Requirements defining the usability of the system, including considerations for human capabilities and limitations, and the use and accessibility of the system by persons with disabilities.

Documentation. Processes for recording information produced during the system development life cycle, which includes identifying documents to be produced; identifying the format, content, and presentation items for each document; and developing a process for reviewing and approving each document.

Configuration Management. Processes to establish and maintain the integrity of work products through the system development life cycle, including developing a configuration management plan, identifying work products to be maintained and controlled, establishing a repository to maintain and control them, evaluating and approving changes to the work products, accounting for changes to the products, and managing the release and delivery of products.

Quality Assurance. Processes to provide independent verification of the requirements and processes used to develop and produce the system, which include developing a quality assurance plan, determining what system development product and process standards are supposed to be followed, and conducting reviews to ensure that the product and process standards are followed.

While FEC's 1990 standards satisfy most of these areas, they do not satisfy all. For example, in the area of security, the standards do not address the security/privacy environment in which the voting equipment must operate, the types of security to be provided, the risks the equipment must withstand, the security/privacy policies that must be met, or the criteria for security certification. Further, the standards do not specify requirements for voting equipment usability, taking into account human capabilities and limitations, or the use and accessibility of the voting equipment by persons with disabilities. Table 1 summarizes the types of requirements and processes satisfied in FEC's 1990 voting equipment standards.
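One of the accountability provisions noted above is an audit trail. A minimal sketch of one possible mechanism is an append-only log whose entries are chained by hashes, so that altering or deleting any earlier entry is detectable on verification. The standards do not prescribe this or any particular mechanism, and all class, event, and field names here are hypothetical:

```python
import hashlib
import json

class AuditLog:
    """Append-only event log with a hash chain.

    Each record stores the hash of its predecessor, so tampering with
    any earlier entry breaks verification of the chain.
    """
    def __init__(self):
        self.records = []

    def append(self, event, **details):
        prev = self.records[-1]["hash"] if self.records else "0" * 64
        body = {"event": event, "details": details, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.records.append({**body, "hash": digest})

    def verify(self):
        prev = "0" * 64
        for rec in self.records:
            if rec["prev"] != prev:
                return False
            body = {"event": rec["event"], "details": rec["details"], "prev": prev}
            if rec["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = rec["hash"]
        return True

log = AuditLog()
log.append("polls_opened", machine="DRE-07")
log.append("ballot_cast", sequence=1)
log.append("polls_closed", machine="DRE-07")
print(log.verify())  # True
log.records[1]["details"]["sequence"] = 99  # tamper with an entry
print(log.verify())  # False
```

The design choice illustrated, chaining rather than merely timestamping, is what lets an auditor detect after-the-fact edits without trusting the machine that produced the log.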
As part of FEC’s current effort to revise the 1990 standards, it has made improvements in all five of the areas in which we identified missing types of requirements and processes. For example, in the area of human factors, the draft standards now include requirements for the use and accessibility of voting equipment by persons with disabilities. Further, for documentation, the draft standards include requirements for identifying documents to be produced; defining the format, content, and presentation items for each document; and developing a process for reviewing and approving each document. In addition, in the area of security, the standards now address security types, risks, safeguards, policies, accountability, and certification. While FEC has made improvements, the draft standards do not satisfy two areas—human factors and quality assurance. For example, in the area of human factors, the draft standards do not address requirements for equipment usability, including considerations for human capabilities and limitations. Finally, the draft standards do not yet specify the development of a quality assurance plan or the performance of quality assurance reviews to ensure that the equipment development process requirements are being met. Table 2 summarizes the types of requirements and processes not satisfied in FEC’s 1990 voting equipment standards but satisfied in the draft standards. Appendix III provides a detailed description of the requirements and process types and our complete analysis of FEC’s 1990 voting standards and draft standards. In the area of quality assurance, FEC stated in its written comments on a draft of this report that its decision to not include quality assurance process reviews in the revised standards was the result of deliberative and collaborative interaction among NASED’s Voting System Committee and FEC staff. 
In addition, FEC did not include equipment usability because it was determined not to be an area of immediate concern by the election community during FEC’s evaluation of the standards to identify areas to be updated. FEC agrees that equipment usability should be addressed in the standards and has stated that it will fully do so once resources are available. Beyond this stated commitment, FEC has not established any specific plans or allocated specific resources for doing so. Until FEC addresses these missing requirements, the voting equipment standards’ value and utility will be diminished. Given the pace of today’s technological advances, standards must be proactively maintained to ensure that they remain current, relevant, and complete. Standards-setting bodies, such as the American National Standards Institute and the National Institute of Standards and Technology, require that standards be revised or reaffirmed at least once every 5 years. This is particularly important with voting equipment standards, which must respond to technological developments if they are to be current, complete, and relevant, and are to be useful to state and local election officials in assuring the public that their voting equipment are reliable. FEC has not proactively maintained its voting equipment standards. As previously stated, FEC is only now updating the 1990 standards. Because FEC has not proactively maintained the standards, they have become out of date. Vendors are using new technology and expanding voting equipment functions that are not sufficiently covered by the 1990 standards. For example, the 1990 standards do not address election management systems, which are used to prepare ballots and programs for use in casting and tallying votes, and to consolidate, report, and display election results. 
According to a NASED committee representative and the Independent Test Authority (ITA) responsible for testing election management systems, the lack of adequate standards to address election management software has forced them to interpret the current voting equipment standards to accommodate the development of this new software. Further, according to these representatives, these interpretations have not been documented and formally shared with FEC. As mentioned earlier, FEC is updating its standards, and the draft standards now address election management systems. FEC officials acknowledge the need to actively maintain the standards, but state that they have not done so because they have not been assigned explicit responsibility. By not ensuring that voting equipment standards are current, complete, and relevant, states may choose not to follow them, resulting in states adopting disparate standards. In turn, this could drive up the cost of voting equipment being designed to multiple standards and produce unevenness among states in the capabilities of voting equipment. No federal agency, including FEC, has been assigned explicit responsibility for testing voting equipment against FEC standards, and no federal agency has assumed this role. Rather, NASED has assumed responsibility for implementing the standards. To do so, NASED established a voting systems committee, which comprises selected state and local election officials and technical advisers. This committee accredits ITAs to test and qualify voting equipment against FEC standards. Figure 3 illustrates the voting equipment standards program, from the development of voting equipment standards through the testing and qualification of voting equipment. 
To accredit the ITAs, the NASED committee has developed requirements and procedures, which include provisions for NASED to periodically reaccredit the ITAs and conduct on-site inspection visits, both of which are important to ensuring that the accredited laboratories continue to comply with all requirements. To date, the committee has not reaccredited or inspected ITAs because, according to NASED committee representatives, they rely on the committee's technical advisers' ongoing conversations with ITA officials and the officials' participation in committee meetings to ensure that the ITAs are fulfilling their responsibilities effectively. Currently, three ITAs are approved to test voting equipment against the FEC standards. In 1994, the NASED committee accredited Wyle Labs to test the hardware and machine-resident software components of proprietary vote cast and tally equipment. In February 2001, Metamor (previously PSINet) applied for accreditation to conduct qualification testing of vote tabulation and election management software. Also in 2001, SysTest applied for accreditation to conduct qualification testing of vote tabulation and election management software. While both Metamor and SysTest have been granted an interim approval to test voting equipment, NASED has not yet accredited either. To test voting equipment, voting equipment vendors submit requests for testing to the ITAs, who then prepare a test procedure. The test procedure details the software and hardware testing requirements that the voting equipment will be tested against and is based on both the FEC voting equipment standards and the vendors' design specifications. According to ITA officials responsible for testing voting equipment, the testing process is generally an iterative one. Vendors are provided an opportunity to correct deficiencies identified during testing and resubmit the modified voting equipment for retesting.
At the end of testing, the ITA completes a test report and notifies the Election Center that the voting equipment has successfully satisfied testing requirements. The Election Center then assigns a NASED number to the specific equipment model and firmware release that was tested and maintains the list of qualified voting equipment. Each time a vendor issues a new model or software release, the vendor is to submit a request for testing to the ITAs in order to qualify the new model or release. As of July 3, 2001, NASED had qualified 21 models of voting equipment and 7 election management systems, representing 10 vendors. See table 3 for a breakout of the types of equipment qualified. The ITAs stated that the testing process generally takes about 2 to 3 months. This is contingent, however, upon the vendors having the proper documentation in order. If documentation is missing or incomplete, the process may take longer. According to the ITAs, the cost of qualification testing ranges from $40,000 for vote cast and tally equipment to $75,000 for vote tabulation and election management software. While not explicitly provided for in legislation, FEC and NASED have assumed and are performing important roles by developing voting equipment standards and testing and qualifying equipment against these standards, respectively. Given the current pace of technological change for voting equipment, the degree to which these standards are actively maintained and the extent to which they are appropriately applied can have a direct bearing on the capabilities of voting equipment. This, in turn, can affect the successful conduct of national, state, and local elections. Therefore, it is important that responsibility for these roles be clearly assigned. By doing so, the appropriate federal role in these important areas can be deliberated, decided, and explicitly defined, thereby avoiding another situation where the standards are allowed to become out of date.
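The record-keeping described above, where a NASED number is tied to a specific equipment model and firmware release and each new release must be qualified separately, can be sketched as a small registry keyed on the (model, firmware) pair. The class, numbering scheme, and vendor/model names below are all invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class QualificationRegistry:
    """Track qualified voting equipment by (model, firmware release).

    A new firmware release is a distinct key, so it does not appear as
    qualified until it has been tested and assigned its own number.
    Numbering format is hypothetical.
    """
    _entries: dict = field(default_factory=dict)
    _next_number: int = 1

    def qualify(self, vendor, model, firmware):
        key = (model, firmware)
        if key not in self._entries:
            self._entries[key] = {
                "vendor": vendor,
                "number": f"N-{self._next_number:04d}",
            }
            self._next_number += 1
        return self._entries[key]["number"]

    def is_qualified(self, model, firmware):
        return (model, firmware) in self._entries

reg = QualificationRegistry()
reg.qualify("Acme Voting", "TallyStar 100", "1.0")
print(reg.is_qualified("TallyStar 100", "1.0"))  # True
print(reg.is_qualified("TallyStar 100", "1.1"))  # False: new release needs retesting
```

Keying on the model and firmware release together, rather than on the model alone, is what enforces the rule that each new software release must go back through qualification testing.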
It is also important that these roles be executed effectively. In the case of FEC's ongoing update of the standards, this means that requirements for equipment usability and quality assurance should be developed. As part of the ongoing debate and deliberation over election reform in general, and the federal role in voting equipment standards in particular, the Congress may wish to consider assigning explicit federal authority, responsibility, and accountability for voting equipment standards, including proactive and continuous update and maintenance of the standards. Given that no federal or state entity has been assigned explicit authority or responsibility for testing voting equipment against the FEC standards, the Congress may wish to consider what, if any, federal role is appropriate regarding implementation of the standards, including the accreditation of ITAs and the qualification of voting equipment. To improve the quality of FEC's voting equipment standards, we recommend that the FEC Commissioners direct the OEA Director to accelerate the development of requirements for equipment usability, including considerations for human capabilities and limitations. To improve the quality of FEC's current efforts to update the voting equipment standards, we also recommend that the FEC Commissioners direct the OEA Director to develop requirements for quality assurance, including developing a quality assurance plan and conducting quality assurance process reviews. In its written comments on a draft of this report (reprinted in appendix II), the FEC Chairman and Vice Chairman stated that FEC generally agrees with most of our observations and recommendations, including that human factors are not being addressed in the revised voting equipment standards and that FEC needs to accelerate their development in future iterations of the standards. Additionally, FEC agreed with our matter for congressional consideration.
Nevertheless, FEC commented that it was concerned with the report's portrayal of the Commission as being insufficiently proactive in revising voting equipment standards, stating that its efforts have been as timely as possible given certain practical constraints, which it described in a chronology of events and circumstances. FEC also commented that it disagrees with the draft report's characterization of the Commission's ongoing efforts to update security and quality assurance standards as incomplete, describing how both areas are being addressed. Subsequent to providing us with its written comments on a draft of this report, FEC also provided us with additional draft standards that address security requirements. Accordingly, we have modified this report, including our recommendations, to reflect this new information. We do not agree with either of FEC's other two points of concern. Regarding FEC's concern with the report's portrayal of the Commission as being insufficiently proactive in revising voting equipment standards, FEC states in its comments that 7 years elapsed from the time that the standards were first issued in 1990 to the time that FEC first began evaluating them to identify areas that needed to be updated. Further, it states that another 2 years elapsed between the time FEC began evaluating the standards and the time it began updating them. We recognize that FEC is performing, through its own initiative, an important role in developing and updating the standards, and deserves credit for doing so. However, in our view, allowing 9 years to pass before beginning to update the standards, regardless of the practical circumstances that FEC cites, is too long, does not constitute a proactive maintenance process, and is the primary reason that the current standards are out of date.
Regarding FEC’s disagreement with the report’s characterization of the Commission’s ongoing efforts to update quality assurance standards as incomplete, we do not question, and in fact state in this report, that the draft standards address requirements for quality assurance. However, our main concern is that important and relevant aspects of quality assurance requirements, such as quality assurance plans and process reviews, are not addressed. Concerning FEC’s decision to omit quality assurance standards areas from the revised draft standards, we modified this report to reflect FEC’s position that its decision resulted from deliberative and collaborative interaction among NASED and FEC staff and that these areas were not, as we were told during the course of our review by the OEA Director, overlooked. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Appropriations Subcommittee on Treasury and General Government and the House Appropriations Subcommittee on Treasury, Postal Service, and General Government; the Director of the Office of Management and Budget; and the Chairman and Vice Chairman of FEC. Copies will also be available at our Web site at www.gao.gov. If you have any questions, please contact me at (202) 512-6240 or by email at [email protected]. Key contributors to this assignment were Deborah A. Davis, Richard Hung, and Eric Winter. The objectives of our review were to (1) identify the Federal Election Commission’s (FEC) role regarding voting equipment and assess how well FEC is fulfilling its role and (2) identify the National Association of State Election Directors’ (NASED) process for testing and qualifying voting equipment against FEC’s voluntary voting equipment standards. To identify FEC’s role regarding voting equipment, we researched FEC’s statutory and legislative role in developing and maintaining voting equipment standards.
To further identify FEC’s role, we reviewed relevant documents, including the Plan to Update the Voting Systems Standards, the standards update project contract, project work plans, and legislative proposals, and interviewed key FEC officials, including the Director, OEA. To assess FEC’s voting equipment standards, we examined relevant guidelines and procedures for the development of system requirements. Specifically, we examined the Department of Defense’s Data Item Description for System/Subsystem Specifications, the Institute of Electrical and Electronics Engineers’ Standard 12207 on Software Life Cycle Processes, and the Software Engineering Institute’s Software Development Capability Maturity Model™, and identified 13 types of systems requirements and 3 supporting life-cycle processes that are important in the development of any system. We then compared these types of requirements and processes against FEC’s 1990 voting equipment standards to determine if all key elements were addressed. In those areas where variances were noted, we compared the types of requirements and processes against relevant sections of volumes I and II of the draft standards to determine whether FEC had addressed any of these missing requirements. We only reviewed those portions of the draft standards for which we identified missing types of requirements and processes in the 1990 standards. In addition, our review of the standards did not include validating that the requirements are correct and complete beyond determining whether the standards addressed all of the requirements and process key elements. To identify NASED’s process for testing and qualifying voting equipment against FEC’s voting equipment standards, we interviewed officials from NASED, the Election Center, and the two independent test authorities (ITAs).
We also reviewed documentation describing NASED’s process, NASED’s Accreditation of Independent Testing Authorities For Voting System Qualification Testing Handbook, ITAs’ generic test plans, and NASED’s policies, procedures, and by-laws. We also provided a copy of relevant parts of this report to the Chairman of the NASED Voting System Committee for comment. The Chairman stated that the report accurately reflected the NASED process. We also contacted officials in the State Election Director's offices in each of the 50 states and the District of Columbia to determine which states required that their voting equipment be in compliance with FEC's standards. We did not verify the officials' responses. We performed our work at FEC headquarters in Washington, D.C., NASED, the Election Center, and the independent test authorities from March 2001 through September 2001, in accordance with generally accepted government auditing standards. The following are GAO’s comments on the Federal Election Commission letter dated July 18, 2001. 1. See comments 2, 5, and 6. 2. We do not dispute either the chronology of events provided in FEC’s comments or its statement that it does not have explicit statutory authority to develop and revise the standards. We provide the relevant elements of this chronology in this report. Additionally, we state in this report that FEC has assumed and is performing an important role by developing and revising the standards, despite its lack of explicit statutory responsibility. We do not agree with FEC’s comment that it has been proactive in updating the voting equipment standards. As FEC acknowledges in its comments, 7 years elapsed from the time the standards were first issued in 1990 to the time that FEC initiated efforts to assess the standards to identify areas that needed to be updated. 
During that time, considerable experience with the standards was accumulating, as vendors were developing voting equipment to meet the FEC standards and ITAs were testing against them. Since then, additional experience has been gained with the standards as vendors have continued to develop voting equipment to meet the standards, and ITAs have continued to test vendors’ equipment against the standards. For example, we state in this report that ITAs have had to interpret the 1990 standards in the testing process to accommodate vendors’ use of new technologies and expanded equipment functions that are not addressed in the 1990 standards. However, FEC does not formally receive these interpretations, any one of which could be the basis for prompting an update to the standards. In our view, waiting 9 years to begin updating the standards is too long, does not constitute proactive maintenance, and is the primary reason that the current standards are out of date. 3. FEC is correct in stating that we did not assess all of the revised draft standards areas. However, we disagree that this assessment approach ignores the collaborative and dynamic process of NASED’s Voting Systems Committee and FEC’s staff in overseeing the development of the standards for two reasons. First, this report recognizes that FEC worked closely with state and local election officials in revising the standards. Second, this joint FEC and NASED process has no relevance to our findings that certain standards areas do not address the full range of items associated with well-defined system requirements in these areas. As we state in the objectives, scope, and methodology section of this report, our approach was to assess all of the 1990 standards because they are the standards against which voting equipment are currently being developed and independently tested. 
In assessing drafts of the updated standards, we assumed that those areas in the 1990 standards that we found to be satisfactory would continue to be satisfactory in the updated standards. As long as our findings are limited to the standards areas that we assessed, the issue of whether we assessed all or some of the draft standards is not relevant. 4. We acknowledge that FEC’s position, as stated in its comments, concerning standards areas omitted from the revised draft standards is that these were based on decisions resulting from deliberative and collaborative interaction among NASED and FEC staff, and were not, as we were told during the course of our review by the OEA Director, areas that were overlooked. Accordingly, we have modified our report to reflect this position. 5. Subsequent to providing us with written comments on a draft of this report, FEC provided us with a copy of volume II of the standards, which includes the tests to be conducted to ensure compliance with the voting equipment standards. Based on our review of the relevant security sections, the standards satisfy the requirement for security certification. We have modified our report, including the recommendations, to reflect this new information. 6. We do not disagree that the draft standards discuss quality assurance and have been strengthened from the 1990 standards. We acknowledge these improvements to the standards in our report. However, as we state, quality assurance includes a number of activities. While FEC’s draft standards include some of these elements, they do not include all of them. Specifically, the draft standards do not address requirements for developing a quality assurance plan and conducting process reviews to ensure that the product and process standards are followed. We identified 13 types of system requirements and 3 supporting life-cycle processes that are often associated with complete system requirements.
FEC’s 1990 voting equipment standards satisfied 11 of the 13 system requirements areas and none of the life-cycle processes. We reviewed FEC’s draft standards for those areas for which we identified variances in the 1990 standards and found that the draft standards had made improvements in all five areas. However, the draft standards still do not satisfy human factors and quality assurance. A detailed description of the system requirements areas and our complete analysis follow.

Definition/analysis: Required system capabilities based on the purpose of the system; also includes parameters for response times, accuracy, capacities, unexpected/unallowed conditions, error-handling, and continuity of operations. 1990 analysis: Identified areas include ballot definition, candidate/measure selection, vote casting, ballot interpretation, voting reports, accuracy and integrity, processing speed, response times, and error and status messages.

Definition/analysis: Quantitative measures of quality including reliability (perform correctly and consistently), maintainability (easily serviced/repaired/corrected), and availability (accessibility to be operated when needed). 1990 analysis: All identified areas included.

Definition/analysis: Requirements for maintaining a secure system and protecting data privacy, including (1) security/privacy environment in which system must operate; (2) types of security to be provided (e.g., data confidentiality and fraud prevention); (3) risks the system must withstand; (4) safeguards required; (5) security/privacy policies that must be met; (6) accountability the system must provide (i.e., audit trails); and (7) criteria for security certification. 1990 analysis: Access control identified as a security safeguard, and requirements defined for audit records produced by the system to provide accountability. The other areas, however, are not addressed.
Draft analysis: In addition to access controls and audit records, the security/privacy environment, the types of security to be provided, the risks the system must withstand, safeguards necessary, security policies, and criteria for security certification are identified.

Definition/analysis: Requirements defining the usability of the system that take into account human capabilities and limitations, along with use and accessibility by persons with disabilities. 1990 analysis: System usability and accessibility by persons with disabilities are not identified. Draft analysis: Requirements for the use and accessibility by persons with disabilities are identified. System usability requirements are not.

Definition/analysis: Characteristics of the interface between the voting system and other systems, including data types, data formats, and timing. 1990 analysis: Removable storage media, communications devices, and printers identified as external interfaces. Draft standards satisfied? Yes.

Definition/analysis: Requirements for system configuration to meet local operational requirements. 1990 analysis: Requirements defined for voting systems programming in accordance with ballot requirements of the election and the jurisdiction in which the equipment will be used.

Definition/analysis: The natural environment that the system must withstand during transportation, storage, and operation, including (1) temperature, (2) humidity, (3) rain, and (4) motion/shock. 1990 analysis: Requirements identified for temperature, humidity, rain, transit drop, and vibration.

Definition/analysis: Any commercial standards that must be used in the system’s development. 1990 analysis: Vendors are instructed to design equipment in accordance with best commercial and industrial practice; software is to be designed in a modular fashion, preferably using a high-level programming language.
Definition/analysis: The system’s physical characteristics, including size, weight, color, nameplates, markings of parts and serial/lot numbers, transportability, and parts interchangeability. 1990 analysis: All requirements identified.

Definition/analysis: Requirements for preventing or minimizing unintended hazards to personnel, property, and the physical environment. 1990 analysis: All systems shall be designed to meet the requirements of the Occupational Safety and Health Administration.

Definition/analysis: Requirements for who will use or support the system, such as number of workstations and built-in help/training features. 1990 analysis: Vendors instructed to include information on number of personnel and skill level required to maintain the voting system.

Definition/analysis: Requirements for training devices and materials to be included with the system. 1990 analysis: Vendors instructed to document information required for system use and operator training, and orientation and training of poll workers, user maintenance technicians, and vendor personnel.

Definition/analysis: Requirements for system maintenance, software support, and system transportation. 1990 analysis: Vendors instructed to document information required in these three areas. Draft standards satisfied? Yes.

Definition/analysis: The process of recording information produced during the life-cycle process. Describes and records information about a product, the processes used to develop the product, and provides a history of what happened during the development and maintenance of the product. Includes (1) identification of documents to be produced and delivered to customer or tester, (2) identification of format, content, and presentation items for each document, and (3) review and approval process for each document. 1990 analysis: Requirements identify products to be produced, including the content and format of the documents. Review and approval process not specified.
Draft analysis: Products to be produced, including the content and format of the documents, as well as the review and approval process, are identified.

Definition/analysis: The process to establish and maintain the integrity of work products throughout the life-cycle process; it involves establishing product baselines and systematically controlling changes to them. The process should include (1) developing a configuration management plan, (2) identifying work products to be maintained and controlled, (3) establishing a repository to maintain and control the work products, (4) evaluating and approving changes to the products, (5) accounting for changes to the work products, and (6) managing the release and delivery of them. 1990 analysis: Includes requirements for (1) identifying work products to be maintained and controlled, (2) evaluating and approving changes to the products, and (3) managing the release and delivery of work products. The standards do not include requirements for developing a configuration management plan, establishing a repository to maintain and control the work products, and accounting for changes to the work products. Draft analysis: All areas identified.

Definition/analysis: The process that provides adequate assurance of the system development process. It typically involves independent review of work products and activities to ensure compliance with applicable development standards and procedures. The process should include (1) developing a quality assurance plan, (2) determining system development product and process standards to be followed, and (3) conducting reviews to ensure that the product and process standards are followed. 1990 analysis: None of these areas specified. Draft analysis: The need to document the hardware and software development process is specified, but a quality assurance plan and quality assurance reviews are not.
Events surrounding the last presidential election raised concerns about the people, processes, and technology used to administer elections. GAO has already reported on the scope of congressional authority in election administration and voting assistance to military and overseas citizens. This report focuses on the status and use of federal voting equipment standards, which define minimum functional and performance requirements for voting equipment. The standards define minimum life-cycle management processes for voting equipment developers to follow, such as quality assurance. No federal agency has been assigned explicit statutory responsibility for developing voting equipment standards; however, the Federal Election Commission (FEC) developed voluntary standards for computer-based systems in 1990, and Congress has provided funding for this effort. No federal agency is responsible for testing voting equipment against the federal standards. Instead, the National Association of State Election Directors accredits independent test authorities who test voting equipment against the standards.
The key objectives of U.S. public diplomacy are to engage, inform, and influence overseas audiences. Public diplomacy is carried out through a wide range of programs that employ person-to-person contacts; print, broadcast, and electronic media; and other means. Traditionally, the State Department’s efforts have focused on foreign elites—current and future overseas opinion leaders, agenda setters, and decision makers. However, the dramatic growth in global mass communications and other trends have forced a rethinking of this approach, and State has begun to consider techniques for communicating with broader foreign audiences. Since the terrorist attacks of September 11, 2001, State has expanded its public diplomacy efforts globally, focusing particularly on countries in the Muslim world considered to be of strategic importance in the war on terror. In May 2006, we reported that this trend continued with funding increases of 25 percent for the Near East and 39 percent for South Asia from 2004 to 2006. The BBG supports U.S. public diplomacy’s key objectives by broadcasting news and information about the United States and world affairs and serving as a model of how a free press should operate. The BBG manages and oversees the Voice of America (VOA), Radio/TV Marti, Radio Free Europe/Radio Liberty, Radio Free Asia, Radio Farda, Radio Sawa, and the Alhurra TV Network. As shown in figure 1, State and the BBG spent close to $1.5 billion on public diplomacy programs in fiscal year 2006. As others have previously reported, in recent years anti-American sentiment has spread and intensified around the world. For example, the Pew Global Attitudes Project has found that the decline in favorable opinion of the United States is a worldwide trend. For instance, favorable attitudes toward the United States in Indonesia declined from 75 percent in 2000 to 30 percent in 2006 and from 52 percent to 12 percent over the same time period in Turkey. 
While individual opinion polls may reflect a snapshot in time, consistently negative polls may reflect the development of more deeply seated sentiments about the United States. Numerous experts, expert groups, policymakers, and business leaders have expressed concerns that anti-Americanism may harm U.S. interests in various ways. In its 2004 report on strategic communication, the Defense Science Board states that “damaging consequences for other elements of U.S. soft power are tactical manifestations of a pervasive atmosphere of hostility.” Similarly, the Council on Foreign Relations has claimed that the loss of goodwill and trust from publics around the world has had a negative impact on U.S. security and foreign policy. Anti-American sentiments may negatively affect American economic interests, U.S. foreign policy and military operations, and the security of Americans. According to Business for Diplomatic Action, anti-Americanism can hurt U.S. businesses by causing boycotts of American products, a backlash against American brands, increased security costs for U.S. companies, higher foreign opposition to U.S. trade policies, and a decrease in the U.S.’s ability to attract the world’s best talent to join the American workforce. Additionally, a report from the Princeton-based Working Group on Anti-Americanism generally echoes the possibility that anti-Americanism may harm U.S. business interests in these same areas. Further, as reported by the Travel Business Roundtable during previous hearings before this subcommittee, the U.S. travel industry has reported significant declines in the U.S. market share of the worldwide travel market and a decline in overseas visitors to the United States since 9/11. Further, the State Department’s 2003 report on Patterns of Global Terrorism recorded 67 attacks on American business facilities and 7 business casualties.
In 2006, the Overseas Security Advisory Council noted that more threats against the private sector occurred in 2006 than in 2004 or 2005 in most of the industries it reports on. Finally, the Working Group on Anti-Americanism also indicated that threats to American private property and personnel working overseas have become constant in some regions, especially the Middle East, and have resulted in significantly increased security costs. According to the Defense Science Board, the Brookings Institution, and others, anti-Americanism around the world may reduce the U.S.’s ability to pursue its foreign policy goals, including efforts to foster diplomatic relationships with other foreign leaders and to garner support for the global war on terror. For instance, in October 2003, the Advisory Group on Public Diplomacy for the Arab and Muslim World reported that “hostility toward the U.S. makes achieving our policy goals far more difficult.” Specifically, according to a paper from the Working Group on Anti-Americanism, foreign leaders may seek to leverage anti-American sentiment in pursuit of their own political goals, which may then limit their future support for U.S. foreign policy. As these leaders achieve personal political successes based on their opposition to the United States, they may then be less likely to support U.S. foreign policy going forward. Further, the 9/11 Commission, the Council on Foreign Relations, and others have reported on the possibility that anti-Americanism may also serve as a barrier to success in the global war on terror and related U.S. military operations. Specifically, the 9/11 Commission report of July 2004 stated that perceptions of the United States’ foreign policies as anti-Arab, anti-Muslim, and pro-Israel have contributed to the rise in extremist rhetoric against the United States.
Further, the Council on Foreign Relations has argued that increasing hostility toward America in Muslim countries facilitates recruitment and support for extremism and terror. The Council on Foreign Relations also has identified potential consequences of anti-Americanism on the security of individual Americans, noting that Americans now face an increased risk of direct attack from individuals and small groups that wield increasingly destructive power. According to State’s Country Reports on Terrorism for 2005, 56 private U.S. citizens were killed as a result of terrorism incidents in 2005. The Working Group on Anti-Americanism suggests that there is some correlation between anti-Americanism and violence against Americans in the greater Middle East but notes that the relationship is complex. For example, they note that while increased anti-Americanism in Europe or Jordan has not led to violence against Americans or U.S. interests in those areas, it does seem to play a role in fueling such violence in Iraq. Other factors, such as the visibility of Americans overseas, particularly in Iraq; the role of the media in supporting anti-Americanism; and the absence of economic security may also contribute to this violence. While all of the topics discussed here represent areas in which anti-Americanism may have negative consequences, the empirical evidence to support direct relationships is limited. As such, we cannot confirm any causal relationships between negative foreign public opinion and specific negative outcomes regarding U.S. interests. Despite the fact that we cannot draw a direct causal link between anti-Americanism and specific outcomes in these areas, it is clear that growing negative foreign public opinion does not help the United States achieve its economic, foreign policy, and security goals, and therefore U.S. public diplomacy efforts, which seek to counter anti-American sentiment, have a critical role to play in supporting U.S.
interests throughout the world. Over the past 4 years, we have identified and made recommendations to State and the BBG on a number of issues related to a general lack of strategic planning, inadequate coordination of agency efforts, and problems with measuring performance and results. Among other things, we have recommended that (1) communication strategies be developed to coordinate and focus the efforts of key government agencies and the private sector, (2) the State Department develop a strategic plan to integrate its diverse efforts, (3) posts adopt strategic communication best practices, and (4) meaningful performance goals and indicators be established by both State and the BBG. Currently, the U.S. government lacks an interagency public diplomacy strategy; however, such a plan has been drafted and will be released shortly. While the department has articulated a strategic framework to direct its efforts, comprehensive guidance on how to implement this strategic framework has not yet been developed. In addition, posts generally do not pursue a campaign-style approach to communications that incorporates best practices endorsed by GAO and others. State has begun to take credible steps towards instituting more systematic performance measurement practices, consistent with recommendations GAO and others have made. Finally, although the BBG has added audience size as a key performance measure within its strategic plan, our latest review of the Middle East Broadcasting Networks’ (MBN) operations calls into question the potential value of this measure due to various methodological concerns. In 2003, we reported that the United States lacked a governmentwide, interagency public diplomacy strategy, defining the messages and means for communication efforts abroad. Since then, we have reported that the administration made a number of unsuccessful attempts to develop such a strategy.
The lack of such a strategy complicates the task of conveying consistent messages and therefore increases the risk of making damaging communication mistakes. State officials have said that it also diminishes the efficiency and effectiveness of governmentwide public diplomacy efforts, while several reports have concluded that a strategy is needed to synchronize agencies’ target audience assessments, messages, and capabilities. On April 8, 2006, the President established a new Policy Coordination Committee on Public Diplomacy and Strategic Communications. This committee, led by the Under Secretary for Public Diplomacy and Public Affairs, intends to better coordinate interagency activities, including the development of an interagency public diplomacy strategy. We have been told this strategy is still under development and will be issued soon. The U.S. government also lacks a governmentwide strategy and meaningful methods to ensure that recipients of U.S. foreign assistance are consistently aware that the aid comes from the United States. In March 2007, we reported that most agencies involved in foreign assistance activities had established some marking and publicity requirements in their policies, regulations, and guidelines, and used various methods to mark and publicize their activities. However, we identified some challenges to marking and publicizing U.S. foreign assistance, including the lack of a strategy for assessing the impact of marking and publicity efforts on public awareness and the lack of governmentwide guidance for marking and publicizing U.S. foreign aid. To better ensure that recipients of U.S. foreign assistance are aware that the aid is provided by the United States and its taxpayers, we recommended that State, in consultation with other U.S. government agencies, (1) develop a strategy to better assess the impact of marking and publicity programs on public awareness and (2) establish interagency agreements for marking and publicizing all U.S. 
foreign assistance. State indicated that the interagency public diplomacy strategy will address assessment of marking and publicity programs and will include governmentwide marking and publicity guidance. In 2005, we noted that State’s efforts to engage the private sector in pursuit of common public diplomacy objectives had met with mixed success and recommended that the Secretary develop a strategy to guide these efforts. Since then, State has established an Office of Private Sector Outreach, is partnering with individuals and the private sector on various projects, and hosted a Private Sector Summit on Public Diplomacy in January 2007. However, State has not yet developed a comprehensive strategy to guide the Department’s efforts to engage the private sector. In 2005, the Under Secretary established a strategic framework for U.S. public diplomacy efforts, which includes three priority goals: (1) offer foreign publics a vision of hope and opportunity rooted in the U.S.’s most basic values; (2) isolate and marginalize extremists; and (3) promote understanding regarding shared values and common interests between Americans and peoples of different countries, cultures, and faiths. The Under Secretary noted that she intends to achieve these goals using five tactics—engagement, exchanges, education, empowerment, and evaluation—and by using various public diplomacy programs and other means, including coordinating outreach efforts with the private sector. This framework partially responds to our 2003 recommendation that State should develop and disseminate a strategy to integrate its public diplomacy efforts and direct them toward achieving common objectives. State has not yet developed written guidance that provides details on how these five tactics will be used to implement the Under Secretary’s priority goals. 
However, it should be noted that the Under Secretary has issued limited guidance regarding the goal of countering extremism to 18 posts selected to participate in a pilot initiative focusing on this objective. We have recommended that State, where appropriate, adopt strategic communication best practices (which we refer to as the “campaign-style approach”) and develop country-specific communication plans that incorporate the key steps embodied in this approach. As shown in figure 2, these steps include defining the core message, identifying and segmenting target audiences, developing detailed communication strategies and tactics, and using research and evaluation to inform and re-direct efforts as needed. As noted in our May 2006 report, our review of public diplomacy operations in Nigeria, Pakistan, and Egypt in 2006 found that this approach and corresponding communication plans were absent. Rather, post public diplomacy efforts constituted an ad hoc collection of activities designed to support such broad goals as promoting mutual understanding. In a recent development, 18 posts participating in the department’s pilot countries initiative have developed country-level plans focusing on the countering extremism goal. These plans were developed on the basis of a template issued by the Under Secretary that requires each post to provide a list of supporting objectives, a description of the media environment, identification of key target audiences, and a list of supporting programs and activities. We reviewed most of the plans submitted in response to this guidance. Although useful as a high-level planning exercise, these plans do not adhere to the campaign-style approach, which requires a level of rigor and detail that normally exceeds the three- to four-page plans produced by posts in pilot countries. 
The plans omit basic elements, such as specific core messages and themes or any substantive evidence that proposed communication programs were driven by detailed audience research—one of the key principles embodied in the campaign-style approach. In the absence of such research, programs may lack important information about appropriate target audiences and credible messages and messengers. Based on prior reports by GAO and others, the department has begun to institute a more concerted effort to measure the impact of its programs and activities. The department created (1) the Office of Policy, Planning, and Resources within the office of the Under Secretary; (2) the Public Diplomacy Evaluation Council to share best practices; and (3) a unified Public Diplomacy Evaluation Office. The department established an expanded evaluation schedule that is designed to cover all major public diplomacy programs. The department also has called on program managers to analyze and define their key inputs, activities, outputs, outcomes, and impact to help identify meaningful performance goals and indicators. Finally, the department recently launched a pilot public diplomacy performance measurement data collection project that is designed to collect, document, and quantify reliable annual and long-term outcome performance measures to support government reporting requirements. In 2001, the BBG introduced a market-based approach to international broadcasting that sought to “marry the mission to the market.” This approach was designed to generate large listening audiences in priority markets that the BBG believes it must reach to effectively meet its mission. Implementing this strategy has focused on markets relevant to the war on terrorism, in particular in the Middle East through such key initiatives as Radio Sawa and the Alhurra TV network. The Board’s vision is to create a flexible, multimedia, research-driven U.S. international broadcasting system.
We found that the BBG’s strategic plan to implement its new approach did not include a single goal or related program objective designed to gauge progress toward increasing audience size, even though its strategy focuses on the need to reach large audiences in priority markets. The BBG subsequently created a single strategic goal to focus on the key objective of maximizing impact in priority areas of interest to the United States and made audience size a key performance measure. However, in our August 2006 review of the Middle East Broadcasting Networks, we found that methodological concerns call into question the potential accuracy of this key performance measure with regard to Radio Sawa’s listening rates and Alhurra’s viewing rates. Specifically, we found that weaknesses in the BBG’s audience surveys create uncertainty over whether some of Radio Sawa’s or Alhurra’s performance targets for audience size have been met. We recommended that the BBG improve its audience research methods, including identifying significant methodological limitations. The BBG accepted our recommendation and has informed us that it is currently considering how it will do so. Public diplomacy efforts in the field face several other challenges. Beginning with our September 2003 report on State’s public diplomacy efforts, post officials have consistently cited several key challenges, including a general lack of staff, insufficient administrative support, and inadequate language training. Furthermore, public diplomacy officers struggle to balance security with public access and outreach to local populations. Finally, the BBG’s disparate organizational structure has been viewed as a key management challenge that significantly complicates its efforts to focus and direct U.S. international broadcasting efforts. Although several recent reports on public diplomacy have recommended an increase in U.S. 
public diplomacy program spending, several embassy officials stated that, with current staffing levels, they do not have the capacity to effectively utilize increased funds. According to State, the Department had 887 established public diplomacy positions (overseas and domestic) as of March 31, 2007, but 199, or roughly 22 percent, were vacant. Compounding this challenge is the loss of public diplomacy officers to temporary duty in Iraq, which, according to one State official, has drawn down field officers even further. Staffing shortages may also limit the amount of training public diplomacy officers receive. State is repositioning several public diplomacy officers as part of its transformational diplomacy initiative. However, this effort represents shifting existing public diplomacy officers and does not increase the overall number of officers, which we have noted were generally the same in fiscal years 2004 and 2006. In addition, public diplomacy officers at posts are burdened with administrative tasks, and thus have less time to conduct public diplomacy outreach activities than they did previously. One senior State official said that administrative duties, such as budget, personnel, and internal reporting, compete with officers’ public diplomacy responsibilities. Another official in Egypt stated that she rarely had enough time to strategize, plan, or evaluate her programs. These statements echo comments we heard during overseas fieldwork and in a survey for our 2003 report. In that survey, officers stated that, although they manage to attend public outreach and other functions within their host country capitals, it was particularly difficult to find time to travel outside the capitals to interact with other communities. 
This challenge is compounded at posts with short tours of duty, including many tours in the Muslim world, as officials stated that it is difficult to establish the type of close working relationships essential to effective public diplomacy work when they are in country for only a short time. In our May 2006 report, we noted that the average length of tour at posts in the Muslim world is about 22 percent shorter than tour lengths elsewhere. Noting the prevalence of 1-year tours in the Muslim world, a senior official at State said that public affairs officers who have shorter tours tend to produce less effective work than officers with longer tours. To address these challenges, we recommended in 2003 that the Secretary of State designate more administrative positions to overseas public affairs sections to reduce the administrative burden. Officials at State said that the Management bureau is currently considering options for reducing the administrative burden on posts, including the development of centralized administrative capabilities offshore. In August 2006, GAO reported that the State Department continued to experience significant foreign language proficiency shortfalls in countries around the world. Our May 2006 report noted this problem was particularly acute at posts in the Muslim world where Arabic—classified as a “superhard” language by State—predominates. In countries with significant Muslim populations, we reported that 30 percent of language-designated public diplomacy positions were filled by officers without the requisite proficiency in those languages, compared with 24 percent elsewhere. In Arabic language posts, about 36 percent of language-designated public diplomacy positions were filled by staff unable to speak Arabic at the designated level. In addition, State officials said that there are even fewer officers who are willing or able to speak on television or engage in public debate in Arabic.
The information officer in Cairo stated that his office does not have enough Arabic speakers to engage the Egyptian media effectively. Figure 3 shows the percentage of public diplomacy positions in the Muslim world staffed by officers meeting language requirements. State has begun to address these language deficiencies by increasing its overall amount of language training and providing supplemental training for more difficult languages at overseas locations. State has also made efforts to ensure that its public diplomacy staff receive appropriate language training. For example, State’s Foreign Service Institute recently offered a week of intensive media training for language-qualified officers that provided guidance on how to communicate with Arabic-speaking audiences. Security concerns have limited embassy outreach efforts and public access, forcing public diplomacy officers to strike a balance between safety and mission. Shortly after the terrorist attacks of September 11, 2001, then-Secretary of State Colin Powell stated, “Safety is one of our top priorities…but it can’t be at the expense of the mission.” In our May 2006 report, we noted that security concerns are particularly elevated in countries with significant Muslim populations, where the threat level for terrorism is rated as “critical” or “high” in 80 percent of posts. Security and budgetary concerns have led to the closure of publicly accessible facilities around the world, such as American Centers and Libraries. In Pakistan, for example, all American Centers have closed for security reasons; the last facility, in Islamabad, closed in February 2005. These same concerns have prevented the establishment of a U.S. presence elsewhere.
As a result, embassies have had to find other venues for public diplomacy programs, and some activities have been moved onto embassy compounds, where precautions designed to improve security have had the ancillary effect of sending the message that the United States is unapproachable and distrustful, according to State officials. Concrete barriers and armed escorts contribute to this perception, as do requirements restricting visitors’ use of cell phones and pagers within the embassy. According to one official in Pakistan, visitors to the embassy’s reference library have declined to as few as one per day because many visitors feel humiliated by the embassy’s rigorous security procedures. Other public diplomacy programs have had to limit their publicity to reduce the risk of becoming a target. A recent joint USAID-State report concluded that “security concerns often require a ‘low profile’ approach during events, programs or other situations, which, in happier times, would have been able to generate considerable good will for the United States.” This constraint is particularly acute in Pakistan, where the embassy has had to reduce certain speaker and exchange programs. State has responded to security concerns and the loss of publicly accessible facilities through a variety of initiatives, including American Corners, which are centers that provide information about the United States, hosted in local institutions and staffed by local employees. According to State data, there are currently 365 American Corners throughout the world, including more than 200 in the Muslim world, with another 31 planned (more than 20 of which will be in the Muslim world). However, two of the posts we visited in October 2005 were having difficulty finding hosts for American Corners, as local institutions fear becoming terrorist targets. 
The Broadcasting Board of Governors has its own set of public diplomacy challenges, including trying to gain large audiences in priority markets while dealing with a disparate organizational structure that contains multiple discrete broadcasters (see fig. 4). As noted in the BBG’s strategic plan, “the diversity of the BBG—diverse organizations with different missions, different frameworks, and different constituencies—makes it a challenge to bring all the separate parts together in a more effective whole.” As we reported in July 2003, the Board hoped to address this key challenge through two primary means. First, it planned to treat the component parts of U.S. international broadcasting as a single system with the Board in the position of actively managing resources across broadcast entities to achieve common broadcast goals. Second, it intended to realign the BBG’s organizational structure to reinforce the Board’s role as CEO with a host of responsibilities, including taking the lead role in shaping the BBG’s overall strategic direction, setting expectations and standards, and creating the context for innovation and change. In addition, in 2006, we found that MBN, which received $79 million in funding in fiscal year 2006, faces several managerial and editorial challenges that may hinder the organization’s efforts to expand in its highly competitive market. While MBN has taken steps to improve its process of program review and evaluation, it has not yet implemented our recommendations to improve its system of internal control or develop a comprehensive staff training plan. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other members of the subcommittee may have at this time. For questions regarding this testimony, please contact Jess T. Ford at (202) 512-4128 or [email protected]. 
Individuals making key contributions to this statement include Audrey Solis, Assistant Director; Michael ten Kate; Eve Weisberg; Kate France Smiles; and Joe Carney.

Foreign Assistance: Actions Needed to Better Assess the Impact of Agencies’ Marking and Publicizing Efforts. GAO-07-277. Washington, D.C.: Mar. 12, 2007.
U.S. International Broadcasting: Management of Middle East Broadcasting Services Could Be Improved. GAO-06-762. Washington, D.C.: Aug. 4, 2006.
Department of State: Staffing and Foreign Language Shortfalls Persist Despite Initiatives to Address Gaps. GAO-06-894. Washington, D.C.: Aug. 4, 2006.
U.S. Public Diplomacy: State Department Efforts to Engage Muslim Audiences Lack Certain Communication Elements and Face Significant Challenges. GAO-06-535. Washington, D.C.: May 3, 2006.
U.S. Public Diplomacy: State Department Efforts Lack Certain Communication Elements and Face Persistent Challenges. GAO-06-707T. Washington, D.C.: May 3, 2006.
International Affairs: Information on U.S. Agencies’ Efforts to Address Islamic Extremism. GAO-05-852. Washington, D.C.: Sept. 16, 2005.
U.S. Public Diplomacy: Interagency Coordination Efforts Hampered by the Lack of a National Communication Strategy. GAO-05-323. Washington, D.C.: Apr. 4, 2005.
U.S. Public Diplomacy: State Department and Broadcasting Board of Governors Expand Post-9/11 Efforts but Challenges Remain. GAO-04-1061T. Washington, D.C.: Aug. 23, 2004.
U.S. Public Diplomacy: State Department and the Broadcasting Board of Governors Expand Efforts in the Middle East but Face Significant Challenges. GAO-04-435T. Washington, D.C.: Feb. 10, 2004.
U.S. Public Diplomacy: State Department Expands Efforts but Faces Significant Challenges. GAO-03-951. Washington, D.C.: Sept. 4, 2003.
U.S. International Broadcasting: New Strategic Approach Focuses on Reaching Large Audiences but Lacks Measurable Program Objectives. GAO-03-772. Washington, D.C.: July 15, 2003.

This is a work of the U.S. 
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since the terrorist attacks of 9/11, polling data have generally shown that anti-Americanism has spread and deepened around the world, and several groups have concluded that this trend may have harmed U.S. interests in significant ways. U.S. public diplomacy activities undertaken by the State Department (State) and the Broadcasting Board of Governors (BBG), which totaled almost $1.5 billion in fiscal year 2006, are designed to counter such sentiments. Based on our prior reports, this testimony addresses (1) the negative consequences various groups have associated with rising anti-American sentiments; (2) strategic planning, coordination, and performance measurement issues affecting U.S. public diplomacy efforts; and (3) key challenges that hamper agency activities. Numerous experts, policymakers, and business leaders have identified various potential negative consequences of growing anti-Americanism. According to these sources, anti-Americanism may have a negative impact on American economic interests, the ability of the United States to pursue its foreign policy and military goals, and the security of Americans worldwide. Our reports and testimonies have highlighted the lack of a governmentwide communication strategy, as well as the need for an integrated State Department strategy, enhanced performance indicators for State and the BBG, and improvements in the BBG's audience research methodology. We also reported in March 2007 that U.S. foreign assistance activities were not being consistently publicized and branded, and we recommended that State help develop governmentwide guidance for marking and publicizing these efforts. State has responded to our recommendations and has taken actions to develop a more strategic approach and measure the effectiveness of its programs. 
Likewise, the BBG has adapted its strategic plan to include additional performance indicators and is beginning to address our recommendations to adopt management improvements at its Middle East Broadcasting Networks (MBN). Nevertheless, State and the BBG continue to face challenges in implementing public diplomacy and international broadcasting. State has shortages in staffing and language capabilities, and security issues continue to hamper overseas public diplomacy efforts. For example, in 2006 we reported that State continued to experience significant foreign language proficiency shortfalls, particularly at posts in the Muslim world. The BBG faces challenges in managing a disparate collection of broadcasters. Also, MBN faces several managerial challenges involving program review, internal control, and training.
The Academy is headed by a Superintendent. The Superintendent reports directly to the head of MARAD, the Maritime Administrator. MARAD, an agency of the Department, is responsible for overseeing and monitoring the Academy. A Deputy Superintendent and four Assistant Superintendents—Administration, Regimental Affairs, Academic Affairs, and Plans, Assessment, and Public Affairs—report directly to the Superintendent and are the principal officials responsible for carrying out the Academy’s operations. Academy components relevant to the issues discussed in this report include the following. The Department of Resource Management (DRM) provides bookkeeping, payroll, and other administrative support services for the Academy. Further, for most of fiscal years 2006 and 2007, the director of DRM was also the head of the Fiscal Control Office (FCO), a nonappropriated fund instrumentality (NAFI). During fiscal years 2006 and 2007, the Director of DRM reported to the Academy’s Deputy Superintendent. When the position of Deputy Superintendent was not occupied, the Director of DRM reported directly to the Superintendent. The Department of Waterfront Activities operates the waterfront area of the Academy’s property, maintains the Kings Pointer and other training vessels, and provides training to midshipmen. Further, the Department of Waterfront Activities collaborates with two NAFIs on waterfront-related activities (the Sail, Power and Crew Association, and the Global Maritime and Transportation School (GMATS)), and the Sailing Foundation, a private nonprofit foundation. The Director of Waterfront Activities reported to the Deputy Superintendent, or when that position was not occupied, the Director reported directly to the Superintendent. The Department of Information Technology provides information technology services for all Academy operations and midshipmen. 
The Director of Information Technology reported to the Deputy Superintendent or, when that position was not occupied, directly to the Superintendent. The Department of Health Services provides medical and dental services to midshipmen. The Director reported to the Assistant Superintendent for Administration as well as the Deputy Superintendent or the Superintendent. An overview of the organizational relationships of the Department of Transportation, MARAD, selected components within the Academy, as well as the Academy’s affiliated NAFIs and foundations is provided in figure 1. The Academy carries out its mission and operations primarily using appropriated funds. The Academy’s 14 affiliated NAFIs operate using the proceeds from their own operations, rather than with appropriated funds. NAFIs are organizations that typically provide for the morale, welfare, and recreation (MWR) of government officers and employees. These items and services support government employees and officers in carrying out the government’s business by fulfilling their MWR needs. For example, tailoring, haircuts, and laundry services provided by the Academy’s NAFIs are examples of MWR services that generally should not be paid for from appropriated funds. In addition to the 12 MWR-type NAFIs, the Academy has two other affiliated NAFIs: the FCO, which provides bookkeeping, payroll, and other administrative support services, and GMATS, which provides training to other federal agencies and to the maritime industry. The activities of the FCO with respect to the issues discussed in this report include the following. The FCO was responsible for bookkeeping, payroll, and administrative support services for 12 of the other 13 NAFIs and also handled payroll for the Athletic Association. 
The Athletic Association handled its own bookkeeping, and GMATS handled all of its own bookkeeping and payroll functions. The FCO was also responsible for the collection of all midshipmen fees for the Academy and the payment of amounts to other NAFIs, vendors, and others from the fees collected. The FCO also collected funds from GMATS that were provided for the use and benefit of the Academy. Further, the FCO was responsible for maintaining books and records for “prior years’ reserves” from the excess of midshipmen fees collected over payments made, as discussed in this report. The FCO maintained various commercial checking accounts for activities related to its collection and payment responsibilities for Academy funds. FCO and DRM staff and managers performed interchangeable functions. The manager of the FCO (the same individual as the head of DRM) reported on FCO matters to the Deputy Superintendent or, when that position was not occupied, directly to the Superintendent. The Academy’s 14 affiliated NAFIs and 2 affiliated foundations are listed in table 1. Appendix II provides more detail on the relationships and financial activity between the Academy and its affiliated organizations. Table 2 shows the amount and sources of the Academy’s funding for fiscal years 2006 and 2007. Amounts received by the Academy for capital improvement, totaling $15.9 million and $13.8 million for fiscal years 2006 and 2007, respectively, were to be used for capital assets, including certain related expenses. The Academy’s payments to NAFIs and total expenses for fiscal years 2006 and 2007 are shown in Appendix III. The Academy’s payment activity with its 14 NAFIs was significant in relation to the Academy’s total expenses. For fiscal year 2006, Academy expenses of $55.7 million included $9.6 million to its affiliated NAFIs, representing over 17 percent of total Academy expenses. 
Similarly, for fiscal year 2007, Academy expenses of $62.0 million included $13.4 million to its NAFIs, representing over 21 percent of total Academy expenses. Payments to NAFIs were generally classified in the Academy’s financial records as contractual services; operations and maintenance; and gifts and bequests. The total amounts of payments that the Academy made to its NAFIs in fiscal years 2006 and 2007 are shown in table 3. The Academy is to provide each midshipman with free tuition, room and board, as well as limited medical and dental care. However, under MARAD regulations, the Academy requires each midshipman to pay fees for items or services generally of a personal nature (hereafter “goods or services” or “personal items”) each academic year. The Academy treats all fees collected as non-appropriated funds when the good or service is provided by a NAFI, such as the laundry and haircut services provided by the Ship’s Services Store, or by a department of the Academy, such as Information Technology, which provides internet services and personal computers to the midshipmen. The FCO collects all midshipmen fees on behalf of the Academy and also makes payments to vendors and others from the fees collected. For fiscal years 2006 and 2007, the FCO collected about $7 million in total midshipmen fees. Under the fee schedule in effect for the 2007-2008 academic year, fees totaled $15,560 per midshipman over the course of a 4-year education and ranged from $2,410 to $7,020 per year, depending on class year. Details on midshipmen fee collections for fiscal years 2006 and 2007 are shown in table 4. Our review identified instances of improper and questionable sources and uses of funds by the Academy and its affiliated NAFIs, some of which violated laws, including the Antideficiency Act (ADA). Specifically, we identified improper and questionable sources and uses of midshipmen fees and questionable financial activity associated with GMATS and other NAFIs. 
The improper and questionable activities and transactions that we identified demonstrate that the Academy did not have assurance that it complied with applicable fund control requirements, including those in the ADA. Further, the Academy could not effectively carry out its important stewardship responsibilities with respect to maintaining accountability over the collection and use of funds, including assuring that funds were collected and used only for authorized purposes. As discussed in this report, the primary causes of these improper and questionable sources and uses of funds can be attributed to a weak control environment and the flawed design and implementation of internal controls at the Academy, including inadequate oversight and monitoring by the Academy and MARAD. MARAD regulations provide that the Academy can collect fees from all midshipmen to pay for “personal” goods and services. However, we found a number of improper and questionable activities concerning the Academy’s and its affiliated NAFIs’ collection and use of midshipmen fees. Specifically, we identified improper and questionable midshipmen fee-related transactions with respect to (1) collections for goods and services that were not the midshipmen’s responsibility, (2) collected amounts that exceeded the actual expense to the Academy for the goods or services provided to the midshipmen, and (3) the use of accumulated fee reserves for questionable purposes. We also identified improper and questionable uses of the fees collected. For fiscal years 2006 and 2007, the Academy collected fees of approximately $7 million from midshipmen. We nonstatistically selected four midshipmen fee categories for review. We found that the total fees collected for these four midshipmen fee categories, about $1.5 million, were questionable because they did not appear to be for items of a personal nature to each midshipman, but rather for expenses that would normally be paid by the Academy from appropriated funds. 
Specifically, over the 2006 and 2007 fiscal years, we found that the Academy collected questionable midshipmen fees for waterfront activities, processing services, information technology services, and medical services. We also identified potentially improper payments from these questionable fee collections totaling approximately $1.2 million that were paid to NAFIs and vendors, including the Sail, Power and Crew Association (SP&C) for waterfront activities, the FCO for processing services, and vendors for information technology services. There may be other improper and questionable collections and uses of midshipmen fees that our review did not identify. To the extent these collections and the uses of these funds improperly covered Academy expenses that are chargeable to Academy appropriations, the Academy improperly augmented its appropriated funds, which may have resulted in violations of the ADA, 31 U.S.C. §1341(a), by incurring obligations or expenditures in excess of available appropriations. We did not independently assess the amount of such improper augmentations. Waterfront activities: We found that for the 2006 and 2007 fiscal years, a total of $318,187 was collected from all midshipmen for these activities. The fees collected for waterfront activities do not represent personal services properly chargeable to all midshipmen because not all midshipmen used the Academy’s waterfront facilities. For example, waterfront activities such as sailing competitions, varsity water sport teams, and power vessel training are elective activities in the Academy’s curriculum for midshipmen. Processing services: We found that for the 2006 and 2007 fiscal years, the FCO collected $65,712 from the midshipmen for FCO’s processing services. 
The FCO retained all processing fees without adequate supporting documentation for how the amount collected was determined, why a processing fee was due, or why the amount should be funded by collections from all midshipmen. Processing expenses incurred by FCO represent administrative expenses. The administrative expenses may be attributable to services provided by FCO to midshipmen. However, without adequate supporting documentation, we could not make such a determination. Information technology services: We found that for the 2006 and 2007 fiscal years, the Academy collected $839,309 from the midshipmen for information technology services. Such services are not all “personal” to the Academy’s midshipmen. However, the Academy used these fees to support operations of the Department of Information Technology that are otherwise funded by Academy appropriations. Medical services: We found that for the 2006 and 2007 fiscal years the Academy used $2,293,884 in appropriated funds to pay for medical and dental services for midshipmen under a contractual agreement with a local hospital. However, it also collected $288,813 in midshipmen fees for the same services. Academy officials did not provide us with any support for how the annual amounts assessed midshipmen for contracted hospital services were determined. The midshipmen fees collected were, according to Academy officials, held by the FCO in a reserve for “rainy-day” purposes. We were told by the same officials that the fees collected from the midshipmen represented the amount the Academy believed to be necessary to cover possible rate adjustments under the contract with the hospital. We reviewed the payments by the Academy to the hospital for the years 2006 and 2007 and found that the amounts paid based on actual usage were less than the estimated expense per the contract. 
In addition to assessing and collecting fees unrelated to goods or services that are personal to all midshipmen, we found that the Academy collected fees from midshipmen that exceeded its actual expenses for providing goods or services to its midshipmen. For example, the Academy collected $2,400 from each plebe midshipman during fiscal years 2006 and 2007 for computers, including a printer and peripheral equipment. For the 2006 and 2007 fiscal years, available records show the Academy collected a total of $1,278,266 from the midshipmen in fees for these personal computers and related equipment. Over the same period, the Academy paid a total of $863,859 to vendors for computers and related equipment—leaving an excess of $414,407 in collections over the related expenses. Thus, the amount collected from the midshipmen for computers represented 148 percent of the actual expense to the Academy for these items over a 2-year period. Academy officials told us they were aware of these excessive collections but did not take action to refund the excess collections or reduce the fees charged the midshipmen for this equipment; instead, they chose to use the excess collections to support the Academy’s operations. The Academy, using the FCO, had inappropriately used “off-book” reserves accumulated from the excess of midshipmen fees collected over payments made to vendors and others for goods and services. For example, a “Superintendent’s Reserve” was created and used to make discretionary payments authorized by the Academy Superintendent. Our review of available records determined that for the 3 years ended September 30, 2008, deposits to the “off-book” reserves totaled $1,325,669 and payments and transfers from the account totaled $605,347, with a balance of $999,315 at September 30, 2008. We found no evidence that the $605,347 in payments from these “off-book” reserves were for purposes consistent with the fee collections. 
Consequently, we consider the entire $605,347 in payments from these reserves questionable; to the extent the payments were used to cover Academy expenses, they constitute an improper augmentation of the Academy’s appropriations, which results in violations of the ADA, 31 U.S.C. §1341(a), if the obligations incurred exceed available appropriations. For example, use of the excess fee amounts to support the Academy’s Department of Information Technology constitutes an improper augmentation of the Academy’s appropriation for its operations. We did not independently assess the amount of such improper augmentations. As summarized in table 5, and briefly discussed in the text that follows the table, our analysis of FCO’s records of 10 payments selected on a nonstatistical basis illustrates the types of questionable payments made from FCO’s accumulation of excess midshipmen fees from prior years’ midshipmen fee reserves during the 3-year period ending September 30, 2008, on behalf of the Academy. 1. Blackbaud accounting system for the FCO: This system is used by FCO to provide bookkeeping services for the 12 NAFIs for which the FCO provides such service. The total consulting fee and installation cost for the system, per a February 2008 contract between the vendor and the FCO, was $75,000. As a NAFI system, the entire cost of the new system should have been funded using non-appropriated funds. Through January 2009, we found that payments of $51,173, including the $4,965 payment we reviewed, were made to the vendor using midshipmen fees. An additional $10,581 was paid using Academy appropriated funds, $5,963 was paid using FCO funds, and $5,963 was paid using GMATS funds. Academy officials said that midshipmen fees as well as Academy appropriated funds were used to partially fund the system because other funding was not available at the time to pay the Blackbaud invoices. 2. 
Payroll costs for Academy employee: An Academy official said that the payment was to transfer funds from the midshipmen fee account to the FCO’s account to cover the payroll for the upcoming fiscal year for an Academy employee who reported directly to the Academy’s academic dean. 3. Settlement of complaint: The payment support consisted of a copy of the check stub with the notation “Settlement fee for EEO complaint.” No documentation was provided to us to support why such a payment should be funded using midshipmen fees. 4. Donation to start-up Museum NAFI: The payment support consisted of a copy of the payment stub with the notation “To start-up Museum NAFI.” No documentation was provided to us on how the payment related to fees collected from midshipmen. 5. Payroll for Regimental Morale Fund Association NAFI: The support for this payment was a check stub with the explanation: “To cover the amount due to FCO for Morale Fund payroll according to a June 30, 2006 FCO analysis.” No information was provided as to why payroll of the Regimental Morale Fund Association NAFI would be paid from midshipmen fees. We were told that of 45 employees of the Morale Fund, 25 were paid from non-appropriated funds, 19 were paid from appropriated funds, and 1 was paid with a combination of appropriated and non-appropriated funds. The payment support indicates that the payment is for adjustments to the payroll costs for several of the persons paid from non-appropriated funds. 6. Education program on alcohol: Payment was for an educational program for the midshipmen on alcohol. Academy officials did not provide any explanation as to why this item of expense was not considered an ordinary and necessary expense of the Academy payable from appropriated funds. 7. Weight control program: Payment was for “At Work Series,” a Weight Watchers International program. 
However, we were provided with no information on either why this item was not considered a personal expense of the midshipmen in the weight control program or why the item was not considered a necessary expense of the Academy payable from appropriated funds. 8. Transfer to current year’s midshipmen fees account: The only documentation supporting this payment was a copy of the payment voucher with the explanation “transfer prior year computer money to current.” An FCO official told us that the payment was to transfer prior year midshipmen fees for computer services—the excess of collections over payments made for goods and services—to the current year’s midshipmen fees account to be used to pay numerous invoices to the Academy from a provider of information technology services. Academy officials did not provide any information on why expenses payable from the Academy’s appropriated funds would be paid from midshipmen fees collected. 9. Tire chain system for Academy ambulance: Academy officials did not provide us with any information on why this item was not considered a necessary expense of the Academy payable from appropriated funds, rather than from funds collected through midshipmen fees. 10. Computer equipment lease: The $71,833 in midshipmen fee reserves was paid toward a $106,217 installment on a 3-year computer equipment lease (under a “lease to purchase” agreement). The balance of the installment payment was paid with current year midshipmen fees. Payments under this agreement totaled $318,651 over 3 fiscal years; $178,050 (including the $71,833 above) was funded with prior years’ midshipmen fees and $140,601 with current year midshipmen fees. 
An Academy official told us that prior years’ and current year’s midshipmen fees were used for these payments because “the Academy did not have sufficient appropriated funds to dedicate to this purchase.” Academy officials did not provide any information on why amounts payable from the Academy’s appropriated funds would be paid, in part, from midshipmen fees collected. The Academy did not provide us with any information as to why excess fees collected from all midshipmen and transferred to prior years’ reserves were considered an appropriate source of funds for any of these payments. We found that the Academy (1) improperly entered into sole-source agreements with GMATS to provide training services to other federal agencies and (2) inappropriately accepted and used GMATS funds. In addition, we found other improper and questionable transactions, including the Academy’s obligating and transferring appropriated funds to the FCO in order to preserve or “park” the funds for future use, and the Athletic Association NAFI’s retention of fees paid to this NAFI for use of the Academy’s property. During fiscal years 2006 and 2007, the Academy improperly entered into over $6 million in agreements for GMATS to provide training services to other federal agencies on a non-competitive basis. The Academy accepted interagency orders under the Economy Act as legal authority for its use of sole-source procurements. Based on our review of the transactions between the Academy and GMATS, we concluded that the Academy’s non-competitive awards to GMATS and the lack of proper contractual agreements under the Federal Acquisition Regulation may be improper procurements. For example, the Department provided us with no documentation to support a legitimate justification for the Academy’s non-competitive awards to GMATS. 
Although the services were provided by GMATS to the Academy (that, in turn, provided the services to other federal agencies under the Economy Act) under what likely constitute improper non-competitive contracts, the Department did not provide us with information supporting its reimbursements to GMATS of approximately $6 million for its costs under these agreements. During 2006 and 2007, the Academy also received funds from the GMATS NAFI and directed the GMATS NAFI to make payments on the Academy’s behalf without clear legal authority. Specifically, in fiscal years 2006 and 2007, we found that the FCO received $193,022 and $186,113, respectively, which were described in GMATS records as annual contributions for the benefit of the Academy of 5 percent of GMATS’s gross profits. The records of GMATS further described these amounts as funds to be utilized by the Academy for incremental costs incurred from GMATS’s use of the Academy’s campus facilities. The Academy did not have records or analysis of whether the amount received bore any relationship to estimated or actual costs the Academy may have incurred. We also found that GMATS made payments to the FCO that were held in a reserve for subsequent disbursement at the direction of Academy officials. We were told that the payments to the FCO were to compensate the Academy for various items such as use of the engineering lab; use of the ship’s bridge simulator, a specialized training device; and use of a professor’s time—all for GMATS business. According to GMATS records, the amount paid by GMATS for these items totaled $52,124 in 2006; Academy officials told us that this practice was discontinued in 2007. In February 2008, the Administrator reported to the Deputy Secretary of Transportation that the use of these reserves may have violated the ADA’s prohibition on obligating or expending amounts in excess of available appropriations. 
We found that the Academy also may have violated the “Miscellaneous Receipts” statute, 31 U.S.C. § 3302(b), by failing to immediately deposit all the funds received from GMATS into the general fund of the U.S. Treasury. Further, the use of the Superintendents Reserve fund for official Academy expenses appears to constitute an improper augmentation of the Academy’s appropriated funds, which results in violations of the ADA, 31 U.S.C. §1341(a), if the obligations incurred exceed available appropriations. We found that the Academy improperly entered into agreements with the FCO NAFI to prevent a cumulative total of almost $389,000 in annual appropriations from expiring (“parking funds”) at the ends of fiscal years 2006 and 2007. The Academy later transferred the $389,000 to the FCO for future use rather than allowing the funding to expire in accordance with the appropriation account closing law, 31 U.S.C. §1553. For example, one agreement for $200,000 stated that the purpose was to provide accounts payable services to the Academy during fiscal year 2007 year-end. These agreements were improper because there was no underlying economic substance to them and there was not any description of deliverables under the agreement, such as a statement of work. We were told that the agreement with FCO was entered into to reserve funds at the end of the year that would otherwise have expired. We also found that, of the $389,000 received from the Academy, FCO used approximately $175,000 to subsequently pay for items of expense and, at the direction of MARAD, returned $214,000 to the Academy in March 2008. In addition, we found that in October 2007, FCO transferred $270,000 from this reserve to the FCO’s payroll checking account for what FCO officials described as a “payroll loan”. The loan was repaid in full on December 11, 2007. 
However, Academy officials told us they did not have any support and that their inquiries on this issue had not produced an explanation as to why Academy resources would be used for a loan to the FCO. We were told by FCO officials that the transactions were based on a need for the funds as determined by staff and that no formal loan documents or other written supporting documentation existed. In March 2009, the Secretary of Transportation reported to the President, the Congress, and the Comptroller General numerous unidentified transactions in fiscal years 2005, 2006, and 2007, totaling $397,740, as violations of section 1341(a)(1)(B) of the ADA, which prohibits the involvement of the government in a contract or obligation before an appropriation is made. As discussed above, the Academy recorded obligations against its fixed-year appropriated funds to reflect transfers to the FCO, via a MARAD “Form 949.” MARAD officials investigated transactions occurring in fiscal years 2005, 2006, and 2007 to determine if these transfers constituted illegal “parking” of fiscal year appropriations and violations of the ADA. They found that the executed forms, in a net amount totaling $397,740, did not represent bona fide needs of the Academy for specific goods or services at the time they were made and, therefore, did not reflect valid obligations. Recording invalid obligations against current fixed-year appropriations for the purpose of using the appropriations in a subsequent year constitutes illegal parking of the funds. We found questionable billing and payment transactions related to the use of the Academy’s training ship and other Academy boats. Specifically, we found that the SP&C NAFI, and not the Academy, billed the user of the Kings Pointer, GMATS.
The GMATS NAFI used the Academy’s 224-foot training vessel, the Kings Pointer, as well as other Academy vessels, to provide training and education to other organizations or individuals from the marine community during fiscal years 2006 and 2007. GMATS remitted payments to the Academy for the use of its vessels, for which the Academy then remitted a portion of the funds to another Academy NAFI (the Sail, Power and Crew Association) and retained a portion. Available records show that of the $366,906 the Academy received for use of the Kings Pointer during fiscal years 2006 and 2007, the Academy made payments totaling $217,848 to SP&C. The portion of fees the Academy received that were remitted to the SP&C varied from about 50 percent of receipts to over 70 percent based on directions received from SP&C. However, no documentation was provided to support the amount or percentages of these Academy payments to the SP&C. Further, we found that the Academy may have violated the “Miscellaneous Receipts” statute, 31 U.S.C. §3302(b), by failing to immediately deposit all the funds received from GMATS into the general fund of the U.S. Treasury. Finally, without adequate supporting documentation, the entire $217,848 in Academy payments to the SP&C related to the outside use of the Kings Pointer during fiscal years 2006 and 2007 is questionable. We found that the Athletic Association NAFI operated camps and clinics on Academy property and that the Athletic Association NAFI, and not the Academy, was compensated for the use of government property. We also found that instructors who were compensated in part by the Academy participated in these commercial activities on Academy property in return for a share of the proceeds from those activities. During fiscal year 2008, the Athletic Association collected $94,077 in fees for conducting athletic camps and clinics. 
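The Kings Pointer usage-fee figures reported above can be cross-checked with simple arithmetic. The sketch below uses only amounts stated in this report; because the 50 to over 70 percent shares applied to individual payments, the ratio computed here is only an aggregate check, not a per-payment figure.

```python
# Cross-check of the Kings Pointer usage-fee figures (fiscal years 2006-2007).
# All dollar amounts are taken directly from the report.
receipts_kings_pointer = 366_906   # fees the Academy received for use of the vessel
paid_to_spc = 217_848              # portion the Academy remitted to the SP&C NAFI

retained_by_academy = receipts_kings_pointer - paid_to_spc
aggregate_share_to_spc = paid_to_spc / receipts_kings_pointer

print(f"Retained by Academy: ${retained_by_academy:,}")            # $149,058
print(f"Aggregate share remitted to SP&C: {aggregate_share_to_spc:.1%}")  # 59.4%
```

The aggregate share of about 59 percent falls within the 50 to over 70 percent range reported for individual remittances, but, as the report notes, no documentation supported the amounts or percentages of these payments.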
Of the funds collected, $72,847 was paid to instructors, $19,327 was retained by the Athletic Association as a facility fee, and $1,903 was either retained by the Athletic Association or used for other payments not identified in our review. According to Athletic Association staff, $62,122 of the $72,847 paid to instructors was paid under fee-sharing arrangements with 6 instructors, 5 of whom are current or former Academy employees. Further, an Academy official described the Athletic Association’s retention of $19,327 as being essentially the net profit from the camps and clinics that was retained as a facility fee. However, no portion of the $19,327 was paid to the Academy for the use of the Academy’s facilities. Academy payroll activities contributed to three separate violations of the ADA. First, the Academy incurred approximately $525,000 more for salaries and benefits in fiscal year 2006 than the $23,512,000 appropriated for its salaries and benefits. The payments were for performance awards that Academy personnel earned in fiscal year 2006 that the Academy erroneously charged against fiscal year 2007 appropriations. Academy officials told us the amounts could not be corrected with prior year’s funds because the Academy lacked a sufficient unobligated balance in its fiscal year 2006 salaries and benefits appropriation to transfer the charge from the fiscal year 2007 appropriations. This violation of the ADA was included in the Secretary’s March 9, 2009, reports to the President and the Congress, which covered multiple ADA violations. Second, in March 2009 the Secretary of the Department of Transportation reported to the President and the Congress that the Department violated section 1342 of the ADA, which prohibits the acceptance of voluntary services and the employment of personal services.
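The fiscal year 2008 camp and clinic fee breakdown reported above can be verified arithmetically. The sketch below uses only the amounts stated in the report; the three components account for the full $94,077 collected.

```python
# Arithmetic check of the athletic camp and clinic fee breakdown (FY2008).
# All amounts are taken directly from the report.
fees_collected = 94_077
paid_to_instructors = 72_847
facility_fee_retained = 19_327
other_or_unidentified = 1_903

# The three components should account for all fees collected.
assert paid_to_instructors + facility_fee_retained + other_or_unidentified == fees_collected

# Portion of instructor payments made under fee-sharing arrangements.
fee_sharing_payments = 62_122
non_fee_sharing = paid_to_instructors - fee_sharing_payments
print(f"Instructor payments outside fee-sharing arrangements: ${non_fee_sharing:,}")  # $10,725
```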
Specifically, it determined the Academy paid over $4 million in both fiscal years 2006 and 2007 under agreements with the FCO for illegal personal services from the Academy’s NAFIs that were provided by as many as 90 employees who performed exclusively Academy functions and reported to Academy supervisors. These expenses were recorded as contracted services in the Academy’s books and records. The Secretary concluded that many agreements called for the employment of personal services, which are characterized by an employer-employee relationship. For example, an agreement between the FCO and the Academy for information technology services, dated November 14, 2006, and as modified through August 16, 2007, provided that the Academy would pay FCO $941,681 during fiscal year 2007 for services described in the agreement as professional services to the Department of Information Technology, and administrative support services. A supporting schedule to the agreement detailed the annual salaries for 11 staff by name, the general schedule (GS) equivalent grade for each staff member except 1, the amounts for the salaries of 2 NAFI contractors, and amounts for fringe benefits and cost of living adjustments. The information technology agreement covered all the staffing needs for the Academy’s Department of Information Technology except for one individual. Thus, through this agreement the Academy paid 100 percent of the salary and benefit costs for all 11 FCO staff and the full cost of the NAFI contractors listed. Each of the staff covered by the agreement with the FCO performed Academy functions under the supervision of a government employee, but the expense for their services was classified as contract services and not as payroll.
A similar agreement for services related to athletics, dated November 14, 2006, as modified through August 16, 2007, between the FCO and the Academy provided that the Academy would pay the FCO $481,132 during fiscal year 2007 for services described in the agreement as professional services to the Academy’s Department of Athletics. The supporting schedule to the agreement detailed the annual salary amount for 32 staff by name and amounts for fringe benefits and cost of living adjustments. The Academy was responsible for 40 to 100 percent of the total cost by individual. These expenses were also classified as contract services and not as payroll. We asked Academy and MARAD officials for an analysis supporting the portion of the payroll that was assigned to the Academy under the agreements for information technology and athletics services. We were told by the Academy’s Assistant CFO and the MARAD CFO that there was no overall analysis that would support the distribution of amounts between the Academy’s appropriated funds and NAFI expense for the payroll covered by any of the agreements between the Academy and the FCO. In addition to the issues discussed above, the Secretary reported in the March 2009 report to the President and the Congress that the Academy also violated section 1342 of the ADA over the past 4 years by employing about 50 adjunct professors under illegal personal services contracts valued at $2.4 million. The Academy funded these services out of the Academy’s fiscal year appropriations that were unavailable for salaries. We found that a weak overall control environment and the flawed design and implementation of internal controls were the root causes of the Academy’s inability to prevent, or effectively detect, the numerous instances of improper and questionable sources and uses of funds discussed previously.
Specifically, we found the Academy lacked an accountability structure that clearly defined organizational roles and responsibilities; policies and procedures for carrying out its financial stewardship responsibilities; an oversight and monitoring process; and periodic, comprehensive financial reporting. We found little evidence of awareness of, or support for, strong internal control and accountability across the Academy at all levels, and we found that risks, such as those that flow from a lack of clear organizational roles and responsibilities and from significant activities with affiliated organizations, were not addressed by Academy management. The internal control weaknesses we identified were systemic and could have been identified in a timely manner had Academy and MARAD management had in place a more effective oversight and monitoring regimen. Further, we found that the Academy did not routinely prepare financial reports and information for use by internal and external users that could have helped to identify the improper and questionable sources and uses of funds. An entity’s organizational accountability structure provides the framework within which its activities for achieving its mission objectives are planned, executed, and controlled. The process of identifying and analyzing risk is a critical component of effective internal control. GAO’s Internal Control Implementation Tool provides that management should periodically evaluate its organizational structure and the risks posed by its reliance on related parties and the significance and complexity of the activities it undertakes. Further, as discussed previously, one of the primary requirements of the ADA is to establish accountability for the obligations and expenditure of federal funds. In carrying out its mission operations, the Academy has close relationships with its 14 affiliated NAFIs and 2 foundations.
Therefore, it is important for the Academy to recognize and appropriately manage the risks posed by the organizational and transactional relationship between it and its NAFIs. These risks and the volume of activities between the Academy and the NAFIs should have signaled to Academy management that there was a need for strong oversight and accountability over these activities and relationships. Our review indicated that 11 NAFIs do not have approved governing documents, such as charters and by-laws, and the remaining 3 NAFIs with approved governing documents perform some duties and functions which fall outside of the narrow scope of authority set out in those documents. Further, the relationships between the Academy and its 14 NAFIs are complex, and we found that they often involve numerous financial transactions, the business purpose of which is frequently not readily apparent. As such, it is not always clear where the respective responsibilities of the Academy and its NAFIs begin and end. In addition, we found that the Academy did not address the risks posed by its organizational structure, including not establishing a system of checks and balances over the sources and uses of funds with its NAFIs. Further, the inappropriate practices and improper use of Academy resources by Academy managers that we found occurred and continued for years. For example, the collection of questionable midshipmen fees for hospital services, among others; the accumulation of excess fees “off-books” in commercial bank accounts for discretionary or “rainy day” purposes; and the preserving or “parking” of Academy appropriated funds with the FCO all occurred within a culture of lax accountability involving both Academy and NAFI management that was accepting of these types of activities. Further, the risks posed by the Academy’s relationship with its NAFIs led to improper transactions.
For example, as previously discussed in this report, GMATS provided a percentage of its profits each year to the FCO for the benefit and use of the Academy. However, there was no agreement covering these transactions. We also found insufficient review of the Academy’s use of GMATS funds and no indication that there was consideration of the legality or appropriateness of those transactions. There was also insufficient consideration of the legal and internal control ramifications of Academy agreements with the FCO for personal services. As previously discussed in this report, the services provided by these agreements totaled over $4 million per year, which represented about 17 percent of the annual Academy appropriation for salaries and benefits. Also, the Academy did not provide us with information on the authority for establishing the prior years’ reserves, or the rules, policies, and procedures for operation of the reserves, including, for example, specifics on authorized uses of the funds. Standards for Internal Control in the Federal Government provides that for an agency to run and control its operations, it must have relevant, reliable information, both financial and non-financial. For example, those charged with governance should have timely information on the amount and sources of the Academy’s resources. This includes information on the Academy’s appropriated funds as well as funds it receives from other sources, such as midshipmen fees and receipts from affiliated organizations for goods and services provided to them by the Academy. If such information had been produced routinely by the Academy and made available to decision makers and those charged with governance, they may have identified red flags that signaled the need for attention. 
For example, financial reports for the Academy that provided detailed financial information may have signaled the need for inquiry as to the reasons for such things as the Academy annually paying approximately $4 million from appropriated funds for contracted personal services and reflecting such expenses as other than payroll in its books and records. We found that for fiscal years 2006 and 2007, the Academy did not routinely prepare financial reports separately presenting information on all its financial activities, including its sources and uses of funds, and amounts due to and from others. The Academy’s activities are included in MARAD’s financial reports, but its activity and balances are not separately identified. As a result, users of MARAD’s financial reports could not readily identify the sources and uses of funds attributable to the Academy or the amounts due to and from others by the Academy. Such information is typical in financial reports and statements. We found that the Academy prepared and reported selected financial information from time to time for use by its managers. However, Academy officials told us that such reports were sporadic, unreliable, and were not used for decision making. For example, the head of the Academy’s Department of Information Technology told us that, among other things, the expense and obligation information that he received was typically not timely and that the information provided to him was inaccurate and could not be relied upon. An Assistant Superintendent told us that he did not typically receive financial information on the significant business activities that he was responsible for, including a $6 million, 5-year contract for medical services with a local hospital. We also found that comprehensive financial reports on Academy activities and balances were not routinely prepared and made available for review by Academy or MARAD management. 
The Academy did not fully comply with a legal requirement to annually provide the Congress with a statement of the purpose and amount of all expenditures and receipts. We reviewed the reports submitted to the Congress for fiscal year 2008 and found that the reports included some, but not all expenditure and receipt information. For example, the reports included information on gifts and bequests received and tuition receipts by GMATS. However, the reports did not include any information on gifts and bequests received by the Academy and paid to others, receipts and expenditures of GMATS, or midshipmen fees collected or expenditures made from the fees collected by FCO. The inquiry and analysis necessary to prepare and file a complete report may have provided information to address the issues we discussed previously in this report involving GMATS and midshipmen fees. The MARAD CFO told us in August 2008 that the Academy would take actions to include information for all NAFIs and midshipmen fee activities in future reports to the Congress. However, we were subsequently told that such information was not included in the May 2009 report that accompanied the Department’s budget justification document because the necessary analysis had not been completed. MARAD officials subsequently told us that they would submit an amended report with this data. Further, we found that the Department did not comply with a 1994 legal requirement to annually report to Congress any changes in midshipmen fee assessments for “any item or service” in comparison with fees assessed in 1994. We identified changes in the nature and the amount of fees collected by the Academy from 1994 forward that were not reported by the Department to the Congress. A MARAD official told us that changes in the fees had occurred since 1994, but he did not know why the reports had not been filed. 
Had changes in midshipmen fees over the last 15 years been reported to the Congress, red flags may have been raised about the increases and the total amount of midshipmen fees being charged that could have been addressed by those charged with oversight and monitoring. Further, a systematic process to identify changes in midshipmen fees from year to year and to report the changes to those officials charged with reporting to the Congress on these matters may have functioned as an important early detection control. Standards for Internal Control in the Federal Government provides that an entity’s control environment should include management’s framework for monitoring program operations to ensure its objectives are achieved. However, the absence of effective oversight by MARAD contributed directly to the opportunity for improper practices and questionable activities and payments and for the continuation of such practices over long periods of time without detection. Our review found a number of instances in which effective oversight procedures could have helped identify and address the Academy fund control deficiencies we discussed previously. For example, we found that MARAD did not have or did not enforce basic prevention and detection controls such as requiring periodic financial reports of Academy’s sources and uses of funds or performing high level analytical reviews of reported revenues and expense of the Academy. Also, MARAD did not enforce the existing policies for monitoring of NAFI activities, such as the requirements for submission and review of annual audited financial statements for each NAFI. We found a wide range of activities between the Academy and its 14 NAFIs that lacked transparency and for which there was insufficient review and consideration by Academy and MARAD officials.
Some of these activities were reflected in the Academy’s books and records, and some were apparent only from looking beyond the form of the transaction to find underlying cross subsidies and barter arrangements. For example, we found there were no independent reviews, either by the Academy, by MARAD officials, or by both, conducted before entering into agreements for training services that were provided to external federal agencies by GMATS and not the Academy. Our analysis of costs charged against the Academy’s no-year capital improvement appropriation identified some costs that were recorded as repairs and maintenance expenses that appeared to represent capitalizable assets. For example, under the no-year capital improvement appropriation, we identified $779,731 of recorded expenses in 2007 for payments to one vendor for items of furniture and equipment. The MARAD CFO told us that he was aware that timely reviews were not performed of the Academy’s expenses in either 2006 or 2007. Such reviews are important because of the large amount of capital improvement projects at the Academy and could have identified items that should have been capitalized with necessary adjustments made before the books were closed for the year and financial and budgetary reports prepared. The Academy received $15.9 million and $13.8 million for fiscal years 2006 and 2007, respectively, in no-year appropriations for its capital improvement projects. At our request MARAD reviewed selected categories of expenses for fiscal years 2006 and 2007 and identified $3,380,528 for 2006, and $1,695,670 for 2007 (including the $779,731 described above) that should have been capitalized as assets. The payments were appropriately funded using the Academy’s no-year appropriations. These officials told us that adjustments to correct for the errors of $5,076,198 were made during fiscal year 2009. 
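The capitalization corrections described above can be verified with simple arithmetic. The sketch below uses only amounts stated in the report: the two fiscal years’ misclassified expenses sum to the $5,076,198 corrected during fiscal year 2009, and the fiscal year 2007 amount includes the $779,731 paid to a single vendor.

```python
# Cross-check of the capitalization corrections described in the report.
misclassified_fy2006 = 3_380_528   # recorded as expense, should have been capitalized
misclassified_fy2007 = 1_695_670   # includes the $779,731 furniture/equipment payments

total_adjustment = misclassified_fy2006 + misclassified_fy2007
assert total_adjustment == 5_076_198       # amount corrected during FY2009

single_vendor_fy2007 = 779_731
assert single_vendor_fy2007 < misclassified_fy2007  # one component of the FY2007 figure

print(f"Total corrected during FY2009: ${total_adjustment:,}")
```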
MARAD also identified additional expenses of $1,459,103 and $1,972,622 for 2006 and 2007, respectively, which were improperly funded with the no-year capital improvement appropriation. In June 2009, these officials told us that adjustment to correct for these errors would also be considered before the close of fiscal year 2009 in conjunction with the other matters that we identified in this report that may require adjustment to the Academy’s appropriation accounts. The Academy lacked adequate procedures and controls to maintain effective accountability over the amounts charged to midshipmen and to ensure that midshipmen fees collected were used only for their intended purpose: covering the costs of goods or services provided to the midshipmen that are generally of a personal nature. The Academy has no policy on what midshipmen fees activity and balances should be reflected in its official records and reports or what is properly excludable. As discussed previously, these deficiencies resulted in the Academy’s charging midshipmen fees for items that were not of a personal nature and in amounts that were in excess of the related expenses for the goods or services. Further, the treatment of midshipmen fee activities “off-book” did not provide necessary accountability for the collection and use of the fees. We also found that the FCO’s records did not consistently support the activity in the midshipmen fee accounts. DRM, FCO, and Academy staff and officials, as well as the Academy’s Assistant CFO, informed us that the support we requested for specific transactions could not be located, including memorandums from staff and officials describing or authorizing fees or supporting amounts collected or paid. We were also told that the activities reflected in the bank accounts that held the prior years’ reserves were not reconciled to FCO records for any month in the 3 years covered by our review.
We found that reports provided to us for monthly activity (increases and decreases) in reserve balances for each of the separate categories did not always reflect complete information on the sources and uses of the reserves. For example, we found that the FCO’s September 2007 activity report for the prior years’ reserves account included transactions that reduced the reserve account balances for 4 of the 8 reserve subaccounts by a total of $100,000, but did not identify the payee or other information on the use of funds. FCO staff told us that this difference was due to an error. The Academy’s Assistant CFO told us that no further documentation or explanation for this activity was available. As indicated, the FCO was responsible for paying bills using midshipmen fees that were presented for processing by officials with responsibility for Academy departments such as Health Services and Information Technology as well as requests for payments from the Academy’s Superintendent and other officials. However, we found that FCO staff did not appropriately question the items presented for payment to determine the sufficiency of the support for the payment that was requested. The Academy entered into agreements to provide training services to other federal agencies that were provided by GMATS and not the Academy. Federal accounting standards provide that an entity should recognize revenue and expenses when the entity provides goods or services to another entity in an exchange, such as by contracting to provide training to another entity. However, we found that the Academy recognized revenue and expenses even though it was not a party to the exchange of services and resources. These improperly recognized revenues and expenses were reflected in MARAD’s budget and financial reports. Further, the Academy paid GMATS for the funds received from other federal agencies when reviewing and approving officials did not have proper support for the payments.
A summary of the revenue and expenses that the Academy recorded for transactions between GMATS and other federal agencies is shown in table 7. In addition, the Academy did not provide proper accountability for the acceptance and use of annual contributions from GMATS by using another NAFI, the FCO, as recipient of the funds on behalf of the Academy. Neither the receipt nor the use of those funds was reflected in the Academy’s accounting records. Further, the amounts accepted for the Academy by the FCO from GMATS were not supported by appropriately detailed billings or analysis from the Academy to GMATS. Instead, the amounts of contributions paid from GMATS to the Academy were unilaterally determined by GMATS and were paid to the FCO and, at times, directly to vendors on behalf of the Academy. Federal accounting standards provide that entities should establish accruals only for amounts expected to be paid as a result of transactions or events that have already occurred. Further, federal appropriation law provides that such accruals, which are legal obligations, must represent a bona fide need of the agency for the fiscal year in which the accrual is recognized and that there must be appropriations available to charge. However, we found the Academy inappropriately recorded over $389,000 in obligations during fiscal years 2006 and 2007. Academy officials accomplished these transactions by preparing agreements between the Academy and FCO using the Department’s form MA 949, Supply, Equipment or Service Order/Contract. We also found unauthorized and unsupported loans to the FCO from the Academy funds that were improperly “parked” with the FCO. The Academy lacks adequate controls to prevent these improper transactions. We found the Academy lacked policies and procedures and adequate internal controls over the use of Academy training vessels.
For example, controls did not specify required documentation or approval for payments with respect to the GMATS’s use of the Academy’s Kings Pointer during fiscal years 2006 and 2007, and the related transfer of funds to the SP&C NAFI. GMATS would pay the Academy for the full amounts billed by SP&C. However, the Academy would pay a portion of the funds received from GMATS to the SP&C. Academy payments to the SP&C for the use of the Kings Pointer, totaling $217,848 for the 2-year period of our review, were questionable in that (1) they were determined on a case-by-case basis by the SP&C management and (2) no supporting documentation was provided for these payments. We also found that the usage rates for use of the Academy’s training vessels were not supported and were not based on consideration of current costs of operation. Billings to others for the use of government-owned property should be made by the government agency, in this case the Academy, that owns the property. The SP&C’s billing to others for the use of Academy-owned vessels and directing how much of the usage fees the FCO should remit to the Academy and to itself demonstrate how intertwined the activities and personnel of the Academy’s Waterfront Department were with those of the SP&C. Further, these activities, along with the Academy’s payment of funds to the SP&C without sufficient support for those payments, illustrate the lack of control over the source and use of the Academy’s financial resources. We were also told that the underlying study and analysis to determine hourly usage fees charged for Academy marine asset use during fiscal years 2006 and 2007 was performed in 1996 or 1997. However, we were told that supporting documentation was not retained either for the initial rate study or for the rates in the updated 2004 and 2008 rate booklets.
We also found that the hourly rates per the 2008 rate booklet did not change from those in the 2004 rate booklet and had not changed from those used in 1996-1997. Consequently, the Academy has no assurance that the usage fees cover the full cost of operating the Kings Pointer and other Academy-owned boats. Fees for the use of government-owned property should be the property of the agency that holds it, in this case the Academy. However, we found the Academy and its Athletic Association NAFI lacked policies and procedures and other internal controls to properly account for the uses of fees collected by the Athletic Association from conducting athletic camps and clinics using the Academy’s athletic facilities. We found that the lack of controls over Academy payroll activities resulted in the overexpenditure of payroll in relation to appropriations and in arrangements for illegal personal services. Also, for fiscal years 2006 and 2007, approximately half of the Academy’s annual appropriations were designated for payroll; however, we found that internal controls over payroll were inadequate and did not reflect consideration of the limits on annual appropriations or the risks posed by errors or weaknesses in the administration of payroll activities. For example, Academy internal controls did not prevent improper payroll-related transactions that violated the ADA. Specifically, MARAD stated that challenges in working with MARAD’s own payroll process and systems contributed to delays in determining actual payroll expenses for Academy employees. The payroll for these federal employees is processed by the Academy’s DRM using MARAD’s existing arrangement with another federal agency as the payroll servicer. Further, the Academy used NAFI employees performing work for the Academy under Academy employees’ supervision to assist in carrying out Academy mission functions.
The FCO and other NAFIs would hire staff as employees of their own organizations and then contract with the Academy for a fee, which the NAFIs then used to pay the payroll and related expenses of the NAFI staff. Annually, the Academy would execute agreements for the NAFIs to provide the Academy with services, using one of the Department’s standard forms designed for use with external parties (MA 949, Supply, Equipment or Service Order/Contract). These expenses were recorded as contracted services in the Academy’s books and records. Academy officials gave insufficient consideration to the legal and internal control ramifications of these personal services agreements. The Administrator reported to the Deputy Secretary in his February 2008 report that the relationships between the Academy and the individual employees appeared to constitute personal services relationships, reflecting an employer-employee relationship rather than an independent contractual one. The expenses for the services provided under these agreements totaled over $4 million per year, which represented about 17 percent of the annual Academy appropriation for salaries and benefits.

Over the course of our review, we found that various actions were taken, and were in process, that were intended to improve the Academy’s and its affiliated organizations’ internal controls. For example, on October 1, 2007, MARAD established the Academy Fiscal Oversight and Administrative Review Board (Oversight Board). The Oversight Board is chaired by the MARAD CFO and is charged with providing fiscal oversight and administrative management of the Academy in coordination with the Maritime Administrator and other MARAD and Academy officials. Another significant action was the creation in July 2008 of the position of Assistant Chief Financial Officer for the Academy, with direct reporting responsibility to the MARAD CFO. This position was initially temporary but was made permanent in March 2009.
This position provides for a senior financial official at the Academy to conduct oversight and monitoring of Academy financial activities on a real-time basis. This action, combined with much-needed organizational support by MARAD officials, provides an important signal emphasizing the importance of financial accountability. MARAD also subsequently submitted a legislative proposal to Congress seeking authority to convert the NAFI positions to civil service employment positions. In the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009, Congress provided the Administrator with authority to appoint current NAFI employees to competitive civil service positions for terms of up to 2 years. Further, MARAD submitted a legislative proposal to Congress seeking statutory authority to enter into personal services contracts with part-time adjunct professors. In the same act, Congress provided the Administrator with temporary authority for the 2008-2009 academic year to contract with up to 25 individuals to provide personal services as adjunct faculty.

We also found that the Department and MARAD made a number of improvements in their controls during the course of our review. For example, following discussions with the Department’s Chief Financial Officer, the MARAD CFO, and the Inspector General and staff during October 2008, the MARAD CFO shortly thereafter took steps to secure and protect the accumulated prior years’ balances—held in commercial bank accounts—of midshipmen fees, which totaled approximately $1 million, as well as excess funds from the current year’s fees, which also may be as much as $1 million. We also found that action has been taken or is under way on a number of other important issues, including the following:

- MARAD directed the Academy to stop facilitating reimbursable contracts on behalf of GMATS.
- A billing methodology for certain services provided by the Academy to GMATS is under development.
- The use of the FCO to obtain over $4 million a year in illegal personal services was discontinued in 2008.
- MARAD is working with Academy officials to address the inappropriate commingling of activities that we describe in this report involving the Academy athletics and waterfront departments and certain NAFIs.
- In October 2008, the Maritime Administrator announced the selection of a new Superintendent.

We met with the Superintendent, the MARAD CFO, and the Academy’s Assistant CFO to discuss the significant flaws in the Academy’s controls and the business risks that our work was identifying. We also communicated our view that the Academy should aggressively move forward with change efforts and not wait for a formal report from us with targeted recommendations for action. The Superintendent agreed with our suggestions.

On March 9, 2009, the Secretary reported several violations of the ADA at the Academy to the President, the Congress, and the Comptroller General, as required by the act. The Secretary estimated that the multiple violations totaled as much as $20 million. Further, the Secretary reported that corrective and disciplinary action had been taken with respect to the officials responsible for the violations and that MARAD and the Academy had revised internal control procedures and had taken, or had under way, other actions to improve internal controls at the Academy.

Finally, the Omnibus Appropriations Act, 2009, placed certain restrictions and limitations on the use of appropriations made for the Academy for fiscal year 2009. For all apportionments made (by the Office of Management and Budget) of these appropriations for the Academy, the act required the Secretary to personally make all allotments to the MARAD Administrator, who must hold all of the allotments.
In addition, the act conditioned the availability of 50 percent of the amount appropriated on the Secretary’s, in consultation with the MARAD Administrator, completing and submitting to the congressional appropriations committees a plan on how the funding will be expended by the Academy.

The problems we identified concerning improper or questionable sources and uses of funds involving the Academy and its affiliated organizations, including the known and possible violations of the ADA described in this report, undermine the Academy’s ability to carry out its basic stewardship responsibilities and to comply with the ADA and other legal and regulatory requirements, and may also impair its ability to efficiently achieve its primary mission—to educate midshipmen. These problems can be attributed to a weak overall control environment and the flawed design and implementation of internal controls. Revelations of such activities call into question the Academy’s fulfillment of its stewardship responsibilities and signal failures of oversight and governance. Moreover, such activities reflect unmitigated risks posed by the Academy’s close organizational and transactional relationships with its NAFIs, including the lack of clearly defined roles and responsibilities. If such improper and questionable activities are not prevented or detected in a timely manner, they may adversely affect the Academy’s credibility.

The Academy, MARAD, and the Department have begun important steps to improve the control environment and address internal control weaknesses at the Academy, including new leadership at the top and newly energized oversight and monitoring practices. However, a comprehensive strategy for addressing these weaknesses and establishing internal control policies and procedures across virtually all aspects of the Academy’s financial activities is not yet in place.
Further, given the amount of improper and questionable uses of funds detailed in this report, MARAD and the Academy should consider recovering funds that were improperly paid. Vigilance by MARAD and the Department in their oversight and monitoring of the Academy, and greater transparency in the Academy’s relationships and transactions with its affiliated organizations, will be crucial to achieving effective accountability over the Academy’s funds and other resources. Sustained commitment to sound accountability practices by leaders and management at the Department, MARAD, and especially at the Academy will be critical to long-term success.

We make 47 recommendations to the Department of Transportation directed at improving internal controls and accountability at the Academy and at addressing issues surrounding the improper and questionable sources and uses of funds.

We recommend that the Secretary of the Department of Transportation take the following actions. To determine whether the Academy complied with the ADA:

- Determine whether legal authority exists to retain payments to the Academy from GMATS, both in Academy appropriations accounts and in commercial bank accounts of affiliated organizations, and if not, adjust the Academy’s appropriations accounts to charge available Academy appropriations and expense accounts for the amount of official Academy expenses that were paid by funds received from GMATS or paid directly by GMATS on behalf of the Academy. To the extent that insufficient appropriations remain available for these expenses, report ADA violations as required by law.
- Determine the amount of midshipmen fees that were used to cover official Academy expenses without legal authority to do so and adjust the Academy’s accounts, as necessary, to charge available appropriations for such expenses. To the extent that insufficient appropriations remain available, report ADA violations as required by law.
To provide reasonable assurance that the Academy will comply with the ADA and other applicable laws and regulations:

- Perform a review of the funds control processes at the Academy and take actions to correct any deficiencies that are identified.

We recommend that the Secretary of the Department of Transportation direct the Administrator of MARAD, in coordination with the Superintendent of the Academy, to take the following actions. To improve the design and operation of the internal control system at the Academy:

- Establish a comprehensive risk-based internal control system that addresses the core causes and the challenges to proper administration that we identify in this report, including the risks and challenges that flow from the close organizational and transactional relationships between the Academy and its affiliated organizations, and implement internal controls that address the elements of our Standards for Internal Control in the Federal Government, including the role and responsibilities of management and employees to establish and maintain a positive and supportive attitude toward internal control and conscientious management, and the responsibility of managers and other officials to monitor control activities.
- Implement a program to monitor the Academy’s performance, including (1) reviews of periodic financial reports prepared by Academy officials and (2) reviews of the Academy’s documentation and analysis from its review of its periodic financial reports and associated items, such as the results of its follow-up on unusual items and balances.
To improve internal controls over activities with its affiliated organizations, we recommend that the Academy take the following actions:

- Perform a comprehensive review, and document the results of an analysis, of the risks posed by the Academy’s organizational structure and its relationships with each of its affiliated organizations, including (1) addressing the inherent organizational conflicts of interest that we identify in this report regarding Academy managers having responsibility for activities with affiliated organizations that conflict with the managers’ Academy responsibilities and (2) determining whether the current organizational structure should be maintained or whether an alternative organizational structure would be more efficient and effective, while at the same time reducing risk and facilitating improvement in internal control and accountability.
- Require that all affiliated organizations have approved governing documents and that the functions they will perform in the future are consistent with their scope of authority.
- Perform an analysis to identify each activity involving the Academy and its affiliated organizations and, for each activity, determine the business purpose; the reason for Academy involvement; the business risk that the activity presents; and whether the activity complies with law, regulation, and policy.
- Design a robust system of checks and balances for each activity with each affiliated organization that is consistent with the business risk that the activity presents, considering, among other things, the nature and volume of the activities with each affiliated organization.
- Establish formal written policies and procedures for each activity involving the Academy and an affiliated organization and specify for each activity the documentation requirements, the necessary approvals and reviews, and the requirements for transparency (e.g., require regular financial reports for each activity for review and approval by Academy management and MARAD officials charged with oversight).
- Establish internal controls for each activity with each affiliated organization, including (1) the planned timing of performance of the control activity (e.g., periodic reconciliations of billings with collections); (2) the responsibilities for oversight and monitoring and the documentation requirements for those performing oversight and monitoring functions; and (3) the necessary direct, compensating, and mitigating controls for each activity.

To improve accountability and internal controls over midshipmen fee activities and to resolve potential issues surrounding the past collections and uses of midshipmen fees, the Academy should take the following actions:

- Perform an analysis to identify all midshipmen fee collections for fiscal years 2006, 2007, and 2008, including identifying those items for which the fee collected is attributable to (1) an activity between the midshipmen as customer and a NAFI as service provider (e.g., collections for haircuts) and (2) an activity between the midshipmen as customer and the Academy as service provider (e.g., collections for personal computers).
- Determine whether (1) the fee collected for each item was for a personal item of the midshipmen and consistent with law, regulation, and policy for such collections; (2) the amount of the fee collected for each item was properly supported, based on, among other things, an analysis of the cost to the Academy for the good or service; and (3) the amount collected exceeded the cost of the good or service.
- Determine whether any liability may exist for collections that (1) are not consistent with law, regulation, and policy as personal items of the midshipmen; (2) were not properly supported, in whole or in part; and (3) exceeded the cost to the Academy for the good or service.
- Perform an analysis to identify all payment activity and other uses of the funds collected for midshipmen fees for fiscal years 2006, 2007, and 2008, including reviewing payment activity to identify the payees, amounts, and other characteristics of the uses of the funds collected and conducting a detailed review of payment activity and other uses (e.g., transfers to prior years’ reserves) for items considered high risk.
- Review all questionable payments and other questionable uses of funds, such as transfers to commercial checking accounts for the excess of collections over funds used, as well as the questionable payments that we identify in this report. For each payment and other use of funds that is determined to be for other than a proper governmental purpose and that is not consistent with law, regulation, and policy, consider pursuing recovery from the organization or individual that benefited from the payment.
- Establish policies and procedures that require those charged by the Academy with responsibility for midshipmen fee collections and payments to (1) maintain detailed accounting records for all midshipmen fee activity that reflect accurate and fully supported information on collections, payments, and other activity, consistent with document retention practices; (2) implement written review and approval protocols for all midshipmen fee collections and uses of funds consistent with policies and procedures established by the Academy and MARAD; and (3) provide monthly detailed reports of all midshipmen fee activity, in the aggregate and by item, to Academy and MARAD officials.
- Establish policies and procedures, and perform the necessary analysis, to support the annual reports to the Congress required by law to address changes in “any item or service” in midshipmen fees from those existing in 1994.
- Establish written policy and criteria for determining the baseline items that are properly due from midshipmen for personal items, the amount of fees to be collected (based on underlying studies), and the approved uses of the fees collected.
- Establish written policy for the underlying analysis that is required, and the approvals that must be obtained, before changes are made in the baseline of midshipmen fee items, in the amount of such fees, or in the approved uses of the fees collected.
- Utilize the information obtained from the analysis of midshipmen fees collected in prior years, and other work, to determine the amount of midshipmen fees that should be charged to midshipmen for personal items in subsequent years.
- Establish written policy for internal reviews of monthly reports of midshipmen fee activity and balances, identified anomalies, and questioned items, as well as the results from the associated follow-up.
- Perform an analysis to determine whether and, if applicable, the extent to which appropriated funds and midshipmen fees collected should be used to pay for contracted medical services.
To improve internal controls over financial information, the Academy should take the following actions:

- Implement financial reporting policies and procedures that, among other things, will provide visibility and accountability for Academy activities and balances to facilitate oversight and monitoring, including (1) periodic reporting of actual and budget amounts for revenues and expenses for the current and cumulative period; (2) periodic reporting, in detail, of amounts for activity and balances with affiliated organizations; and (3) identification of items of revenue and expense for each funding source, including annual and no-year appropriated funds and other collections.
- Implement comprehensive policies and procedures for the review of financial reports, including requiring reviews by the preparers of the financial reports as to their completeness and accuracy; evidence of departmental management reviews; and written records of identified anomalies and questioned items, as well as requirements for maintaining evidence of the results from associated follow-up on all identified anomalies and questioned items.
- Identify and evaluate the potential misstatements of amounts in the financial records of the Academy for fiscal years 2006, 2007, and 2008 to determine whether restatement or reissuance of budget and financial reports and statements prepared from those records is appropriate, including $5,076,198 of errors in accounting for repairs and maintenance expenses and capital additions, and $3,431,725 of expenses that were improperly funded with no-year capital improvement appropriations; $6,410,242 and $6,038,061 of recorded revenue and expenses, respectively, from GMATS training programs; amounts for midshipmen fee collections and payment activity, including effects on reported revenues, expenses, assets, and liabilities; and amounts for sources and uses of funds handled “off-book” that we identify in this report, including transactions in three Superintendent’s Reserves and with GMATS and FCO.
- Implement policies and procedures to obtain the information necessary to timely comply with the requirement identified in this report for annual reports to the Congress that provide all expenditure and receipt information for the Academy and its affiliated organizations.

To improve accountability and internal controls over the acquisition of personal services from NAFIs, and to resolve potential issues surrounding past personal services activities and payments, the Academy should take the following actions:

- Perform an analysis to identify the nature and full scope of personal services activities and the associated sources and uses of funds, including a review of all questionable payments, such as those that we identify in this report for personal services totaling more than $8 million for fiscal years 2006 and 2007.
- For each such personal services arrangement, (1) determine whether the amounts paid were consistent with the services received by the Academy; (2) quantify the amounts, if any, paid by the Academy for personal services that were not received by the Academy; and (3) document the decisions made with respect to any payments by the Academy for personal services that were not received, including decisions to seek recovery from other organizations for such amounts.
- Develop written policy guidance on acquiring services from NAFIs that complies with the requirements of law, regulation, and policy on the proper use of funds by the Academy.

To address funds held in commercial bank accounts of the FCO from prior years’ reserves and Superintendent’s Reserves, and to resolve issues surrounding the past collections and uses of excess midshipmen fee collections, the Academy should take the following actions:

- Perform an analysis to identify all activities in the prior years’ and other reserves, including all sources and uses of funds for fiscal years 2006, 2007, and 2008.
- Review all the questionable payments and other activity, including payments that we identify in this report, which according to FCO records total $605,347. For each payment that is determined to be for other than a proper governmental purpose and that is not consistent with law, regulation, and policy, consider pursuing recovery from the organization or individual that benefited from the payment.
- Investigate the unexplained $100,000 transaction(s) in September 2007 per the off-line or “cuff” accounting records maintained by the FCO and take actions as appropriate.
- Finalize actions to protect and recover Academy funds held in commercial bank accounts by the FCO from current and prior years’ midshipmen fees, which totaled approximately $2 million at September 30, 2008.
- Require that (1) bank reconciliations be prepared for all activity in the commercial bank accounts of the FCO used for these reserves during fiscal years 2006, 2007, and 2008; (2) documentation be prepared for all questionable items as well as the related follow-up; and (3) going forward, such bank reconciliations be timely prepared and independently reviewed by Academy staff with no direct involvement in the reconciliations or in the activity in the bank accounts.

To improve internal controls over activities with GMATS, the Academy should take the following actions:

- Perform an analysis to identify all activities between the Academy and the NAFI, GMATS, during fiscal years 2006 and 2007 and determine for each activity the nature of the activity; the amounts collected by the Academy, or by others for the benefit of the Academy; the nature and amounts paid, by the Academy or by others for the benefit of the Academy, from the funds collected; the business purpose; the reason for Academy involvement; and whether the activity complies with law, regulation, and policy. For each payment that is determined to be for other than a proper governmental purpose and that is not consistent with law, regulation, and policy, consider pursuing recovery from the organization or individual that benefited from the payment.
- Establish formal written policies and procedures that, among other things, specify the allowable activities and transactions between the Academy and GMATS and detail the necessary approvals and reviews required for each activity.
- Establish targeted internal controls for each direct and indirect activity between the Academy and GMATS.
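The bank reconciliation control recommended above amounts to matching bank-statement items against the books and flagging anything unmatched for documented follow-up. A minimal sketch of that matching step follows; the amounts and the `reconcile` helper are hypothetical illustrations, not drawn from Academy or FCO records.

```python
# Minimal sketch of a bank reconciliation: match bank-statement amounts
# against book (ledger) amounts and flag unmatched items for follow-up.
# All account data below are invented for illustration.
from collections import Counter

def reconcile(bank_items, book_items):
    """Return (matched, bank_only, book_only) lists of amounts."""
    bank, book = Counter(bank_items), Counter(book_items)
    matched = list((bank & book).elements())    # amounts appearing in both records
    bank_only = list((bank - book).elements())  # on the statement but not in the books
    book_only = list((book - bank).elements())  # in the books but not on the statement
    return matched, bank_only, book_only

bank_statement = [1_000.00, 250.50, 75.25, 75.25]
ledger = [1_000.00, 250.50, 75.25, 500.00]  # e.g., one deposit still in transit

matched, bank_only, book_only = reconcile(bank_statement, ledger)
# Each amount in bank_only or book_only would require documented follow-up
# and independent review, as the recommendation describes.
```

An independent reviewer, with no involvement in preparing the reconciliation or in the account activity, would then examine the unmatched items and the documentation of their resolution.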
To improve internal controls over accruals and to resolve potential issues surrounding the past “parking” of appropriated funds, the Academy should take the following actions:

- Perform an analysis to identify all activities involving accrual accounts used to “park” appropriated funds with the FCO, including all sources and uses of funds for fiscal years 2006 and 2007. For each payment that is determined to be for other than a proper governmental purpose, consider pursuing recovery from the organization or individual that benefited from the payment.
- Establish written policy guidance on the accrual of items of expense at year-end.
- Establish targeted internal controls that, among other things, provide the criteria for accruals, specify the documentation requirements for accruals, and provide management’s review and approval procedures.

To improve internal controls over the use of the training vessel Kings Pointer and other Academy-owned boats by others, the Academy should take the following actions:

- Perform an analysis to identify all activity involving the use of the Kings Pointer and other Academy-owned boats by others, including all sources and uses of funds for fiscal years 2006 and 2007. Identify and recover the cost of any unreimbursed nongovernmental uses, to the extent authorized by law. For each payment, including payments to affiliated organizations, that is determined to be for other than a proper governmental purpose and that is not consistent with law, regulation, and policy, consider pursuing recovery from the organization or individual that benefited from the payment.
- Establish written policies and procedures to govern the use of the Academy-owned training vessel Kings Pointer and other boats, including addressing issues for ships’ crews, insurance, security, billing procedures, and other responsibilities.
- Perform or contract out for a comprehensive usage-rate study to establish usage rates.
Such a study should include (1) consideration of the full cost to the Academy of the training vessel and other boats, including salaries and benefits of Academy personnel, major repairs, routine maintenance, non-routine maintenance and long-term repairs, fuel, and dockage; and (2) identification of indirect expenses and imputed costs as appropriate (e.g., depreciation).

- Establish policy for the timing and extent of the analysis required for periodic updates to the usage-rate study.
- In coordination with the Department or MARAD legal counsel, as appropriate, determine whether the Academy had the legal authority to retain and use any collections from the use of the Academy-owned training vessel Kings Pointer and other boats; otherwise, deposit them in the general fund of the U.S. Treasury.

To improve internal controls over camps and clinics operated by the Athletics Association NAFI or others on Academy property, the Academy should take the following actions:

- Perform an analysis to identify practices at the Academy involving camps and clinics operated by the Athletics Association or others using Academy property and other assets. Document the nature and scope of such activities, including all sources and uses of funds for fiscal years 2006 and 2007, and take corrective action on any improper transactions.
- Establish written policies and procedures for camps and clinics operated by the Athletics Association NAFI or others on Academy property.
- Establish targeted internal controls that include the approvals required; the costs to be recovered by the Academy; requirements (such as advance approval) for participation by Academy employees in the activities; and other matters of importance, such as insurance requirements, security, and required accountings to be provided to the Academy on the sources and uses of funds from each event.
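In essence, the recommended usage-rate study divides the full annual cost of operating a vessel, including imputed costs such as depreciation, by its annual operating hours to arrive at a supportable hourly rate. The figures below are hypothetical placeholders for illustration only; they are not Academy data.

```python
# Hypothetical illustration of a full-cost hourly usage rate, as the
# recommended rate study would compute it. All figures are invented.
annual_costs = {
    "crew_salaries_and_benefits": 850_000,
    "routine_maintenance": 120_000,
    "non_routine_maintenance_and_long_term_repairs": 200_000,
    "fuel": 90_000,
    "dockage": 60_000,
    "depreciation": 300_000,  # imputed cost, per item (2) of the study
}
annual_operating_hours = 1_200  # hypothetical hours of use per year

# Full-cost recovery rate: total annual cost divided by operating hours.
hourly_rate = sum(annual_costs.values()) / annual_operating_hours
```

With these placeholder figures the rate works out to $1,350 per hour; a rate booklet that is not periodically rebuilt from such current cost inputs, as the report found, gives no assurance that fees recover full operating cost.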
To improve internal controls over the processing of vendor invoices and the accounting for repairs and maintenance expenses and additions to capital assets, the Academy should take the following actions:

- Perform an analysis to identify the causes of the errors in recording as repairs and maintenance expenses amounts totaling $5,076,198 that should have been capitalized, and of the $3,431,725 of expenses that were improperly funded with the no-year capital improvement appropriation, during fiscal years 2006 and 2007.
- Establish written policies and procedures for repairs and maintenance expenses and capital asset additions that require (1) periodic reviews of recorded amounts for repairs and maintenance expenses and capital asset additions to identify and timely address issues requiring management attention and (2) correction of errors before financial reports are prepared from the books and records.
- Establish policies and procedures for periodic reporting of financial information for repairs and maintenance expenses and capital additions to assist users in monitoring these items as well as the funding sources (annual appropriations or no-year appropriations for long-term improvement projects).

We received written comments from the Department of Transportation on a draft of this report (see app. V). The Department stated that the Academy and MARAD have initiated many corrective actions to address the internal control weaknesses identified in our draft report and that management at the Academy, MARAD, and the Department take our findings and recommendations very seriously. The Department also stated that MARAD will produce a comprehensive strategy and corrective action plan to address all of the internal control weaknesses, as well as a detailed response to each recommendation. The Department also separately provided technical comments that we incorporated, as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. We will then send copies to other appropriate congressional committees, the Secretary of Transportation; the Administrator, Maritime Administration; and the Superintendent, United States Merchant Marine Academy. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2600 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. This report responds to your request that we study the internal control environment and selected activities and expenditures of the Academy and its non-appropriated fund instrumentalities (NAFIs), in addition to the oversight and monitoring practices by the Maritime Administration (MARAD), an operating administration of the Department of Transportation. Our specific objectives were to determine whether there (1) were any potentially improper or questionable uses of funds by the Academy, including transactions with its affiliated organizations; (2) was an effective control environment with key controls in place over the Academy’s sources and uses of funds, including transactions with its affiliated organizations; and (3) were any actions taken, under way, or planned to improve controls and accountability over the Academy’s funds and resources. 
To address the first two objectives, we analyzed whether the Academy’s policies and procedures were adequate to ensure that Academy funds were used as intended and for proper governmental purposes and assessed the Academy’s internal controls over its activity and balances against our Standards for Internal Control in the Federal Government, Internal Control Management and Evaluation Tool, Guide for Evaluating and Testing Controls Over Sensitive Payments, and Strategies to Manage Improper Payments. Specifically, we:

- reviewed laws, regulations, policies, and procedures over Academy operations and activities;
- reviewed the MARAD report and discussed the objectives, scope, and methodology of the internal control review with MARAD officials;
- interviewed selected Department, Department Office of Inspector General (OIG), MARAD, Academy, and NAFI staff and officials to obtain an understanding of (1) their roles and responsibilities; (2) the internal control environment at the Academy, including the Academy’s organizational structure and relationships to the NAFIs and management’s attitude toward and knowledge of internal controls; (3) the internal controls over selected Academy payments and activities with its affiliated organizations—the 14 NAFIs and 2 foundations; and (4) MARAD and Department practices for overseeing and monitoring the Academy; and
- obtained an understanding of the sources of funding for both the Academy and the NAFIs, including the appropriated funds of the Academy.
We obtained a database of Academy expenses at the transaction level covering fiscal years 2006 and 2007 and compared these data to amounts reported for the Academy by MARAD in the Department’s annual performance and accountability reports; compared the total amounts—MARAD including the Academy—in the database provided to us with the amounts in the statements of net cost that MARAD submitted to the Department; reconciled the MARAD Statement of Net Cost in the database to the Department’s audited financial statements by agreeing the net cost amounts for MARAD including the Academy that were reported in the audited financial statements and separately identified in consolidating statements of net cost schedules for the Department; analyzed Academy and NAFI payments, Academy collections of midshipmen fees, and funds from the FCO and GMATS to identify selected payments for further testing; and reviewed available documentation supporting selected Academy payment transactions and requested additional support and explanations from Academy and NAFI officials to justify the purpose of these transactions and the sources of funds used. To review the collection of current year’s fees from midshipmen and use of those fees for fiscal years 2006 and 2007, as well as prior years’ reserve activity for fiscal years 2006 to 2008, we analyzed the collection and payment activity reflected in records maintained by the FCO; requested and reviewed available support to justify the amounts collected from the midshipmen; interviewed Academy and NAFI officials with responsibility for midshipmen fee collections; discussed the results of our analysis with Academy and FCO officials and as appropriate requested additional information and explanations from these officials; and considered the support and responses we received to assess whether the collection and use of midshipmen fees were questionable. We identified numerous improper or questionable activities and uses of funds. 
However, the results of our work are not generalizable to the population of transactions as a whole because we selected transactions on a nonstatistical basis. We selected transactions that were significant to the Academy or the NAFIs and appeared to have a higher risk of being improper. Consequently, there may be other improper or questionable activities and transactions that our work did not identify. We reviewed the March 9, 2009, report of the Secretary to the President, the Vice President (as President of the Senate), the Speaker of the House, and the Acting Comptroller General to report several violations of the Antideficiency Act that occurred over several years and that the Department estimated totaled as much as $20 million. To address our third objective, we obtained relevant documentation on actions taken, under way, or planned, including the MARAD order establishing the Academy’s Fiscal Oversight and Administrative Review Board. During our review, we visited the Academy in Kings Point, New York, and MARAD and Department headquarters in Washington, D.C. We also held teleconferences with Academy officials in New York and MARAD and Department officials in Washington, D.C. We also reviewed prior OIG and GAO reports for items of possible relevance to MARAD and Academy activities and internal controls. We conducted this performance audit from June 2008 to August 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. 
The following presents the organizational environment in which the Academy operates and illustrates the nature and amount of some of the activity that occurred during fiscal year 2007 between the Academy and its affiliated organizations.

[Figure: fiscal year 2007 activity between the Academy and its affiliated organizations, showing payments to NAFIs and total expenses ($0.4)]

The ADA is one of the major laws in the statutory scheme by which the Congress exercises its constitutional control of the public purse. The ADA contains both affirmative requirements and specific prohibitions, as highlighted below. The ADA:

- Prohibits the incurring of obligations or the making of expenditures in advance or in excess of an appropriation. For example, an agency officer may not award a contract that obligates the agency to pay for goods and services before the Congress makes an appropriation for the cost of such a contract or that exceeds the appropriations available.
- Requires the apportionment of appropriated funds and other budgetary resources for all executive branch agencies. An apportionment may divide amounts available for obligation by specific time periods (usually quarters), activities, projects, objects, or a combination thereof. OMB, on delegation from the President, apportions funds for executive agencies.
- Requires a system of administrative controls within each agency, established by regulation, that is designed to (1) prevent obligations and expenditures in excess of apportionments or reapportionments; (2) fix responsibility for any such obligations or expenditures; and (3) establish the levels at which the agency may administratively subdivide apportionments, if it chooses to do so.
- Prohibits the incurring of obligations or the making of expenditures in excess of amounts apportioned by OMB or amounts of an agency’s subdivision of apportionments (i.e., “allotments”).
- Prohibits the acceptance of voluntary services or the employment of personal services, except where authorized by law. 
- Specifies potential penalties for violations of its prohibitions, such as suspension from duty without pay or removal from office. In addition, an officer or employee convicted of willfully and knowingly violating the prohibitions may be fined not more than $5,000, imprisoned for not more than 2 years, or both.
- Requires that for violations of the act’s prohibitions, the relevant agency report immediately to the President and to the Congress all relevant facts and a statement of actions taken with a copy to the Comptroller General of the United States.

The requirements of the ADA and the enforcement of its proscriptions are reinforced by, among other laws, the Recording Statute, 31 U.S.C. § 1501(a), which requires agencies to record obligations in their accounting systems, and the 1982 law commonly known as the Federal Managers’ Financial Integrity Act of 1982, 31 U.S.C. § 3512(c), (d), which requires executive agencies to implement and maintain effective internal controls. Federal agencies use “obligational accounting” to ensure compliance with the ADA and other fiscal laws. Obligational accounting involves the accounting systems, processes, and people involved in collecting financial information necessary to control, monitor, and report on all funds made available to federal agencies by legislation—including both permanent, indefinite appropriations and appropriations enacted in annual and supplemental appropriations laws that may be available for 1 or multiple fiscal years. Executive branch agencies use obligational accounting, sometimes referred to as budgetary accounting, to report on the execution of the budget. “[A] personal services contract is one that, by its express terms or as administered, makes the contractor personnel appear, in effect, government employees. FAR §§ 37.101, 37.104(a). The government is normally required to obtain its employees by direct hire under competitive appointment or other procedures required by the civil service laws. 
FAR § 37.104(a). Obtaining personal services by contract, rather than by direct hire, circumvents those laws unless Congress has specifically authorized acquisition of the services by contract. Id. Agencies may not award personal services contracts unless specifically authorized by statute to do so. FAR § 37.104(b).” Matter of: Encore Management, Inc., B-278903.2, Feb. 12, 1999.

In addition to the contact named above, staff members who made key contributions to this report include Robert Owens, Assistant Director; Lisa Brownson; F. Abe Dymond, Assistant General Counsel; Tony Eason; Frederick Evans; Tiffany Epperson; Jehan Abdel-Gawad; Thomas Hackney; Paul Kinney; Scott McNulty; and Meg Mills.
The U.S. Merchant Marine Academy (Academy), a component of the Department of Transportation's Maritime Administration (MARAD), is one of five U.S. service academies. The Academy is affiliated with 14 non-appropriated fund instrumentalities (NAFI) and two foundations. GAO was asked to determine whether there (1) were any potentially improper or questionable sources and uses of funds by the Academy, including transactions with its affiliated organizations; (2) was an effective control environment with key controls in place over the Academy's sources and uses of funds; and (3) were any actions taken, under way, or planned to improve controls and accountability. GAO analyzed selected transactions from fiscal years 2006, 2007, and 2008 to identify improper or questionable sources and uses of funds and reviewed documents and interviewed cognizant officials to assess the Academy's internal controls, and identify corrective actions to improve controls. GAO identified numerous instances of improper and questionable sources and uses of funds by the Academy and its affiliated organizations. These improprieties and questionable payments GAO identified demonstrate that, while MARAD and the Academy have been taking action to improve the Academy's internal controls, the Academy did not have assurance that it complied with applicable fund control requirements, including the Antideficiency Act (ADA). Further, the Academy had numerous breakdowns in its important stewardship responsibilities with respect to maintaining accountability over the receipt and use of funds. For example, GAO identified improper and questionable midshipmen fee transactions related to: (1) fee collections and uses of fees unrelated to goods and services provided to all midshipmen, (2) fee collections that exceeded the actual expense to the Academy for the goods or services, and (3) the use of accumulated excess midshipmen fees for improper and questionable purposes. 
GAO found that a weak overall control environment and the flawed design and implementation of internal controls were the root causes of the Academy's inability to prevent or effectively detect numerous instances of improper and questionable sources and uses of funds. Specifically, GAO found a lack of awareness of or support for strong internal control and accountability across all levels of the Academy, and found that risks, such as those that flow from a lack of clear organizational roles and responsibilities and from significant activities with affiliated organizations, were not adequately addressed. The internal control weaknesses GAO identified were systemic and could have been identified in a timely manner had Academy and MARAD management had a more effective oversight and monitoring regimen. For example, GAO found that the Academy did not routinely prepare financial reports and information for use by internal and external users. GAO found that various actions were taken and in process that were intended to improve the Academy's internal controls, including actions to address issues of accountability with its affiliated organizations. For example, a permanent position of Assistant Chief Financial Officer (CFO) for the Academy was established in March 2009 with direct reporting responsibility to the MARAD CFO. This action provides a senior financial official at the Academy with authority to conduct needed oversight and monitoring of financial activities on a real-time basis. Further, following discussions GAO had with Department and MARAD officials, the MARAD CFO took steps to secure and protect accumulated reserves held in commercial bank accounts of an affiliated organization. 
However, even though MARAD and the Academy have taken actions, much more needs to be done, including determining the amount of midshipmen fees that were used to cover official Academy expenses, performing a comprehensive analysis of the risks posed by the Academy's organizational structure and its relationships with its affiliated organizations, and establishing and implementing policies, procedures, and internal controls over many Academy activities.
Managed Accounts in Other Workplace Defined Contribution Plans and Individual Retirement Accounts (IRAs)

As managed accounts have gained popularity in 401(k) plans, there are indications that they may also be gaining popularity in government and non-profit workplace retirement savings plans, commonly referred to as 457 or 403(b) plans. Many of the providers we spoke to that offer managed accounts to 401(k) plans also offer services to other plans like these. In addition, some providers are starting to offer managed accounts in IRAs, and in particular rollover IRAs—when participants separate from their employer, they may decide to roll their funds into an IRA. One of these providers noted that it is easier to engage participants who use managed accounts through products such as IRAs, and there is more flexibility with investment options, even though the provider’s marketing costs may be higher. Under Title I of the Employee Retirement Income Security Act of 1974 (ERISA), as amended, employers are permitted to sponsor defined contribution plans in which an employee’s retirement savings are based on contributions and the performance of the investments in individual accounts. Typically, 401(k) plans—the predominant type of defined contribution plan in the United States—allow employees who participate in the plan to specify the size of their contributions and direct their assets to one or more investments among the options offered within the plan. Investment options generally include mutual funds, stable value funds, company stock, and money market funds. To help participants make optimal investment choices, an increasing number of plans are offering professionally managed allocations—including managed accounts—in their 401(k) plan lineups. 
Managed accounts are investment services under which providers make investment decisions for specific participants to allocate their retirement savings among a mix of assets they have determined to be appropriate for the participant based on their personal information. As shown in figure 1, managed accounts were first offered to 401(k) plans around 1990, but most providers did not start offering them until after 2000. Managed accounts differ from other professionally managed allocations, such as target date funds and balanced funds, in several key ways. Target date funds (also known as life cycle funds) are products that determine an asset allocation that would be appropriate for a participant of a certain age or retirement date and adjust that allocation so it becomes more conservative as the fund approaches its intended target date. Target date funds do not place participants into an asset allocation; instead, participants generally self-select into a target date fund they feel is appropriate for them based on the fund’s predetermined glide path that governs asset allocation. Balanced funds are products that generally invest in a fixed mix of assets (e.g., 60 percent equity and 40 percent fixed income assets). While target date funds manage the fund to reach a target date, managed accounts may consider other, more personalized factors such as a participant’s stated risk tolerance, even though they are not required to do so. As shown in figure 2, managed accounts may offer higher levels of personalization than other types of professionally managed allocations. Managed accounts are generally considered to be an investment service—not one of the plan’s investment options—while target date funds are considered to be investment options. In the latter, participants can invest all or a portion of their 401(k) plan contributions in a target date fund, but generally cannot directly invest in a managed account. 
Instead, the role of the participant is to enroll in the managed account service, or be defaulted into it, generally relinquishing their ability to make investment decisions unless they disenroll from, or opt out of, the managed account. As shown in figure 3, managed account providers decide how to invest contributions, generally among the investment options available in a 401(k) plan, and then manage these investments over time to help participants reach their retirement savings goals. By comparison, participants not enrolled in a managed account have to make their own decisions about how to invest their 401(k) plan contributions. DOL’s Employee Benefits Security Administration (EBSA) is the primary agency through which Title I of ERISA is enforced to protect private pension plan participants and beneficiaries from the misuse or theft of their pension assets. To carry out its responsibilities, EBSA issues regulations and guidance; investigates plan sponsors, fiduciaries, and service providers; seeks appropriate remedies to correct violations of the law; and pursues litigation when it deems necessary. As part of its mission, DOL is also responsible for assisting and educating plan sponsors to help ensure the retirement security of workers and their families. In 2007, DOL designated certain managed accounts as one type of investment that may be eligible as a qualified default investment alternative (QDIA) into which 401(k) plan fiduciaries may default participants who do not provide investment directions with respect to their plan contributions. 
DOL designated three categories of investments that may be eligible as QDIAs if all requirements of the QDIA regulation have been satisfied—these categories generally include: (1) an investment product or model portfolio that is designed to become more conservative as the participant’s age increases (e.g., a target date or lifecycle fund); (2) an investment product or model portfolio that is designed with a mix of equity and fixed income exposures appropriate for the participants of the plan as a whole (e.g., a balanced fund); and (3) an investment management service that uses investment alternatives available in the plan and is designed to become more conservative as the participant’s age increases (e.g., a managed account). DOL regulations indicate that plan fiduciaries who comply with the QDIA regulation will not be liable for any loss to participants that occurs as a result of the investment of their assets in a QDIA, including investments made through managed account arrangements that satisfy the conditions of the QDIA regulation. However, plan fiduciaries remain responsible for the prudent selection and monitoring of any QDIA offered by the plan. To obtain relief, plan fiduciaries must provide participants with advance notice of the circumstances under which plan contributions or other assets will be invested on their behalf in a QDIA; a description of the QDIA’s investment objectives, risk and return characteristics, and fees and expenses; and the right of participants to opt out of the QDIA, among other things. A 2012 survey of defined contribution plan sponsors by PLANSPONSOR indicated that managed accounts were used as a QDIA less than 5 percent of the time. Managed accounts are also offered as opt-in services by over 30 percent of defined contribution plan sponsors. Managed accounts can be offered as both QDIA and opt-in services, allowing the plan sponsor to choose which services to offer their participants. 
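The common thread in QDIA categories (1) and (3), an allocation designed to become more conservative as the participant's age increases, is in essence a glide path. A minimal sketch of that idea, using purely illustrative start and end equity shares and ages rather than any provider's or fund's actual formula:

```python
def glide_path_equity_share(age, retirement_age=65, start=0.90, end=0.30):
    """Illustrative glide path: the equity share declines linearly from
    `start` at age 25 to `end` at the assumed retirement age."""
    if age <= 25:
        return start
    if age >= retirement_age:
        return end
    progress = (age - 25) / (retirement_age - 25)
    return start - progress * (start - end)

# The allocation becomes steadily more conservative with age:
for age in (30, 45, 60):
    share = glide_path_equity_share(age)
    print(f"age {age}: {share:.1%} equity / {1 - share:.1%} fixed income")
```

A target date fund applies one such schedule to everyone in the fund; a managed account may start from a similar schedule and then depart from it based on participant-specific information.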
Plan fiduciaries who offer managed account services only to participants who affirmatively elect to use the service (i.e., on an opt-in basis), rather than by default, are not required to comply with the QDIA regulation, although such fiduciaries still are subject to the general fiduciary obligations under ERISA with respect to the selection and monitoring of a managed account service for their plan. Plan sponsors, including those who offer managed account services in their 401(k) plans, are required to issue a variety of informational disclosures and notices to plan participants and beneficiaries at enrollment, on a quarterly and annual basis, and when certain triggering events occur. These disclosures—often referred to as participant-level disclosures—when made in accordance with regulatory requirements, help ensure that plan participants have access to the information they need to make informed decisions about their retirement investments. In addition, when a plan sponsor chooses to default participants into managed accounts as a QDIA, the sponsor must inform participants of this decision annually through a number of specific disclosures, based on the plan’s design. The QDIA disclosures, when made in accordance with regulatory requirements, provide relief from certain fiduciary responsibilities for sponsors of 401(k) plans. Service providers that provide managed account services to a plan may be required to provide certain disclosures about the compensation they will receive to plan sponsors offering a managed account service under different DOL disclosure requirements. These disclosures—often referred to as service provider disclosures—are intended to provide information sufficient for sponsors to make informed decisions when selecting and monitoring service providers for their plans. 
DOL’s final rule on these disclosures requires service providers to furnish sponsors with information to help them assess the reasonableness of total compensation paid to providers, to identify potential conflicts of interest, and to satisfy other reporting and disclosure requirements under Title I of ERISA, including the regulation governing sponsor’s disclosure to participants. Managed account provider roles may differ from those of other plan service providers. As shown in figure 4, when a plan sponsor decides to offer participants a managed account service, other entities may contribute to its implementation and operation. Some record keepers and intermediary service providers refer to themselves as “managed account providers” because they make this service available to participants, but they do not ultimately decide how to invest participant contributions. Similarly, even though target date fund managers or collective investment trust managers may select an overall asset allocation strategy and investments to fit that strategy for the funds they offer to 401(k) plan participants, these managers also do not ultimately decide how to invest participant accounts. Plan sponsors are typically the named fiduciaries of the plan. Managed account providers and record keepers may also be fiduciaries, depending on their roles and the services they provide. Fiduciaries are required to carry out their responsibilities prudently and solely in the interest of the plan’s participants and beneficiaries. Plan service providers that have investment discretion or provide investment advice about how to invest participant accounts generally may be “3(38) Investment Manager” fiduciaries or “3(21) Investment Adviser” fiduciaries. A 3(38) Investment Manager fiduciary can only be a bank, an insurance company, or a Registered Investment Adviser (RIA). 
Under ERISA, 3(38) Investment Manager fiduciaries have the power to manage, acquire, or dispose of plan assets, and they acknowledge, in writing, that they are a fiduciary with respect to the plan. In contrast, a 3(21) Investment Adviser fiduciary usually does not have authority to manage, acquire, or dispose of plan assets, but is still a fiduciary because its investment recommendations may exercise some level of influence and control over the investments made by the plan. When managed account services are offered as QDIAs, the managed account provider is generally required to be a 3(38) Investment Manager fiduciary. There is no similar explicit requirement for managed account providers whose services are offered within a plan on an opt-in basis. Managed account providers vary in how they provide services, even though they generally offer the same basic service—initial and ongoing investment management of a 401(k) plan participant’s account based on generally accepted industry methods. The eight providers in our case studies use different investment options, employ varying strategies to develop and adjust asset allocations for participants, incorporate varying types and amounts of participant information, and rebalance participant accounts at different intervals. As a result, participants with similar characteristics in different plans may have differing experiences. To develop participant asset allocations, most of the eight providers in our case studies use the investment options chosen by the plan sponsor. By contrast, other providers require plan sponsors that want to offer their managed account to accept a preselected list of investment options from which the provider will determine participant asset allocations, including exchange traded funds or asset classes not typically found in 401(k) plan lineups, such as commodities. Because they are atypical investment options, participants who do not sign up for managed accounts may not be able to access them. 
Compared to typical 401(k) plan investment options, these atypical investment options may provide broader exposure to certain markets and opportunities to diversify participant retirement assets. The eight managed account providers in our case studies generally reported making asset allocation decisions based on modern portfolio theory, which sets a goal of taking enough risk so that participants’ 401(k) account balances may earn large enough returns over time to meet their retirement savings goals, but not so much that their balances could earn lower or even negative returns. Managed account providers generally help participants by constructing portfolios that attempt to provide maximum expected returns with a given level of risk, but their strategies can range from formal to informal. The formal way of determining this type of portfolio is called “mean-variance optimization” (MVO), under which providers plot risk and return characteristics of all combinations of investment options in the plan and choose the portfolio that maximizes expected return for a given level of risk. There are a number of specific techniques that managed account providers can apply to improve the quality and sophistication of asset allocations, including Monte Carlo simulation. However, some providers incorporated less formal ways of achieving a diversified portfolio, such as active management and experience-based methods. The eight providers in our case studies use varying strategies and participant goals to develop and adjust asset allocations for participants, as shown in table 1. As a result, participants with similar characteristics may end up with different asset allocations. Providers’ use of different asset allocation strategies leads to variation in the asset allocations participants actually experience. As shown in figure 5, four of the eight providers in our case studies vary in their recommendations of specific investment options for a 30-year-old participant. 
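The mean-variance idea can be shown with a deliberately tiny sketch: a grid search over two-asset mixes that keeps the highest expected return achievable under a volatility cap. Real providers optimize over full plan lineups; the two assets and the return, risk, and correlation figures below are made-up assumptions for illustration:

```python
import math

def best_portfolio(mu_a, mu_b, sig_a, sig_b, corr, max_vol, step=0.01):
    """Grid-search two-asset weights; return the (weight in A, expected
    return, volatility) with the highest expected return whose
    volatility stays at or below `max_vol`."""
    best = None
    w = 0.0
    while w <= 1.0 + 1e-9:
        ret = w * mu_a + (1 - w) * mu_b
        var = ((w * sig_a) ** 2 + ((1 - w) * sig_b) ** 2
               + 2 * w * (1 - w) * sig_a * sig_b * corr)
        vol = math.sqrt(var)
        if vol <= max_vol and (best is None or ret > best[1]):
            best = (w, ret, vol)
        w += step
    return best

# Hypothetical "equity" (A) vs. "bond" (B) assumptions, 10% volatility cap:
w, ret, vol = best_portfolio(mu_a=0.07, mu_b=0.03, sig_a=0.18, sig_b=0.05,
                             corr=0.2, max_vol=0.10)
print(f"{w:.0%} equity -> expected return {ret:.2%} at {vol:.2%} volatility")
```

Sweeping `max_vol` across a range of risk levels traces out the efficient frontier from which a provider, formally or informally, picks the point suited to a given participant.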
The type and amount of information providers use can also affect the way participant account balances are allocated. For example, two of the eight providers in our case studies only offer a customized service—allocating a participant’s account based solely on age or other factors that can be easily obtained from the plan’s record keeper, such as gender, income, current account balance, and current savings rate. The other six providers also offer a personalized service that takes into account additional personal information to inform asset allocations, such as risk tolerance or spousal assets. Providers that offer a personalized service reported that personalization could lead to better asset allocation for participants, but they also reported that generally fewer than one-third, and sometimes fewer than 15 percent, of participants furnish this personalized information. As a result, some industry representatives felt that participants may not be getting the full value of the service for which they are paying. For example, participants who are defaulted into managed accounts that offer a highly personalized service run the risk of paying for services they are not using if they are disengaged from their retirement investments. As shown in table 2, we found that among five of the seven providers that furnished asset allocations for our hypothetical scenarios, there was little relationship between the level of personalization and the fee they charged to participants for the managed account service. Some managed account providers’ services may become more beneficial as participants age or as their situations become more complex because personalization seeks to create a tailored asset allocation for each participant. Such an individualized approach could even mean that older participants who are close to retirement and very young participants just starting their careers could be placed in equally risky allocations based on their personalized circumstances. 
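The effect of optional personal information on an otherwise age-driven allocation, including the possibility that a risk-seeking older participant and a risk-averse younger one land in similarly risky mixes, can be sketched as follows. The base formula, the caps, and the risk-tolerance tilt are all hypothetical values chosen only for illustration:

```python
def equity_share(age, risk_tolerance=None):
    """Hypothetical personalized allocation: an age-based default,
    tilted up or down if the participant supplies a risk tolerance
    (1 = very conservative .. 5 = very aggressive). Participants who
    supply nothing simply keep the default."""
    base = max(0.30, min(0.90, 1.10 - 0.012 * age))
    if risk_tolerance is None:
        return base
    tilt = (risk_tolerance - 3) * 0.10  # up to +/- 20 points of equity
    return max(0.10, min(0.95, base + tilt))
```

Under these made-up parameters, a very aggressive 60-year-old lands at 58 percent equity and a very conservative 30-year-old at 54 percent, nearly the same risk level, while a 30-year-old who supplies no extra information keeps the 74 percent default.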
However, industry representatives told us that participants who never supply additional, personalized information to managed account providers may be allocated similarly over time to those participants in target date funds. Providers differ in their approaches and time frames for rebalancing participant managed accounts—adjusting participant accounts to reflect any changes to their asset allocation strategies based on changing market conditions and participant information. Seven of the eight providers in our case studies use a “glide path” approach to systematically reduce participant risk over time, but one does not set predetermined glide paths for participants. Similarly, two of the eight providers in our case studies rebalance participant accounts annually, while the other providers generally review and rebalance participant accounts at least quarterly. Despite these differences in approaches and time frames, our analysis of provider hypothetical asset allocations indicated that providers generally allocated less to equity assets and more to fixed income or cash-like assets for the older hypothetical participants than for the younger hypothetical participant. Some managed account providers in our case studies offer their services under “direct” arrangements in which the plan sponsor directly contracts with a provider to offer these services, as shown in figure 6. According to the providers we spoke with, managed account providers in this type of arrangement are generally fiduciaries, but record keepers may not be fiduciaries with respect to the managed account service, as their role consists primarily of providing information to the managed account provider and implementing asset allocation changes to participant accounts. By contrast, some managed account providers use “subadvised” arrangements to offer their services. 
According to the providers we spoke to, in these arrangements, the plan sponsor does not directly contract with the managed account provider, and the plan’s record keeper, or an affiliate, may take on some fiduciary responsibility with respect to the managed account, as shown in figure 7. The record keeper may fulfill some of the responsibilities the managed account provider would have in a direct arrangement. These responsibilities may include providing periodic rebalancing based on the provider’s strategy, marketing managed account services, or offering other ongoing support for participants. All of the eight managed account providers in our case studies told us that they take on some level of fiduciary responsibility—regardless of whether their services are offered as QDIAs or on an opt-in basis—so they each offer some protections to sponsors and participants in managed accounts. Seven of the providers in our case studies told us that they willingly accept 3(38) Investment Manager fiduciary status for discretionary management over participant accounts, but one of the eight providers in our case studies noted that it never accepts 3(38) Investment Manager fiduciary status because it only has discretion over participants’ accounts once a year. This provider told us that it is only a 3(21) Investment Adviser fiduciary even though its managed account service is similar to that of the other providers in our case studies. Under ERISA, 3(21) Investment Adviser fiduciaries usually do not have authority over plan assets, but they may influence the operation of the plan by providing advice to sponsors and participants for a fee. As such, they are generally liable for the consequences when their advice is imprudent or disloyal. In contrast, a 3(38) Investment Manager fiduciary has authority to manage plan assets at their discretion and with prudent judgment, and is also liable for the consequences of imprudent or disloyal decisions. 
Because 3(38) Investment Manager fiduciaries have explicit discretionary authority and must have the qualifications of a bank, insurance company, or RIA, sponsors who use 3(38) Investment Manager fiduciaries may receive a broader level of liability protection from those providers as opposed to providers who offer managed account services as 3(21) Investment Adviser fiduciaries. In addition, when a 3(38) Investment Manager fiduciary is used, participants may have a broader level of assurance that they are receiving services from a qualified manager in light of the requirements related to qualifications of such fiduciaries. As noted previously, when managed account services are offered as QDIAs, DOL requires the managed account provider to generally be a 3(38) Investment Manager fiduciary, but DOL has no similar explicit requirement for managed account providers whose services are offered on an opt-in basis. Absent explicit requirements or additional guidance from DOL, some managed account providers may choose to structure the services they provide to limit their fiduciary liability, which could ultimately provide less liability protection for sponsors for the consequences of provider investment management choices. Given the current lack of direction or guidance about appropriate fiduciary roles for providers that offer managed accounts on an opt-in basis, sponsors may not be aware of this potential concern. Industry representatives we spoke with expressed concern about managed account providers who do not accept full responsibility with respect to managed account services by acknowledging their role as a 3(38) Investment Manager fiduciary. 
Other representatives also noted that it was important for sponsors to understand providers’ fiduciary responsibilities given the important differences between 3(21) Investment Adviser and 3(38) Investment Manager fiduciaries with respect to the nature of liability protection they may provide for sponsors and the services they may provide for both sponsors and participants. Managed account providers may offer potentially valuable additional services to participants in or near retirement regarding how to spend down their accumulated retirement savings, but these services could lead to potential conflicts of interest. Most of the providers in our case studies allow participants to continue receiving account management services when they retire as long as they leave all or a portion of their retirement savings in the 401(k) plan. Some of those providers also provide potentially useful additional services to participants in or near retirement and do not typically charge additional fees for doing so. These services may include helping participants review the tax consequences of withdrawals from their 401(k) account and advising them about when and how to claim Social Security retirement benefits. However, these providers may have a financial disincentive to recommend an out-of-plan option, such as an annuity or rollover to other plans or IRAs, because it is advantageous for them to have participants’ continued enrollment in their managed account service offered through a 401(k) plan. Providers have developed ways to mitigate some of this potential conflict of interest by, for example, offering advice on alternate sources of income in retirement such as TIPS. Regardless, representatives from a participant advocacy group noted that managed account providers should have little involvement in a participant’s decision about whether to stay in the managed account. 
Although DOL is responsible under ERISA for protecting plan participants, it has not specifically addressed whether conflicts of interest may exist when managed accounts offer additional services to participants in or near retirement. As a result, participants may be inclined to stay in the managed account because of the additional services providers offer them. Additionally, the convenience of these services could discourage managed account participants from fully considering other options, which can ultimately put them at risk of making suboptimal spend-down decisions. Some managed account providers and plan sponsors have said that increased diversification of retirement portfolios is the main advantage of the managed account service for 401(k) plan participants. Increased diversification for participants enrolled in a managed account can result in better risk management and increased retirement income compared to those who self-direct their 401(k) investments. For example, one provider's study of managed account performance found that the portfolios of all managed account participants appeared to be appropriately allocated, but that 43 percent of those who self-directed their 401(k) investments had equity allocations that appeared inappropriate for their age, and nearly half of these participants' portfolios were improperly diversified. The advantages of a diversified portfolio include reducing a participant's risk of loss, reducing volatility within the participant's account, and generating long-term positive retirement outcomes. Another reported advantage of managed accounts is that they help to moderate volatility in 401(k) account performance, compared to the accounts of those who self-direct their 401(k) investments.
For example, in two recent reports on managed account performance, one record keeper concluded that the expanded use of professionally managed allocations, including managed accounts, is contributing to a reduction in extreme risk and return outcomes for participants, and is also gradually mitigating concerns about the quality of portfolio decision-making within defined contribution plans. Managed account providers in our eight case studies also claim that the increased personalization and more frequent rebalancing of managed accounts create an appropriately diversified portfolio that better meets a participant’s retirement goals than target date funds or balanced funds. According to these providers, periodic rebalancing combats participant inertia, one of the main problems with a self-directed 401(k) account, and the failure to update investment strategies when financial circumstances change over time. Several managed account providers told us that another advantage of managed accounts is the tendency for participants to save more for retirement compared to those who are not enrolled in the service. For example, in a study of managed accounts, a provider reported that participants in plans for which this provider offers the service contributed $2,070 more on average in 2012 than participants who self-directed investments in their 401(k) accounts (1.9 percent of salary more in contributions on average than participants who self-direct 401(k) investments). This provider noted that managed account participants are better at taking advantage of their plan’s matching contribution than participants who self-direct their 401(k) investments. For example, they found that 69 percent of managed account participants contributed at least to the level of the maximum employer matching contribution, while only 62 percent of participants who self-directed investments contributed to this level. 
This provider said that communication with managed account participants can lead to increased savings when participants are encouraged to raise their savings rate by at least 2 percentage points and to save at least to the point where they receive the full employer match, if such a match exists. Another service provider told us that it offers an online calculator that managed account participants can use to understand their retirement readiness. The provider also said that participants who use the calculator can see how increased savings can lead to improved retirement outcomes and will often increase their savings rate into their managed account. Retirement readiness statements received by participants who are enrolled in a managed account are another reported advantage of the service. Participants generally receive retirement readiness statements that can help them assess whether they are on track to reach their retirement goals, and the statements generally contain information about their retirement investments, savings rate, asset allocations, and projected retirement income. These statements help participants understand the likelihood of reaching their retirement goals given their current investment strategy and whether they should consider increasing their savings rates or changing risk tolerances for their investments. In some cases, these statements may provide participants with their first look in one document at the overall progress they are making toward their retirement goals. As shown in table 3, our review of three providers' statements shows that they use different metrics on participant readiness statements to evaluate participants' retirement prospects. For example, each statement provided participants with information on their retirement goals and risk tolerance, and a projection of their future retirement income to demonstrate the value of the service.
Similar advantages, however, can be achieved through other retirement investment vehicles outside of a managed account and without paying the additional managed account fee. For example, in one recent study, a record keeper that offers managed accounts through its platform showed that there are other ways to diversify using professionally managed allocations, such as target date funds, which can be less costly. Although managed account providers may encourage participants to save more and review their progress towards achieving a secure retirement, participants still have to pay attention to these features of the managed account for it to provide value. Even if 401(k) plan participants are not in managed accounts, we found that in some instances they can still receive advice and education from a provider in the form of retirement readiness statements. The additional fee a participant generally pays for a managed account was the primary disadvantage mentioned by many industry representatives, plan sponsors, and participant advocates. Because of these additional fees, 401(k) plan participants who do not receive higher investment returns from the managed account services risk losing money over time. Some managed account providers and record keepers have reported that managed account participants earn higher returns than participants who self-direct their 401(k) plan investments, which may help participants offset the additional fee charged. For example, one provider told us that participants enrolled in managed accounts saw about 1.82 percentage points better performance per year, net of fees, compared to participants without managed accounts. Given these higher returns, this provider projects that a 25-year-old enrolled in its managed account could see up to 35 percent more income at retirement than a participant not enrolled in the service.
Another provider reported that the portfolios of participants who were defaulted into managed accounts were projected to receive returns of nearly 1 percentage point more annually, net of fees, after the provider made allocation changes to the participants' portfolios. However, the higher rates of return projected by managed account providers may not always be achievable. For instance, we found limited data from one record keeper that published returns for managed account participants that were generally less than or equal to the returns of other professionally managed allocations (a single target date fund or balanced fund) as shown in figure 8. We used these and other returns data published by this record keeper to illustrate the potential effect over 20 years of different rates of return on participant account balances. On the lower end, this record keeper reported that, over a recent 5-year period, 25 percent of its participants earned annualized returns of -0.1 percent or less, not even making up the cost of the additional fee for the service. On the higher end, the record keeper reported that, over a slightly different 5-year period, 25 percent of its participants earned annualized returns of 2.4 percent or higher for the service. These actual returns illustrate the substantial degree to which returns can vary. If such a 2.5 percentage point difference (between these higher and lower reported managed account rates of return of 2.4 percent and -0.1 percent, respectively) were to persist over 20 years, a participant earning the higher managed account rate of return could have nearly 26 percent more in their account balance at the end of 20 years than a participant earning the lower rate of return in their managed account. As shown in figure 9, using these actual rates of return experienced by participants in managed accounts, such a variation in rates of return can substantially affect participant account balances over 20 years.
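The compounding arithmetic behind this 20-year comparison can be sketched in a few lines of Python. The assumptions below (a starting balance of zero and level end-of-year contributions) are ours for illustration, not the record keeper's modeling inputs, so the resulting gap only lands in the neighborhood of the roughly 26 percent figure described above.

```python
def ending_balance(annual_return, years=20, start=0.0, contribution=5_000.0):
    """Grow a balance at a fixed annual return with end-of-year contributions."""
    balance = start
    for _ in range(years):
        balance = balance * (1 + annual_return) + contribution
    return balance

high = ending_balance(0.024)    # higher reported managed account return
low = ending_balance(-0.001)    # lower reported managed account return
print(f"higher-return balance is {high / low - 1:.0%} larger after 20 years")
```

With a lump-sum starting balance and no further contributions, the same 2.5 percentage point spread compounds to a much larger gap, which is why the assumed contribution pattern matters to any 20-year comparison of this kind.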
Further, this record keeper's published data on managed account rates of return were net of fees—rates of return would be higher if participants did not pay the additional fee for the service. For example, using this record keeper's average fee rate in our analysis, we estimate that a hypothetical managed account participant who earned a higher rate of return of 2.4 percent would pay $8,400 more in additional fees over 20 years than a participant who self-directs investments in their 401(k) account and does not pay the additional fee. To illustrate the potential effect that fees could have on a hypothetical participant's account balance over 20 years, we used a higher fee of 1 percent reported to us by one provider to estimate that a participant would pay $14,000 in additional fees compared to a participant who self-directs investments in their 401(k) account over the same period. However, based on the reported performance data we found, there is no guarantee that participants will earn a higher rate of return with a managed account compared to the returns for other professionally managed allocations or self-directed 401(k) accounts. The limited performance data we reviewed show that in most cases, managed accounts underperformed these other professionally managed allocations and self-directed 401(k) accounts over a 5-year period. However, managed account participants with lower rates of return still pay substantial additional fees for the service. To further illustrate the effect of fees on account balances, a hypothetical participant who earns a lower managed account rate of return of -0.1 percent would pay $6,900 in additional fees using this record keeper's average fee over 20 years compared to a participant who self-directed investments in their 401(k) account, and the additional fees would increase to $11,500 at the 1 percent fee level using the lower rate of return.
The additional managed account fees, which are charged to participants over and above investment management and administrative fees, can vary substantially, and as a result, some participants pay no fees, others pay a flat fee each year, and still others pay a comparatively large percentage of their account balance for generally similar services from managed account providers. In our case studies, we reviewed the additional fees charged to participants for the service. One managed account provider charges a flat rate, and fees for the other seven providers ranged from 0.08 to 1 percent of the participant's account balance annually, or $8 to $100 on every $10,000 in a participant's account. Therefore, participants with similar balances but different providers can pay different fees. As shown in table 4, participants with an account balance of $10,000 whose provider charges the highest fee may pay 12.5 times as much as participants whose provider charges the lowest fee ($100 and $8, respectively). However, a participant with an account balance of $500,000 whose provider charges the highest fee may pay up to 250 times as much as a similar participant whose provider charges the lowest fee ($5,000 and $20, respectively). Participants with large account balances whose managed account provider caps fees at a certain level benefit more than similar participants whose fees are not capped. Of the providers we reviewed who charge variable fees, one provider caps the fee at a certain amount per year. For example, this provider charges 0.25 percent or $25 for every $10,000 in a participant's account, with a maximum of $250 per year, so participants who use this provider only pay fees on the first $100,000 in their accounts. As a result, the difference between the fees paid by participants using this provider (or providers who charge flat rates) and the fees paid by participants of providers without caps widens as participant account balances increase.
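The capped fee structure described above is straightforward to express in code. The following is a minimal sketch of the 0.25 percent fee with a $250 annual maximum; the function name and parameter names are ours, not the provider's.

```python
def capped_annual_fee(balance, rate=0.0025, cap=250.0):
    """Annual managed account fee: a percentage of the balance, up to a fixed cap."""
    return min(balance * rate, cap)

print(capped_annual_fee(50_000))    # below the cap threshold: $125.00
print(capped_annual_fee(500_000))   # cap binds: $250.00, same as a $100,000 balance
```

Because the cap binds at $100,000, every dollar above that level is effectively fee-free with this provider, which is why the gap relative to uncapped providers grows with the account balance.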
Plan characteristics can affect fees participants pay to managed account providers. For example, at one managed account provider included in our review, a participant in a small plan may pay more for a managed account than a similar participant in a large plan. Similarly, a participant in a plan with high enrollment or that uses managed accounts as the default may pay less for a managed account than a participant with the same balance in a plan with low enrollment or that offers managed accounts as an opt-in service. We also found through our case studies that fees can vary based on factors beyond the plan's characteristics, such as the types of providers involved in offering the managed account, the size of participant account balances, and the amount of revenue sharing received by the managed account provider. Fees calculated through revenue sharing can vary in accordance with the investment options the plan sponsor chooses to include in the plan and the amount of revenue the provider actually receives from these options. In these cases, initial fee estimates for the managed account may differ from the actual fees participants pay. In addition, some plan sponsors also pay fees to offer managed account services, but since these fees may be paid out of plan assets, participants in these plans may effectively pay more than participants in plans whose sponsors do not pay such fees. As shown above, paying higher additional fees to a provider for a managed account service offers no guarantee of higher rates of return compared to other providers or compared to the reported rates of return earned by participants who invest in other professionally managed allocations or who self-direct investments in their 401(k) accounts. Because the additional fee is charged to participants on a recurring basis, such as every quarter or year, the costs incurred over time by participants who use managed accounts can accumulate.
We used fee data reported by managed account providers to illustrate the effect that different fees could have on a participant's managed account balance over time. As shown in figure 10, a hypothetical participant in our illustration who is charged an additional annual fee of 1 percent of their account balance for their managed account may pay nearly $13,000 more in fees over 20 years than they would have paid investing the same amounts without the managed account fee. This compares to about $1,100 in additional fees paid over 20 years by a participant who is charged an annual fee of 0.08 percent for a managed account, the lowest variable non-capped fee we found. The limited availability of returns-based performance data and lack of standard metrics can also undercut the reported advantages of managed accounts. In its final rule on participant-level disclosures, DOL requires that sponsors disclose performance data to help participants make informed decisions about the management of their individual accounts and the investment of their retirement savings, and that sponsors provide appropriate benchmarks to help participants assess the various investment options available under their plan. By requiring sponsors to provide participants with performance data and benchmarking information for 401(k) investments, DOL intends to reduce the time required for participants to collect and organize fee and performance information and increase participants' efficiency in choosing investment options that will provide the highest value. Since the applicability date of the participant-level disclosure regulation, for most plans in 2012, DOL has required plan sponsors to provide participants who invest in a "designated investment alternative" in their 401(k) account with an annual disclosure describing the fees, expenses, and performance of each of the investment funds available to them in the plan.
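A short simulation illustrates how a recurring fee accumulates over 20 years. The starting balance, contribution level, and gross return below are illustrative assumptions (the exact inputs behind figure 10 are not stated here), so the dollar totals will not match the $13,000 and $1,100 figures exactly; the point is that a 1 percent fee costs roughly twelve times as much as a 0.08 percent fee over 20 years, slightly less than the 12.5-to-1 ratio of the fee rates themselves because the higher fee also slows balance growth.

```python
def cumulative_fees(fee_rate, years=20, start=10_000.0,
                    contribution=5_000.0, gross_return=0.05):
    """Total fees paid when an annual fee is deducted from a growing balance."""
    balance, fees_paid = start, 0.0
    for _ in range(years):
        balance *= 1 + gross_return          # investment growth for the year
        fee = balance * fee_rate             # recurring managed account fee
        fees_paid += fee
        balance += contribution - fee        # contribute, net of the fee
    return fees_paid

high = cumulative_fees(0.01)     # 1 percent annual fee
low = cumulative_fees(0.0008)    # 0.08 percent annual fee
print(f"fee ratio over 20 years: {high / low:.1f}x")
```

Because the fee base (the balance) shrinks slightly each year under the higher fee, cumulative fees do not scale in exact proportion to the fee rate.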
DOL defines a designated investment alternative as “any investment alternative designated by the plan into which participants and beneficiaries may direct the investment of assets held in, or contributed to, their individual accounts.” For designated investment alternatives, plan sponsors are required to disclose to participants specific information identifying the funds available to them in the plan, returns-based performance information over varying time periods, and performance benchmarks in a way that invites comparison with established benchmarks and market indexes, as shown in table 5. Despite DOL’s requirements for designated investment alternatives, with respect to managed accounts offered either as an opt-in or default service, plan sponsors are generally only required to disclose to 401(k) participants the identity of the managed account provider or investment manager and any fees and expenses associated with its management. Neither plan sponsors nor managed account providers are required to isolate, within the participant-level disclosure, investment-related information on the individual funds that comprise the participant’s managed account, or to present the aggregate performance of the account for a given period. DOL generally does not consider most managed accounts to be “designated investment alternatives.” Instead, according to DOL, managed account providers are generally considered to be “designated investment managers” as they provide a service to participants rather than an investment option, such as a mutual fund. As a result, the investment-related information required in DOL’s participant-level disclosure regulation does not apply to investment services, such as many managed accounts.
Because DOL does not require plan sponsors to provide participants with information on the performance of their managed accounts or to compare performance against a set of standard benchmarks, it is potentially difficult for participants to evaluate whether the additional fees for managed accounts are worth paying, considering the effect of fees on returns and retirement account balances. As a result, participants may be unable to effectively assess the overall value of the service. Not all of the retirement readiness statements we reviewed included returns-based performance data or information on the amount of additional fees the participant had paid for the service. Some managed account providers did include projections of a participant's future retirement income on these statements. Even though the projections may be based on sound methodologies, if standard returns-based performance data are absent from these statements, participants will have to rely primarily on these projections to gauge the overall value of the service. Without performance and benchmarking information presented in a format designed to help participants compare and evaluate their managed account, participants may be unable to make informed decisions about the managed account service. Likewise, with respect to QDIAs, DOL only requires plan sponsors to disclose to participants a description of each investment's objectives, risk and return characteristics (if applicable), fees and expenses paid to providers, and the right of the participant to elect not to have such contributions made on their behalf, among other things. In 2010, DOL proposed amendments to its QDIA disclosure requirements that would, with respect to target date funds or similar investments, require sponsors to provide participants historical returns-based performance data (e.g., 1-, 5-, and 10-year returns).
According to DOL officials, the proposed QDIA rule change may apply to managed accounts offered as a QDIA to participants. However, the proposed requirements as written may be difficult for plan sponsors to implement because they are not tailored specifically for managed accounts. One participant advocacy group noted that, without similar information, participants may not be able to effectively assess managed account performance over time and compare that performance to other professionally managed investment options available under the plan or across different managed account providers. As mentioned above, DOL affirms in the participant-level disclosure regulation that performance data are required to help participants in 401(k) plans to make informed decisions about managing investments in their retirement accounts, and that appropriate benchmarks are helpful tools participants can use to assess the various investment options available under their plan. The benefits outlined in the participant-level disclosure regulation would also apply to the proposed changes to the QDIA regulation. Specifically, DOL expects that the enhanced disclosures required by the proposed regulation would benefit participants by providing them with critical information they need to evaluate the quality of investments offered as QDIAs, leading to improved investment results and retirement planning decisions by participants. DOL believes that the disclosures under the proposed regulation, combined with performance reporting requirement in the participant-level disclosure regulation, would allow participants to determine whether efficiencies gained through these investments are worth the price differential participants generally would pay for such funds. 
However, DOL does not require plan sponsors to use standard metrics to report on the performance of managed accounts for participants who are defaulted into the service as a QDIA. Absent such requirements, it would be potentially difficult for these participants to evaluate the effect that additional fees could have on the performance of their managed accounts, including how the fees could affect returns and retirement account balances, possibly eroding the value of the service over time. Improved performance reporting could help participants understand the risks associated with the additional fees and possible effects on their retirement account balances if the managed accounts underperform, which is critical information that participants could use to take action to mitigate those risks. Discussions with managed account providers suggest that returns-based performance reports and custom benchmarking can be provided to managed account participants. For example, as shown in figure 11, one managed account provider we spoke to already furnishes participants with access to online reports that include returns-based performance data and custom benchmarks, which can allow them to compare performance for a given period with an established equity index and bond index. Some providers told us that it would be difficult to provide participants in managed accounts with performance information and benchmarks because their retirement portfolios contain highly personalized asset allocations. While it may be more challenging for providers to furnish performance information on personalized managed accounts compared to model portfolios, we identified one participant statement that included performance information from a provider who personalizes asset allocations for their participants' retirement portfolios.
The provider told us that the blended custom benchmark described in figure 11 allows participants to more accurately evaluate and compare the aggregate performance of the different individual funds held in their managed account because the benchmark is linked to the participant’s risk tolerance. The online report also describes any positive or negative excess returns for the portfolio relative to the return of the custom benchmark, net of fees. The provider said that the excess return statistic is representative of the value that the provider or portfolio manager has added or subtracted from the participant’s portfolio return for a given period. Another managed account provider furnishes retirement readiness statements that include returns-based information for each of the funds in participants’ accounts. However, the statement did not include standard or custom benchmarks that would allow participants to compare the performance of their managed account with other market indexes. Some sponsors report that their choice of a managed account provider may be limited to those options—sometimes only one—offered by the plan’s record keeper. Although DOL’s general guidance on fiduciary responsibilities encourages sponsors to consider several potential providers before hiring one, six of the 10 sponsors we interviewed said that they selected a managed account provider offered by their record keeper without considering other options and two other sponsors said that their record keeper’s capabilities partially restricted their choice of a provider. Some record keepers voluntarily offered sponsors more managed account provider options when sponsors asked for them. In the absence of DOL requiring sponsors to request multiple provider options, sponsors said they were reluctant to pursue options not offered by their record keeper for a variety of reasons. 
These reasons included: (1) concern that their record keeper's systems might be unable to support additional options; (2) familiarity with the current provider offered by their record keeper; and (3) belief that there was no need to consider other options—one sponsor said that its record keeper has consistently provided excellent service and support for a reasonable fee and, as a result, the sponsor felt comfortable accepting the record keeper's recommendation of the provider offered on its recordkeeping system. Without the ability to choose among multiple providers, sponsors may end up selecting a provider who charges participants higher additional fees, which are ultimately deducted from participant account balances, than other providers who use comparable strategies to manage participant investments. In addition, limited choices can result in sponsors selecting a provider whose strategy does not align with their preferred approach for investing participant contributions. For example, a sponsor who endorses a conservative investment philosophy for their plan could select a provider who uses a more aggressive method for managing participant investments. Several managed account providers and record keepers said that a limited number of providers are offered because, among other things, it is costly to integrate 401(k) recordkeeping systems with managed account provider systems. In addition, record keepers may offer a limited number of providers to avoid losing revenue and because they evaluate each provider before deciding to offer its managed account service, with steps that include reviewing the provider's investment strategy and assessing how the provider interacts with participants. One managed account provider estimated that sponsors might have to spend $400,000 and wait more than a year before offering the provider's managed account to plan participants if it is not already available on their record keeper's system.
Additionally, record keepers may lose target date fund revenue or forgo higher revenue opportunities by offering certain managed account providers and may believe that offering multiple options is unnecessary once they have identified a provider that is effective. Although sponsors may have access to a limited number of managed account providers on their record keepers' systems, some providers have developed approaches that make it easier for record keepers to offer more than one managed account option to sponsors. For instance, one provider we interviewed, which acts as an intermediary and fiduciary, contracts with several other providers and makes all of these providers available to its record keepers, thus allowing the record keepers' sponsors to choose among several managed account providers without incurring additional costs to integrate the record keeper with any of the providers. Another managed account provider has developed a process to transfer information to record keepers that does not require integration with the recordkeeping system, thus making it less difficult for any record keeper to work with it. Available evidence we reviewed suggests that sponsors lack sufficient guidance on how to select and oversee managed account providers. Several of the sponsors we interviewed said that they were unaware of any set list of standards for overseeing managed accounts, so they do not follow any standards, and even managed account providers felt that sponsors have insufficient knowledge and information to effectively select a provider. Because sponsors may not have sufficient knowledge and information, record keepers could play a larger role in the selection process. In addition, providers indicated that it is difficult for sponsors to compare providers and attributed this difficulty to the absence of any widely accepted benchmarks or other comparison tools for sponsors.
Some industry representatives indicated that additional guidance could help sponsors better select and oversee managed account providers and highlighted specific areas in which guidance would be beneficial, such as: determining whether a managed account fee is reasonable; understanding managed accounts and how they function; and clarifying factors sponsors should consider when selecting a managed account provider. Although DOL is responsible for assisting and educating sponsors by providing them with guidance, it has not issued guidance specific to managed accounts, as it has done for target date funds. DOL has issued general guidance on fiduciary responsibilities, including regulations under ERISA 404(a) and 404(c), which explicitly state DOL’s long-standing position that nothing in either regulation relieves a fiduciary from its duty to prudently select and monitor any service provider to the plan. DOL guidance on target date funds outlines the factors sponsors should consider when selecting and monitoring target date funds, such as performance and fees, among other things. The absence of similar guidance specific to managed accounts has led to inconsistency in sponsors’ procedures for selecting and overseeing providers and may inhibit their ability to select a provider who offers an effective service for a reasonable fee. Specifically, without assistance regarding what they should focus on, sponsors may not be considering factors that DOL considers relevant for making fiduciary decisions, such as performance information. For example, sponsors considered a range of factors when selecting a managed account provider, including record keeper views on the quality of the provider, the provider’s willingness to serve as a fiduciary, and the managed account provider’s investment strategy.
In addition, as shown in table 6, while nearly all of the sponsors said that they considered fees when selecting a managed account provider, only 1 of the 10 sponsors we interviewed said that they considered performance information when selecting a managed account provider. In addition, only half of the sponsors we interviewed reported that they take steps to formally benchmark fees by, for example, comparing their participants’ fees to the amount of fees that participants in similarly-sized organizations pay. The extent to which sponsors oversee managed account providers also varies. Nearly all of the 10 sponsors we interviewed said that they review reports from their managed account provider or record keeper as part of their oversight process, and the managed account providers we interviewed highlighted the role that these reports play in the oversight process. Several of these providers noted that the reports they provide help sponsors fulfill their fiduciary responsibility for oversight. Most sponsors said that they also take other steps to oversee managed account providers, such as regularly meeting with them. However, only one sponsor said that, as part of its oversight activities, it independently evaluates benchmarks, such as stock market performance indexes. In addition, even though participants generally pay an additional fee for managed account services, not all of the sponsors we interviewed said that they monitor fees. Some industry representatives indicated that consistent performance information could help sponsors more effectively compare prospective managed account providers and ultimately improve selection and oversight. 
Similar to the challenges participants face in evaluating managed accounts because of a lack of performance information, industry representatives said that sponsors need information as well, including: useful, comparative performance information and a standard set of metrics to select suitable providers; access to standard performance benchmarks to monitor them; and access to comparable managed account performance information to evaluate performance. Some providers highlighted challenges with providing performance information on managed accounts and, as a result, furnish sponsors with other types of information to demonstrate their value to participants. For example, providers may not furnish returns-based performance information to demonstrate how their offerings have affected participants because the personalized nature of managed accounts makes it difficult to measure performance. In lieu of providing returns-based performance information, providers furnish sponsors with changes in portfolio risk levels and diversification, changes in participant savings rates, and retirement readiness. One managed account provider said that it does not believe there is a way to measure the performance of managed accounts, noting that it develops 20 to 50 investment portfolios for any given plan based on the investment options available in the plan. Nonetheless, a few providers voluntarily furnish sponsors with returns-based performance information. One provider that used broad-based market indexes and customized benchmarks noted that it would be difficult for a sponsor to select a managed account provider without being able to judge how the provider has performed in the past. In addition, this provider, unlike some other providers, noted that the personalized nature of some managed accounts does not preclude managed account providers from being able to generate returns-based performance information.
For example, even though plans may differ, providers can collect information from record keepers for each of the plans that offer managed accounts and create aggregate returns data, which could then be disclosed to sponsors along with an explanation of how the data were generated. As shown in figure 12, the report that this provider distributes to sponsors contains an array of performance information for participant portfolios, including rates of return earned by the portfolios for multiple time periods and benchmarks. In addition, the report provides a description of the benchmarks—broad-based market indexes as well as customized benchmarks. DOL regulations require that service providers furnish sponsors with performance and benchmarking information for the investment options available in the plan. DOL maintains that sponsors need this information in order to make better decisions when selecting and monitoring providers for their plans. However, DOL regulations generally do not require managed account providers to furnish sponsors with performance and benchmarking information for managed accounts because, as previously noted, managed accounts are not considered to be designated investment alternatives. Without this information, sponsors cannot effectively compare different providers when making a selection or adequately determine whether their managed account offerings are having a positive effect on participant retirement savings, as they can currently determine with the designated investment alternatives available in the plan. Managed accounts can be useful services and may offer some advantages for 401(k) participants. They build diversified portfolios for participants, help them make investment decisions, select appropriate asset allocations, and estimate the amount they need to contribute to achieve a secure retirement. 
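The aggregate returns data described above could, for instance, be computed as an asset-weighted average of participant portfolio returns pooled across plans. The following sketch is purely hypothetical: the weighting convention, balances, and returns are illustrative assumptions, not any provider’s actual method.

```python
# Hypothetical illustration: pool participant portfolio returns from the
# plans that offer managed accounts and compute one aggregate return that
# a provider could disclose to sponsors. All figures are invented.

def aggregate_return(portfolios):
    """Asset-weighted average return over (beginning_balance, annual_return) pairs."""
    total_assets = sum(balance for balance, _ in portfolios)
    return sum(balance * ret for balance, ret in portfolios) / total_assets

# Participant portfolios pooled across several plans: (beginning balance, return).
pooled = [(50_000, 0.05), (120_000, 0.03), (30_000, 0.07)]
print(f"aggregate annual return: {aggregate_return(pooled):.2%}")
```

As the report notes, such aggregate figures would need to be disclosed alongside an explanation of how the data were generated, since the weighting choice materially affects the result.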
Given these potential advantages, it is no surprise that the number of managed account providers has grown and that plan sponsors, seeking to provide the best options for plan participants, have increasingly offered managed accounts. The extent to which managed accounts benefit participants may depend on the participant’s level of engagement and ability to increase their savings. Despite the potential advantages, better protections are needed to ensure that participants realize their retirement goals. These protections are especially important as additional fees for this service can slow or erode participants’ accumulated retirement savings over time. Helping plan sponsors understand and make appropriate decisions about managed accounts can better ensure that participants are able to reap the full advantages of managed accounts. Since plan sponsors select a managed account provider, participants who use these services are subject to that managed account provider’s structure and strategies for allocating participant assets, which can potentially affect participants’ ability to save for retirement, especially if they pay high fees. Some participants cannot be assured that they are receiving impartial managed account services or are able to rely on accountable investment professionals taking on appropriate fiduciary responsibilities. Clarifying fiduciary roles for providers who offer managed accounts to participants on an opt-in basis or for providers who offer additional services to participants in or near retirement could help ensure that sponsors have a clear understanding of provider responsibilities so they can offer the best services to their participants. DOL can also help sponsors gain clarity and confidence in selecting and monitoring managed account providers. This is particularly salient since managed accounts can be complicated service arrangements and there are considerable structural differences among the managed account options offered by providers. 
By requiring sponsors to request multiple provider options from their record keeper, DOL can help ensure that sponsors thoroughly evaluate managed account providers before they are offered to participants. In addition, providing sponsors with guidance that clarifies standards and suggests actions for prudently selecting and overseeing managed account providers, such as documenting their processes and understanding the strategies used in the managed account, positions sponsors to better navigate their fiduciary responsibilities. Additional guidance also positions sponsors to consider additional factors when choosing to default participants into managed accounts. Supplementing this guidance by requiring providers to furnish consistent performance information to sponsors so that they can more effectively compare providers can assist sponsors in their efforts to provide a beneficial service that could help preserve and potentially enhance participants’ retirement security. Finally, DOL can also help participants evaluate whether their managed account service is beneficial. Without standardized performance and benchmarking information, participants may not be able to effectively assess the performance of their managed account and determine whether the additional fee for the service is worth paying. For participants who opt into managed accounts, this information could help them more effectively assess the performance of their managed account and compare that performance to other professionally managed alternatives that may be less expensive, such as target date funds. Alternatively, for participants who are defaulted into managed accounts, this information could be valuable when they start to pay more attention to their retirement savings. 
To better protect plan sponsors and participants who use managed account services, we recommend that the Secretary of Labor direct the Assistant Secretary for the Employee Benefits Security Administration (EBSA) to:

a) Review provider practices related to additional managed account services offered to participants in or near retirement, with the aim of determining whether conflicts of interest exist and, if it determines it is necessary, taking the appropriate action to remedy the issue.

b) Consider the fiduciary status of managed account providers when they offer services on an opt-in basis and, if necessary, make regulatory changes or provide guidance to address any issues.

To help sponsors who offer managed account services or who are considering doing so better protect their 401(k) plan participants, we recommend that the Secretary of Labor direct the Assistant Secretary for EBSA to:

c) Provide guidance to plan sponsors for selecting and overseeing managed account providers that addresses: (1) the importance of considering multiple providers when choosing a managed account provider, (2) factors to consider when offering managed accounts as a QDIA or on an opt-in basis, and (3) approaches for evaluating the services of managed account providers.

d) Require plan sponsors to request from record keepers more than one managed account provider option, and notify the Department of Labor if record keepers fail to do so.

To help sponsors and participants more effectively assess the performance of managed accounts, we recommend that the Secretary of Labor direct the Assistant Secretary for EBSA to:

e) Amend participant disclosure regulations to require that sponsors furnish standardized performance and benchmarking information to participants. To accomplish this, EBSA could promulgate regulations that would require sponsors who offer managed account services to provide their participants with standardized performance and benchmarking information on managed accounts.
For example, sponsors could periodically furnish each managed account participant with the aggregate performance of participants’ managed account portfolios and returns for broad-based securities market indexes and applicable customized benchmarks, based on those benchmarks provided for the plan’s designated investment alternatives.

f) Amend service provider disclosure regulations to require that providers furnish standardized performance and benchmarking information to sponsors. To accomplish this, EBSA could promulgate regulations that would require service providers to disclose to sponsors standardized performance and benchmarking information on managed accounts. For example, providers could, prior to selection and periodically thereafter, as applicable, furnish sponsors with aggregated returns for generalized conservative, moderate, and aggressive portfolios, actual managed account portfolio returns for each of the sponsor’s participants, and returns for broad-based securities market indexes and applicable customized benchmarks, based on those benchmarks provided for the plan’s designated investment alternatives.

We provided a draft of this report to the Department of Labor, the Department of the Treasury, the Securities and Exchange Commission, and the Consumer Financial Protection Bureau for review and comment. The Department of the Treasury and the Consumer Financial Protection Bureau did not have any comments. DOL and SEC provided technical comments, which we have incorporated where appropriate. DOL also provided written comments, which are reproduced in appendix IV. As stated in its letter, DOL agreed with our recommendations and will consider each of them as it moves forward with a number of projects.
In response to our recommendation that DOL review provider practices related to additional managed account services offered to participants in or near retirement to determine whether conflicts of interest exist, DOL agreed to include these practices in its current review of investment advice conflicts of interest, noting that such conflicts continue to be a concern. Regarding our second recommendation, to consider the fiduciary status of managed account providers when they offer services on an opt-in basis, DOL agreed to review existing guidance and consider whether additional guidance is needed in light of the various business models we described. By considering managed account service provider practices and fiduciary roles in its current efforts and taking any necessary action to address potential issues, we believe DOL will help ensure that sponsors and participants receive unconflicted managed account services from qualified managers. DOL also agreed to consider our other recommendations in connection with its current regulatory project on standards for brokerage windows in participant-directed individual account plans. We believe that this project may be a good starting point for requesting additional information and considering adjustments to those managed account services participants obtain from advisers through brokerage windows. As we noted in our report, we did not include these types of managed accounts in our review because the plan sponsor is not usually involved in the selection and monitoring of these advisers. Since participants can obtain managed account services without using a brokerage window, we encourage DOL to also consider our third and fourth recommendations outside of the context of brokerage windows. Providing guidance to sponsors for selecting and overseeing managed account providers, as suggested by our third recommendation, may help sponsors understand their fiduciary responsibilities with respect to managed accounts. 
Similarly, requiring plan sponsors to ask for more than one choice of managed account provider, as suggested by our fourth recommendation, could encourage record keepers to offer additional choices. By taking the steps outlined in these recommendations, DOL can help ensure that participants are being offered effective managed account services for reasonable fees. With respect to our recommendation requiring plan sponsors to ask for more than one choice of managed account provider, DOL noted that it needs to review the extent of its legal authority to effectively require plans to have more than one managed account service provider. We continue to believe that the action we suggest in our recommendation (that DOL simply require plan sponsors to ask for more than one choice of provider, which is slightly different from how DOL has characterized it) may be an effective method of broadening plan sponsors’ choices of managed account providers. However, we agree that DOL should examine the scope of its existing authority in considering how it might implement this recommendation. Finally, DOL agreed to consider our recommendations on the disclosure of performance and benchmarking information on managed accounts to participants and sponsors in connection with its open proposed rulemaking project involving the qualified default investment alternative and participant-level disclosure regulations. We believe that DOL’s consideration of these recommendations in connection with this rulemaking project will be helpful for participants and sponsors, and encourage DOL to include managed accounts in this rulemaking. Although managed accounts are different from target date funds in multiple ways, as presented in our report, we believe that managed account providers can and should provide some level of performance and benchmarking information to sponsors (and sponsors to participants) to describe how managed accounts perform over time and the risks associated with the service.
In addition, to the extent that managed accounts offered on an opt-in basis are not covered by DOL’s project, we encourage DOL to consider adopting similar changes to the participant-level disclosures for those managed accounts that are not governed by QDIA regulations. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to appropriate congressional committees, the Secretary of Labor, the Secretary of the Treasury, the Chair of the Securities and Exchange Commission, the Director of the Consumer Financial Protection Bureau, and other interested parties. In addition, the report will be available at no charge on GAO’s website at www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.

Our objectives for this study were to determine (1) how service providers structure managed accounts, (2) the advantages and disadvantages of managed accounts for 401(k) participants, and (3) the challenges, if any, that plan sponsors face in selecting and overseeing managed account providers. To answer our research objectives we undertook several different approaches. We reviewed relevant research and federal laws, regulations, and guidance on managed accounts in 401(k) plans. We reviewed available documentation on the structure of managed accounts in 401(k) plans and the role of service providers, including Securities and Exchange Commission (SEC) filings of the Form ADV by 30 record keepers, managed account providers, and other related service providers.
We interviewed industry representatives and service providers involved with managed accounts—including record keepers, academics, industry research firms, and participant advocacy groups—and government officials from the Department of Labor’s Employee Benefits Security Administration (EBSA), SEC, the Department of the Treasury, and the Consumer Financial Protection Bureau. To examine key issues related to how managed accounts in 401(k) plans are structured, we conducted in-depth case studies of eight selected managed account providers. Since we were unable to identify a comprehensive list of managed account providers that provide services to 401(k) plans, to select providers for case studies we first developed a list of 14 managed account providers based on discussions with two industry research firms and our own analysis of information from record keeper websites and other publicly available documentation. To assess the reliability of these data, we interviewed the two industry research firms and compared their information with the results of our analysis for corroboration and reasonableness. We determined that the data we used were sufficiently reliable for selecting managed account providers for case studies. From the list of 14 providers, we selected 10 providers based on their size, location, and legal and fee structures, from which we used eight as the basis for our case studies. According to our estimates, the eight managed account providers we included in the case studies represented over 95 percent of the managed account industry in defined contribution plans, as measured by assets under management in 2013. In conducting case studies of managed account providers, we interviewed representatives of the managed account provider and chose five providers for site visits based on their locations and size. 
We also reviewed publicly available documentation describing the nature of the managed account and sample reports furnished by providers, confirmed the type of information these providers consider when managing a participant’s account, and analyzed fee data furnished by managed account providers. To assess the reliability of the fee data furnished by managed account providers, we corroborated and assessed the completeness of reported fee data based on information in provider SEC filings and any other relevant documentary evidence, when possible. We determined that the data were sufficiently reliable for depicting the range and types of fees charged to sponsors and participants by providers. In addition, to further understand the different strategies and structures of managed accounts, we developed and submitted five hypothetical participant scenarios in one hypothetical plan to the eight service providers and asked them to provide example asset allocations, and advice if practical, for those participants. Seven of the eight managed account providers completed and returned asset allocations to us. See appendix II for additional detail on the development of hypothetical scenarios and results from this work. To illustrate potential performance outcomes for participants in managed accounts, we used available data on actual managed account rates of return and fees to show how managed accounts could affect 401(k) account balances over 20 years. We developed two scenarios, isolating the effects of variability in the following factors:

1. Managed account rates of return – We used annual average managed account rates of return ranging from -0.1 percent to 2.4 percent, based on published performance data. We compared the change in account balances for those managed account rates of return with the change in account balances for a 1.4 percent rate of return experienced by participants who directed their own 401(k) investments.

2. Managed account fees – We used different fee levels obtained from published reports and provider interviews ranging from a low additional annual fee of 0.08 percent to a 1 percent annual fee. We compared fee totals and ending account balances for varying fee levels with those of participants who did not pay the additional fee because they directed their own 401(k) investments.

For each scenario, we held all other factors constant by assuming that the participant’s starting account balance was $17,000 and starting salary was $40,000, the salary increased at a rate of 1.75 percent per year, and the participant saved 9.7 percent of their salary each year. To the extent possible, we developed scenarios using information provided to us during interviews with industry representatives or found in published reports on managed accounts or on other economic factors. To assess the reliability of these data, we considered the reliability and familiarity of the source of the data or information and, when necessary, interviewed representatives of those sources about their methods, internal controls, and results. Based on these interviews and our review of published data, we determined that the data we used were sufficiently reliable for use in these illustrations. Because this work presents simplified illustrations of potential effects on participants over time, we used nominal dollar amounts over 20 years and did not take into account inflation or changes in interest rates. Similarly, to minimize effects of percentage growth/loss sequencing on account balances, we applied the same rates of return to each of the 20 years for each scenario. The rates of return we used in both scenarios already incorporated different asset allocations for participants with a managed account or a self-directed 401(k) account. This work does not attempt to specify or adjust these specific asset allocations.
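The two illustrative scenarios can be reproduced with a simple compounding loop. This is a minimal sketch, not GAO’s actual model: it assumes the annual contribution is added before growth is applied and that the managed account fee simply reduces the annual rate of return; both conventions are our own assumptions.

```python
# Illustrative sketch of the report's two scenarios: project a 401(k)
# balance over 20 years from a $17,000 starting balance, a $40,000
# starting salary growing 1.75% per year, and a 9.7% savings rate,
# with a constant annual rate of return net of any managed account fee.

def project_balance(annual_return, annual_fee=0.0, years=20,
                    start_balance=17_000.0, start_salary=40_000.0,
                    salary_growth=0.0175, savings_rate=0.097):
    """Ending balance after `years`, applying the same net return each year."""
    balance, salary = start_balance, start_salary
    for _ in range(years):
        balance += salary * savings_rate              # annual contribution
        balance *= 1 + annual_return - annual_fee     # growth net of fee
        salary *= 1 + salary_growth                   # annual raise
    return balance

# Scenario 1: vary the rate of return (-0.1% to 2.4% for managed accounts
# vs. 1.4% for self-directed participants), with no additional fee.
for r in (-0.001, 0.014, 0.024):
    print(f"return {r:+.1%}: ${project_balance(r):,.0f}")

# Scenario 2: vary the additional managed account fee (0.08% to 1.0%)
# at a fixed 1.4% rate of return.
for fee in (0.0, 0.0008, 0.01):
    print(f"fee {fee:.2%}: ${project_balance(0.014, fee):,.0f}")
```

Because the same net return is applied every year, the sketch also mirrors the report’s choice to ignore growth/loss sequencing effects.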
To identify the advantages and disadvantages of managed accounts for 401(k) plan participants and any challenges sponsors face in selecting and overseeing managed account providers, we conducted semi-structured interviews with 12 plan sponsors. Our process for interviewing plan sponsors involved multiple steps, as outlined below. Since a comprehensive list of sponsors that offer managed accounts did not exist at the time of our review, to select sponsors for semi-structured interviews, we conducted a non-generalizable survey facilitated by PLANSPONSOR, a member organization. The survey included questions about sponsors’ 401(k) plans, such as the amount of assets included in the 401(k) plan and the number of participants in the plan, and the reasons why sponsors decided to offer, or not offer, managed accounts to 401(k) plan participants. To minimize errors arising from differences in how survey questions might be interpreted and to reduce variability in responses that should be qualitatively the same, we conducted pretests with industry representatives. Based on feedback from these pretests, we revised the survey in order to improve question clarity. PLANSPONSOR included a link to our survey in an e-mail that was sent to approximately 60,000 of its subscribers. In addition, PLANSPONSOR promoted the survey eight times over 4 weeks between June 3 and June 28, 2013. A record keeper and one industry association also agreed to forward a link to our survey to their clients and members, respectively. Fifty-seven sponsors completed our survey, and 25 of them provided contact information, indicating they were willing to speak with us. Forty-eight sponsors indicated that they offer managed accounts to their 401(k) plan participants, and 20 of these sponsors provided us with their contact information. Nine sponsors indicated that they do not offer managed accounts to their 401(k) plan participants, and five of these sponsors provided us with their contact information.
We reviewed the survey responses of those sponsors willing to speak with us and selected sponsors to interview based on the following characteristics:

- Plan size (assets in the plan, number of participants)
- Managed account provider
- Enrollment method (Qualified Default Investment Alternative, or QDIA, vs. opt-in)
- Length of time sponsors have been offering managed accounts

To obtain a variety of perspectives, we selected at least two sponsors with any given characteristic to the extent possible. For instance, we selected several (1) sponsors of varying sizes in terms of the amount of assets included in their 401(k) plans and the number of plan participants; (2) sponsors that use different managed account providers; and (3) sponsors that have been offering managed accounts for more than 5 years. Also, we selected one sponsor that offered managed accounts as a default option. In total, we selected 10 sponsors that offer managed accounts and 2 sponsors that do not offer managed accounts, as shown in table 7. We developed semi-structured interview questions to capture information on how sponsors learn about and select managed accounts, how they oversee managed accounts, and the advantages and disadvantages of managed accounts for participants. We developed separate questions for sponsors offering managed accounts and those not offering managed accounts. We shared the interview questions with three sponsors before we began conducting the semi-structured interviews to ensure that the questions were appropriate and understandable. We made no substantive changes to the questions based on this effort. We interviewed 10 sponsors that offer managed accounts and 2 sponsors that do not offer managed accounts. As part of our interview process, we also requested and reviewed relevant documentation from plan sponsors such as quarterly managed account reports from managed account providers or record keepers.
As part of our approach for determining the advantages and disadvantages of managed accounts for 401(k) plan participants, we developed a non-generalizable online survey to directly obtain participant perspectives on managed accounts, such as the advantages and disadvantages of managed accounts for 401(k) plan participants and participants’ level of satisfaction with their managed account offering. However, we did not receive any completed responses to our survey. The survey was conducted on a rolling basis from August 1, 2013 to February 25, 2014—a link to the survey was distributed at various points in time. We conducted this performance audit from October 2012 through June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To understand the different strategies and structures of managed accounts, we developed and submitted five hypothetical participant scenarios in one hypothetical plan to the eight managed account providers chosen for our case studies. Table 8 shows basic information provided for the hypothetical participant scenarios 1, 2, and 3. Table 9 shows the additional personalized information provided to managed account providers for hypothetical participant scenarios 1 and 3. Table 10 shows some of the hypothetical plan level information we compiled for scenario development. In addition, to generate hypothetical plan information, we selected 14 hypothetical plan investment options from various asset classes, as shown in table 11. We selected these mutual funds to represent a range of asset classes and based on available information from April 2013 about whether these funds could be found in 401(k) plans. 
We developed the hypothetical scenarios based on data and information from industry representatives—including research firms, other industry groups, and providers—and a calculator and statistics provided by a number of government agencies. To assess the reliability of these data, we considered the reliability and familiarity of the source of the data or information and, when necessary, interviewed representatives of those sources about their methods, internal controls, and results. We determined that the data we used were sufficiently reliable for developing hypothetical participant- and plan-level scenarios. We asked all eight managed account providers chosen for our case studies to provide example asset allocations and advice, if practical, for all five hypothetical participant scenarios. Seven of the eight managed account providers completed and returned asset allocations to us for the hypothetical scenarios. Five of the seven providers who sent allocations furnished two allocations for each scenario, but each gave different reasons for doing so. One of the providers furnished two allocations for each scenario because they actively manage participant allocations given changes in market conditions and their allocations could generally range within the two extremes. Another provider furnished two allocations for each scenario assuming different initial holdings because, for that provider’s strategy, a person’s initial holdings of plan investment options influence the provider’s recommended allocations, even though both of these allocations have the same overall risk and return characteristics. In some of the figures presenting results of this work, we have included one or both of these two providers’ second allocations. 
For the other three providers, we chose to include only one of their asset allocations in the figures presenting the results of this work because the alternate allocations either did not pertain to the managed account service by itself or did not reflect the full services offered by the managed account. We did, however, incorporate the more general understanding we gained from these alternate asset allocations in our report findings. In addition, a number of providers’ systems required that they make certain assumptions about participants outside of the hypothetical scenario information we provided. In these cases, the assumptions they made differed, sometimes substantially, and this may have affected their asset allocation results. For example, to generate a participant’s goal, providers used varying assumptions of a participant’s annual salary growth—from 1.5 to 3.5 percent. We did not attempt to categorize or eliminate any inconsistencies in provider strategies, but instead report their results to show the variation that a participant may experience. As shown in figure 13, the median values of all providers’ allocations show a downward trend in asset allocations to equity assets and an upward trend in asset allocations to fixed income and/or cash-like assets as participants age. For each hypothetical participant, we found that providers varied widely in their recommendations of specific investment options, but participants could be similarly allocated to asset classes, such as cash and cash equivalents, equity, and fixed income. For the hypothetical 30-year-old participant, select asset allocations were presented in the report at figure 5, and all allocations to specific investment options are shown in figure 14. The results were similar for the 45- and 57-year-old hypothetical participants. 
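The effect of the salary-growth assumptions noted above (1.5 to 3.5 percent) can be illustrated with a simple projection. The starting salary, deferral rate, and career length below are hypothetical values chosen for illustration, not figures from our scenarios:

```python
# Illustrative sketch (not any provider's actual model): how a provider's
# salary-growth assumption changes a hypothetical participant's projected
# lifetime 401(k) contributions. Starting salary, deferral rate, and
# career length are assumptions chosen for illustration.

def projected_contributions(salary, growth_rate, years, deferral=0.06):
    """Sum annual deferrals while salary grows at a fixed annual rate."""
    total = 0.0
    for _ in range(years):
        total += salary * deferral
        salary *= 1 + growth_rate
    return total

START_SALARY, YEARS = 75_000, 35
low = projected_contributions(START_SALARY, 0.015, YEARS)   # 1.5% growth
high = projected_contributions(START_SALARY, 0.035, YEARS)  # 3.5% growth
print(f"1.5% growth: ${low:,.0f}  3.5% growth: ${high:,.0f} "
      f"({high / low - 1:.0%} more)")
```

Even this small spread in growth assumptions changes projected career contributions by more than 40 percent under these assumptions, which helps explain why providers' savings goals, and therefore allocations, can differ.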
Starting from an initial asset allocation of 55 percent equity and 45 percent fixed income, providers reported varying asset allocations to investment options for the 45-year-old hypothetical participant, as shown in figure 16, and allocations at the asset class level are shown in figure 17. Starting from an initial asset allocation of 43 percent equity and 57 percent fixed income, figure 18 shows variation in allocations to investment options for the 57-year-old hypothetical participant and figure 19 shows variation in allocations at the asset class level. Charles A. Jeszeck, Director, (202) 512-7215 or [email protected]. In addition to the individual above, Tamara Cross (Assistant Director), Jessica Gray (Analyst-in-Charge), Ted Burik, Sherwin Chapman, and Laura Hoffrey made significant contributions to this report. In addition, Cody Goebel, Sharon Hermes, Stuart Kaufman, Kathy Leslie, Thomas McCool, Sheila McCoy, Mimi Nguyen, Roger Thomas, Frank Todisco, Walter Vance, and Kathleen Van Gelder also contributed to this report.
401(k) plan sponsors have increasingly offered participants managed accounts— services under which providers manage participants' 401(k) savings over time by making investment and portfolio decisions for them. These services differ from investment options offered within 401(k) plans. Because little is known about whether managed accounts are advantageous for participants and whether sponsors understand their own role and potential risks, GAO was asked to review these services. GAO examined (1) how providers structure managed accounts, (2) their advantages and disadvantages for participants, and (3) challenges sponsors face in selecting and overseeing providers. In conducting this work, GAO reviewed relevant federal laws and regulations and surveyed plan sponsors. GAO interviewed government officials, industry representatives, other service providers, and 12 plan sponsors of varying sizes and other characteristics. GAO also conducted case studies of eight managed account providers with varying characteristics by, in part, reviewing required government filings. GAO's review of eight managed account providers who, in 2013, represented an estimated 95 percent of the industry involved in defined contribution plans, showed that they varied in how they structured managed accounts, including the services they offered and their reported fiduciary roles. Providers used varying strategies to manage participants' accounts and incorporated varying types and amounts of participant information. In addition, GAO found some variation in how providers reported their fiduciary roles. One of the eight providers GAO reviewed had a different fiduciary role than the other seven providers, which could ultimately provide less liability protection for sponsors for the consequences of the provider's choices. 
The Department of Labor (DOL) requires managed account providers who offer services to defaulted participants to generally have the type of fiduciary role that provides certain levels of fiduciary protection for sponsors and assurances to participants of the provider's qualifications. DOL does not have a similar explicit requirement for providers who offer services to participants on an opt-in basis. Absent explicit requirements from DOL, some providers may actively choose to structure their services to limit the fiduciary liability protection they offer. According to providers and sponsors, participants in managed accounts receive improved diversification and experience higher savings rates compared to those not enrolled in the service; however, these advantages can be offset by paying additional fees over time. Providers charge additional fees for managed accounts that range from $8 to $100 on every $10,000 in a participant's account. As a result, some participants pay a low fee each year while others pay a comparatively large fee on their account balance. Using the limited fee and performance data available, GAO found that the potential long-term effect of managed accounts could vary significantly, sometimes resulting in managed account participants paying substantial additional fees and experiencing lower account balances over time compared to other managed account participants. Further, participants generally do not receive performance and benchmarking information for their managed accounts. Without this information, participants cannot accurately evaluate the service and make effective decisions about their retirement investments. Even though DOL has required disclosure of similar information for 401(k) plan investments, it generally does not require sponsors to provide this type of information for managed accounts. 
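The long-term effect of the fee range cited above ($8 versus $100 per $10,000 in a participant's account, i.e., 0.08 versus 1.00 percent of assets annually) can be sketched with a simple compounding calculation. The gross return, horizon, and starting balance below are hypothetical assumptions, not figures from GAO's analysis:

```python
# Illustrative sketch of the fee range cited above: $8 vs. $100 per
# $10,000 in assets (0.08% vs. 1.00% annually). The 6% gross return,
# 30-year horizon, and $10,000 starting balance are hypothetical
# assumptions, not data from GAO's analysis.

def balance_after(years, fee, start=10_000, gross_return=0.06):
    """Compound a balance while deducting an annual asset-based fee."""
    bal = start
    for _ in range(years):
        bal *= 1 + gross_return - fee
    return bal

low_fee = balance_after(30, fee=0.0008)   # $8 per $10,000
high_fee = balance_after(30, fee=0.0100)  # $100 per $10,000
print(f"After 30 years: ${low_fee:,.0f} at the low fee vs. "
      f"${high_fee:,.0f} at the high fee")
```

Under these assumptions the two fee levels produce materially different ending balances from the same gross returns, consistent with the report's point that fees can offset the service's advantages over time.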
Sponsors are challenged by insufficient guidance and inconsistent performance information when selecting and overseeing managed account providers. DOL has not issued guidance specific to managed accounts on how sponsors should select and oversee providers, as it has done for other funds. GAO found that the absence of guidance for managed accounts has led to inconsistency in sponsors' procedures for selecting and overseeing providers. Without better guidance, plan sponsors may be unable to select a provider who offers an effective service for a reasonable fee. In addition, DOL generally does not require providers to furnish sponsors with performance and benchmarking information for managed accounts, as it does for investments available in a plan, although some providers do furnish similar information. Without this information, sponsors cannot effectively compare providers when making a selection or determine whether managed accounts are positively affecting participants' retirement savings. Among other things, GAO recommends that DOL consider provider fiduciary roles, require disclosure of performance and benchmarking information to plan sponsors and participants, and provide guidance to help sponsors better select and oversee managed account providers. In response, DOL agreed with GAO's recommendations and will consider changes to regulations and guidance to address any issues.
In each of our audits and related investigations, we found thousands of federal contractors that owed billions of dollars of federal taxes. Specifically:

In February 2004, we testified that DOD and IRS records showed that about 27,000 DOD contractors owed nearly $3 billion in federal taxes. About 42 percent of this $3 billion represented unpaid payroll taxes.

In June 2005, we testified that about 33,000 civilian agency federal contractors owed over $3.3 billion in federal taxes. Over a third of the $3.3 billion represented unpaid payroll taxes.

In March 2006, we testified that over 3,800 GSA contractors owed about $1.4 billion in federal taxes. About one-fifth of the $1.4 billion represented unpaid payroll taxes.

Because federal contractors may do business with more than one federal agency, some federal contractors that owe tax debts may be included in more than one analysis concerning DOD, GSA, and civilian federal contractors that abuse the federal tax system. In each of our audits, we found that government contractors owed a substantial amount of unpaid payroll taxes. Employers are subject to civil and criminal penalties if they do not remit payroll taxes to the federal government. When an employer withholds taxes from an employee’s wages, the employer is deemed to have a fiduciary responsibility to hold these funds “in trust” for the federal government until the employer makes a federal tax deposit in that amount. To the extent these withheld amounts are not forwarded to the federal government, the employer is liable for these amounts, as well as the employer’s matching Federal Insurance Contribution Act contributions for Social Security and Medicare. Individuals employed by the contractor (e.g., owners or officers) may be held personally liable for the withheld amounts not forwarded and assessed a civil monetary penalty known as a trust fund recovery penalty. 
Willful failure to remit payroll taxes can also be a criminal felony offense punishable by imprisonment of up to 5 years, while the failure to properly segregate payroll tax funds can be a criminal misdemeanor offense punishable by imprisonment of up to a year. The law imposes no penalties upon an employee for the employer’s failure to remit payroll taxes since the employer is responsible for submitting the amounts withheld. The Social Security and Medicare trust funds are subsidized or made whole for unpaid payroll taxes by the federal government’s general fund. Thus, personal income taxes, corporate income taxes, and other government revenues are used to pay for these shortfalls to the Social Security and Medicare trust funds. Although each of our estimates for taxes owed by federal contractors was a significant amount, it understates the full extent of unpaid taxes owed by these contractors. The IRS tax database reflected only the amount of unpaid federal taxes either reported on a tax return or assessed by IRS through its various enforcement programs. The IRS database did not reflect amounts owed by businesses and individuals that have not filed tax returns and for which IRS has not assessed tax amounts due. Our analysis did not attempt to account for businesses or individuals that did not file required payroll or other tax returns or that purposely underreported income and were not specifically identified by IRS as owing the additional federal taxes. According to IRS, underreporting of income accounted for more than 80 percent of the estimated $345 billion annual gross tax gap. As a result of the work we performed for the Senate Permanent Subcommittee on Investigations, Committee on Homeland Security and Governmental Affairs, we made numerous recommendations to DOD and civilian agencies to improve their controls over levying payments to contractors with tax debt. 
Many of those recommendations have been implemented and have resulted in additional collections of unpaid tax debt. We also referred 122 contractors to IRS for further investigation and prosecution. In our previous testimonies, we discussed the results of our in-depth audits and related investigations of 122 federal contractors with outstanding tax debt. For each of these 122 federal contractors, we found instances of abusive or potentially criminal activity related to the federal tax system. Many of our case study contractors were small, closely held companies that operated in wage-based industries, such as security, weapon components, space and aircraft parts, building maintenance, computer services, and personnel services. These 122 federal contractors provided goods and services to a number of federal agencies including DOD, GSA, the National Aeronautics and Space Administration, and the Departments of Homeland Security, Justice, and Veterans Affairs. The types of contracts that were awarded to these contractors also included products or services related to a variety of government functions including law enforcement, disaster relief, and national security. Most of the contractors in our case studies owed payroll taxes, with some federal tax debts dating back nearly 20 years. However, rather than fulfilling their role as “trustees” and forwarding these funds to IRS, many of these federal contractors used the funds for personal gain or to fund their contractor operations. Our investigations also revealed that some owners or officers of our case study federal contractors with unpaid taxes were associated with other businesses that had unpaid federal taxes. For example, we reported that one of our case study contractors had a 20-year history of opening a business, failing to remit taxes withheld from employees to IRS, and then closing the business, only to start the cycle all over again and incur more tax debts almost immediately through a new business. 
We also found that a number of owners or officers of our case study contractors had significant personal assets, including a sports team, commercial properties, multimillion dollar houses, and luxury vehicles. Several owners also gambled hundreds of thousands of dollars at the same time they were not paying the taxes that their businesses owed. Despite owning substantial assets and gambling significant amounts of money, the owners or officers did not ensure the payment of the delinquent taxes of their businesses, and sometimes did not pay their own individual income taxes. Table 1 provides summary information on 10 of our 122 case study contractors that we discussed in our previous testimonies and related reports. The following provides additional detailed information from our previous testimonies on case numbers 1, 4, and 8 summarized in table 1:

Case # 1: In February 2004, we testified on a business that had nearly $10 million in unpaid federal taxes, and was contracted by DOD to provide services such as trash removal, building cleaning, and security at U.S. military bases. The contractor reported that it paid the owner a six-figure income and that the owner had borrowed nearly $1 million from the business. The owner bought a boat, several cars, and a home outside the country. This contractor went out of business in 2003 after state tax authorities seized its bank account for failure to pay state taxes. The contractor subsequently transferred its employees to a relative’s business, which also had unpaid federal taxes, and continued submitting invoices and receiving payments from DOD on the previous contract.

Case # 4: In June 2005, we testified on a case that involved many related companies that provided health care services to the Department of Veterans Affairs (VA). During fiscal year 2004, these related companies received over $300,000 in federal contract payments. 
The related companies had different names, operated in a number of different locations, and used several different Taxpayer Identification Numbers (TIN). However, they shared a common owner and contact address. At the time they were paid by VA, the businesses collectively owed more than $18 million in unpaid federal taxes—of which nearly $17 million was unpaid federal payroll taxes dating back to the mid-1990s. During the early 2000s, at the time when the owner’s business and related companies were still incurring payroll tax debts, the owner purchased a number of multimillion dollar properties, an unrelated business, and a number of luxury vehicles. Our investigation also determined that real estate holdings registered to the owner totaled more than $30 million. Case # 8: In March 2006, we testified on a GSA contractor that provided security services for a civilian agency. Our investigative work indicated that an owner of the company made multiple cash withdrawals, totaling close to $1 million, while owing payroll taxes. In addition, the company’s owner also diverted the cash withdrawals to fund an unrelated business and purchased a men’s gold bracelet worth over $25,000. The company’s owner has been investigated for embezzlement and fraud. Federal law and regulations, as reflected in the FAR, do not prohibit contractors with unpaid federal taxes from receiving contracts from the federal government. Although the FAR provides that federal agencies are restricted to doing business with responsible contractors, it does not require federal agencies to deny the award of contracts to contractors that abuse the federal tax system, unless the contractor was specifically debarred or suspended by a debarring official for specific actions, such as conviction for tax evasion. 
The FAR specifies that unless compelling reasons exist, agencies are prohibited from soliciting offers from, or awarding contracts to, contractors who are debarred, suspended, or proposed for debarment for various reasons, including tax evasion. Conviction for tax evasion is cited as one of the causes for debarment and indictment for tax evasion is cited as a cause for suspension. The deliberate failure to remit taxes, in particular payroll taxes, is a felony offense, and could result in a company being debarred or suspended if the debarring official determines it affects the present responsibility of the government contractor. Most of the contractors in our case studies owed payroll taxes, situations in which willful failure to remit payroll taxes (a criminal felony offense) or failure to properly segregate payroll taxes (a criminal misdemeanor offense) may apply. At the time of our review, none of the 122 federal contractors described in our previous case study work were debarred from government contracts, despite conducting abusive and potentially criminal activities related to the tax system. As part of the contractor responsibility determination for prospective contractors, the FAR also requires contracting officers to determine whether a prospective contractor meets several specified standards, including determination as to whether a contractor has adequate financial resources and a satisfactory record of integrity and business ethics. However, the FAR does not require contracting officers to consider tax debt in making this determination. Because of statutory restrictions on the disclosure of taxpayer information, even if contracting officers were required to consider tax debts in contractor qualification determinations, contracting officers do not currently have access to tax debt information unless reported by prospective contractors themselves or disclosed in public records. 
Consequently, unless a prospective contractor consents, contracting officers do not have ready access to information on unpaid tax debts to assist in making contractor qualification determinations with respect to financial capability, ethics, and integrity. Further, contracting officers do not routinely obtain and use publicly available information on contractor federal tax debt in making contractor qualification determinations. Federal law generally does not permit IRS to disclose taxpayer information, including tax debts. Thus, unless the taxpayer provides consent, certain tax debt information generally can only be discovered from public records when IRS files a federal tax lien against the property of a tax debtor. Such liens may appear in credit reports; however, contracting officers are not required to obtain credit reports. In the instances where they are obtained, contracting officers generally focus on the contractor’s credit score rather than any liens or other public information showing federal tax debts. Moreover, although some lien information is publicly available, IRS does not file tax liens on all tax debtors, nor does IRS have a central repository of tax liens to which contracting officers have access. Further, the available information on tax liens may be of questionable reliability because of deficiencies in IRS’s internal controls that have resulted in IRS not always releasing tax liens from property when the tax debt has been satisfied. Federal contractors who owe tax debts have an unfair competitive advantage over contractors who pay their fair share. This is particularly true for federal contractors in wage-based industries, such as security and moving services. By not remitting these taxes, the contractors keep withheld payroll taxes and matching contributions, which typically total over 15 percent of each employee’s wages, thereby reducing the contractor’s costs. In this way, contractors who do not pay their taxes do not bear the same costs that tax compliant contractors have when competing on contracts. 
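The cost advantage described above can be made concrete with hypothetical numbers. The annual labor cost below is an assumption for illustration; the 15 percent figure follows the text's description of combined withheld payroll taxes:

```python
# Hypothetical illustration of the pricing advantage described above.
# The $1 million annual labor cost is an assumption for illustration;
# the 15 percent figure follows the text (withheld payroll taxes plus
# matching contributions as a share of wages).

annual_wages = 1_000_000                       # labor cost of a wage-based contract
kept_taxes = annual_wages * 0.15               # payroll taxes withheld but never remitted

compliant_cost = annual_wages                  # tax-compliant contractor bears the full cost
noncompliant_cost = annual_wages - kept_taxes  # delinquent contractor pockets the taxes

advantage = kept_taxes / compliant_cost
print(f"A delinquent contractor can underbid by up to ${kept_taxes:,.0f} "
      f"({advantage:.0%}) on this labor base")
```

This margin is why, as the testimony notes, tax-delinquent contractors in wage-based industries can price below compliant competitors.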
As a result, tax delinquent contractors can set prices for their goods and services lower than their tax compliant competitors. In March 2006, we testified that we found some GSA contractors that did not fully pay their payroll taxes but were awarded contracts, based on price, over competing contractors that did not have any unpaid federal taxes. Federal contractors’ tax debts were not considered in contract award decisions. For example, a GSA Schedule contractor was awarded two contracts for services related to moving office furniture and equipment. On both contracts, the contractor’s offer for services was significantly less than three competing bids on the first contract and two competing bids on the second contract. The contractor owed about $700,000 in taxes (mostly payroll taxes) while its competitors did not owe any federal taxes. The Civilian Agency Acquisition Council and the Defense Acquisition Regulations Council (councils) have proposed to amend the FAR to require prospective contractors to certify whether or not they have, within a 3-year period preceding the offer, been convicted of or had a civil judgment rendered against them for violating any tax law or failing to pay any tax, or been notified of any delinquent taxes for which they still owe the tax. In addition, the prospective contractor would be required to certify whether or not they have received a notice of a tax lien filed against them for which the liability remains unsatisfied or the lien has not been released. The proposed rule also adds the following as additional causes for suspension or debarment: delinquent taxes, unresolved tax liens, and a conviction of or civil judgment for violating tax laws or failing to pay taxes. By issuing the proposed rule on tax delinquency, the councils have acknowledged the importance of delinquent tax debts in the consideration of contract awards. 
The proposed rule requires offerors to certify whether they have or have not, within a 3-year period preceding the offer, been notified of any unresolved or unsatisfied tax debt or liens. Contracting officers generally cannot verify whether prospective contractors certifying that they have not received notice of unresolved or unsatisfied tax debts actually owe delinquent federal taxes, unless that information is disclosed in public records or unless the offeror provides consent for IRS to disclose its tax records. In March 2006, we testified that in one contractor file we reviewed, a GSA official did ask the prospective contractor about a federal tax lien. The prospective contractor provided documentation to GSA demonstrating the satisfaction of the tax liability covered by that lien. However, because the GSA official could not obtain information from the IRS on tax debts, this official was not aware that the contractor had other unresolved tax debts unrelated to this particular tax lien. Over the past several years, we have testified that thousands of federal contractors failed in their responsibility to pay billions of dollars of federal taxes yet continued to get federal contracts. This practice is inconsistent with the fundamental concept that those doing business with the federal government should be required to pay their federal taxes. With the serious fiscal challenges facing our nation, the status quo is no longer an option. Enhanced contractor requirements to pay their taxes would likely increase contractor tax compliance and federal revenues. Federal law seeking to achieve these objectives should provide flexibility to agencies, such as exceptions for contractors critical to national security. Due process and other safeguards should be built into the system to ensure that contractors that pay their federal taxes are not inadvertently denied federal contracts. We look forward to working with the Subcommittee on this important matter. Mr. 
Chairman and Members of the Subcommittee, this concludes our statement. We would be pleased to answer any questions you may have. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Since 1990, GAO has periodically reported on high-risk federal programs that are vulnerable to fraud, waste, and abuse. Two such high-risk areas are managing federal contracts more effectively and assessing the efficiency and effectiveness of federal tax administration. Weaknesses in the tax area continue to expose the federal government to significant losses of tax revenue and increase the burden on compliant taxpayers to fund government activities. Over the last several years, the Senate Permanent Subcommittee on Investigations requested GAO to investigate Department of Defense (DOD), civilian agency, and General Services Administration (GSA) contractors that abused the federal tax system. Based on that work, GAO made recommendations to executive agencies, including recommendations to improve the controls over levying payments to contractors with tax debt (many of these recommendations have been implemented), and referred 122 contractors to IRS for further investigation and prosecution. As requested, this testimony will highlight the key findings from prior testimonies and related reports. This testimony will (1) describe the magnitude of tax debt owed by federal contractors, (2) provide examples of federal contractors involved in abusive and potentially criminal activity related to the federal tax system, and (3) describe current law and proposed federal regulations for screening contractors with tax debts prior to the award of a contract. In our previous audits and related investigations, we reported that thousands of federal contractors had substantial amounts of unpaid federal taxes. Specifically, about 27,000 DOD contractors, 33,000 civilian agency contractors, and 3,800 GSA contractors owed about $3 billion, $3.3 billion, and $1.4 billion in unpaid taxes, respectively. These estimates were understated because they excluded federal contractors that understated their income or did not file their tax returns; however, some contractors may be counted in more than one of these groups. 
As part of this work, we conducted more in-depth investigations of 122 federal contractors and in all cases found abusive and potentially criminal activity related to the federal tax system. Many of these 122 contractors were small, closely held companies that provided a variety of goods and services, including landscaping, consulting, catering, and parts or support for weapons and other sensitive programs for many federal agencies including the departments of Defense, Justice, and Homeland Security. These contractors had not forwarded payroll taxes withheld from their employees and other taxes to IRS. Willful failure to remit payroll taxes is a felony under U.S. law. Furthermore, some company owners diverted payroll taxes for personal gain or to fund their businesses. A number of owners or officers of the 122 federal contractors owned significant personal assets, including a sports team, multimillion dollar houses, a high-performance airplane, and luxury vehicles. Several owners gambled hundreds of thousands of dollars at the same time they were not paying the taxes that their businesses owed. Federal law, as implemented by the Federal Acquisition Regulation (FAR), does not currently require contractors to disclose tax debts or require contracting officers to consider tax debts in making contracting decisions. Federal contractors that do not pay their tax debts could have an unfair competitive advantage because their costs on government contracts are lower than those of tax-compliant contractors. GAO's investigation identified instances in which contractors with tax debts won awards based on a price differential over tax-compliant contractors.
CMS calculates payment rates for each Part B drug using price data that manufacturers report quarterly to the agency. In reporting their price data to CMS, manufacturers are required to account for price concessions, such as discounts and rebates, which can affect the amount health care providers actually pay for a drug. The MMA defined ASP as the average sales price for all U.S. purchasers of a drug, net of volume discounts, prompt pay discounts, cash discounts, charge-backs, and rebates. Certain prices, including prices paid by federal purchasers, are excluded, as are prices for drugs furnished under Medicare Part D. CMS instructs pharmaceutical manufacturers to report data to CMS—within 30 days after the end of each quarter—on the average sales price for each Part B drug sold by the manufacturer. For drugs sold at different strengths and package sizes, manufacturers are required to report price and volume data for each product, after accounting for price concessions. CMS then aggregates the manufacturer-reported ASPs to calculate a national ASP for each drug category. Common drug purchasing arrangements can substantially affect the amount health care providers actually pay for a drug. Physicians and hospitals may belong to group purchasing organizations (GPO) that negotiate prices with wholesalers or manufacturers on behalf of GPO members. GPOs may negotiate different prices for different purchasers, such as physicians, suppliers of DME, or hospitals. In addition, health care providers can purchase covered outpatient drugs from general or specialty pharmaceutical wholesalers or can have direct purchase agreements with manufacturers. In these arrangements, providers may benefit from discounts, rebates, and charge-backs that reduce the actual costs providers incur. Discounts are applied at the time of purchase, while rebates are paid by manufacturers some time after the purchase. 
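The aggregation step described above can be sketched as a volume-weighted average across manufacturers' reports. The prices and unit volumes below are hypothetical, and the 6 percent add-on reflects the ASP+6 percent payment level discussed in this statement:

```python
# Sketch of the aggregation described above: manufacturer-reported ASPs
# rolled up into one volume-weighted national ASP, paid at 6 percent
# above ASP. The per-unit prices and unit volumes are hypothetical.

manufacturer_reports = [
    # (reported average sales price per unit, units sold in the quarter)
    (10.00, 50_000),
    (9.40, 30_000),
    (11.20, 20_000),
]

total_sales = sum(price * units for price, units in manufacturer_reports)
total_units = sum(units for _, units in manufacturer_reports)
national_asp = total_sales / total_units
payment_rate = national_asp * 1.06  # Medicare Part B pays ASP + 6 percent

print(f"National ASP: ${national_asp:.2f}  payment rate: ${payment_rate:.2f}")
```

Weighting by units sold means a manufacturer with a larger share of the market moves the national ASP more than one with a small share, which is why accurate volume reporting matters alongside accurate prices.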
Rebates may be based on the volume of several different products purchased over an extended period of time. Under a charge-back arrangement, the provider negotiates a price with the manufacturer that is lower than the price the wholesaler normally charges for the product, and the provider pays the wholesaler the negotiated price. The manufacturer then pays the wholesaler the difference between the wholesale price and the price negotiated between the manufacturer and the provider. Using an ASP-based method to set prices for Medicare Part B drugs is a practical approach compared with alternative data sources for several reasons. First, unlike AWP, ASP is based on actual transactions, making it a useful proxy for health care providers’ acquisition costs. Whereas AWPs were list prices developed by manufacturers and not required to be related to market prices that health care providers paid for products, ASPs are based on actual sales to purchasers. For similar reasons, payments based on ASPs are preferable to those based on providers’ charges, as charges are made up of costs and mark-ups, and mark-ups vary widely across providers, making estimates of the average costs of drugs across all providers wide-ranging and insufficiently precise. In addition, basing payments on charges does not offer any incentives for health care providers to minimize their acquisition costs. Second, ASPs offer relatively timely information for rate-setting purposes. Manufacturers have 30 days following the completion of each quarter to report new price data to CMS. Before the end of the quarter in which manufacturers report prices, CMS posts the updated Part B drug payment rates, to take effect the first day of the next quarter. Thus, the rates set are based on data from manufacturers that are, on average, about 6 months old. In comparison, rates for other Medicare payment systems are based on data that may be at least 2 years old. 
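The quarterly aggregation described above (volume-weighted averaging of manufacturer-reported prices, net of price concessions, across strengths and package sizes) can be sketched in a few lines. The function names, figures, and two-product example below are hypothetical illustrations, not CMS's actual methodology or data.

```python
# Illustrative sketch of a volume-weighted national ASP, net of price
# concessions. All product data below are invented for the example.

def product_asp(gross_sales, concessions, units):
    """Net price per billing unit for one product (strength/package size)."""
    return (gross_sales - concessions) / units

def national_asp(products):
    """Volume-weighted average net price across all reported products."""
    total_net = sum(p["gross_sales"] - p["concessions"] for p in products)
    total_units = sum(p["units"] for p in products)
    return total_net / total_units

# One hypothetical drug category, sold in two package sizes:
products = [
    {"gross_sales": 500_000.0, "concessions": 50_000.0, "units": 10_000},
    {"gross_sales": 240_000.0, "concessions": 15_000.0, "units": 3_000},
]
print(round(national_asp(products), 2))  # 51.92 dollars per billing unit
```

Under a charge-back arrangement, the manufacturer's payment to the wholesaler is simply the wholesale price minus the provider's negotiated price; that concession would enter the calculation above the same way a discount or rebate does.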
Third, acquiring price data from manufacturers is preferable to surveying health care providers, as the manufacturers have data systems in place that track prices, whereas the latter generally do not have systems designed for that purpose. In our survey of 1,157 hospitals, we found that providing data on drug acquisition costs made substantial demands on hospitals’ information systems and staff. In some cases, hospitals had to collect the data manually, provide us with copies of paper invoices, or develop new data processing to retrieve the detailed price data needed from their automated information systems. Hospital officials told us that, to submit the required price data, they had to divert staff from their normal duties, thereby incurring additional staff and contractor costs. Officials told us their data collection difficulties were particularly pronounced regarding information on manufacturers’ rebates, which affect a drug’s net acquisition cost. In addition, we incurred considerable costs as data collectors, signaling the difficulties that CMS would face should it implement similar surveys of hospitals in the future. Despite its practicality as a data source, ASP remains a “black box.” That is, CMS lacks detailed information about the components of manufacturers’ reported price data—namely, methods manufacturers use to allocate rebates to individual drugs and the sales prices paid by type of purchaser. Furthermore, for all but SCODs provided in the HOPD setting, no empirical support exists for setting rates at 6 percent above ASP, and questions remain about setting SCOD payment rates at ASP+6 percent. These information gaps make it difficult to ensure that manufacturers’ reported price data are accurate and that Medicare’s ASP rates developed from this information are appropriate. Significantly, CMS has little information about the method a manufacturer uses to allocate rebates when calculating an ASP for a drug sold with other products. 
Unlike discounts, which are deducted at the point of purchase, rebates are price concessions given by manufacturers subsequent to the purchaser’s receipt of the product. In our survey of hospitals’ purchase prices for SCODs, we found that hospitals received rebate payments following the receipt of some of their drug purchases but often could not determine rebate amounts. Calculating a rebate amount is complicated by the fact that, in some cases, rebates are based on a purchaser’s volume of a set, or bundle, of products defined by the manufacturer. This bundle may include more than one drug or a mixture of drugs and other products, such as bandages and surgical gloves. Given the variation in manufacturers’ purchasing and rebate arrangements, the allocation of rebates for a product is not likely to be the same across all manufacturers. CMS does not specifically instruct manufacturers to provide information on their rebate allocation methods when they report ASPs. As a result, CMS lacks the detail it needs to validate the reasonableness of the data underlying the reported prices. In addition, CMS does not require manufacturers to report details on price data by purchaser type. Because a manufacturer’s ASP is a composite figure representing prices paid by various purchasers, including both health care providers and wholesalers, CMS cannot distinguish prices paid by purchaser type—for example, hospitals compared with other institutional providers, physicians, and wholesalers. In particular, to the extent that some of the sales are to wholesalers that may subsequently mark up the manufacturer’s price in their sales to health care providers, the ASP’s representation of providers’ acquisition costs is weakened. Thus, distinguishing prices by purchaser type is important, as a central tenet of Medicare payment policy is to pay enough to ensure beneficiary access to services while paying no more than the cost an efficient provider incurs in providing a service. 
In our 2005 report on Medicare’s proposed 2006 SCOD payment rates, we recommended that CMS collect information on price data by purchaser type to validate the reasonableness of ASP as a measure of hospital acquisition costs. Better information on manufacturers’ reported prices—for example, the extent to which a provider type’s acquisition costs vary from the CMS- calculated ASP—would help CMS set rates as accurately as possible. For most types of providers of Medicare Part B drugs—physicians, dialysis facilities, and DME suppliers—no empirical support exists for setting rates at 6 percent above ASP. In the case of HOPDs, a rationale exists based on an independent data source—our survey of hospital prices—but the process of developing rates for SCODs was not simple. In commenting on CMS’s proposed 2006 rates to pay for SCODs, we raised questions about CMS’s rationale for proposing rates that were set at 6 percent above ASP. CMS stated in its notice of proposed rulemaking that purchase prices reported in our survey for the top 53 hospital outpatient drugs, ranked by expenditures, equaled ASP+3 percent on average, and these purchase prices did not account for rebates that would have lowered the product’s actual cost to the hospital. We noted that, logically, for payment rates to equal acquisition costs, CMS would need to set rates lower than ASP+3 percent, taking our survey data into account. In effect, ASP+3 percent was the upper bound of acquisition costs. 
Consistent with our reasoning, CMS stated in its notice of proposed rulemaking that “Inclusion of … rebates and price concessions in the GAO data would decrease the GAO prices relative to the ASP prices, suggesting that ASP+6 percent may be an overestimate of hospitals’ average acquisition costs.” In its final rule establishing SCOD payment rates, CMS determined that our survey’s purchase prices equaled ASP+4 percent, on average, based on an analysis of data more recent than CMS had first used to determine the value of our purchase prices. CMS set the rate in the final rule at ASP+6 percent, stating that this rate covered both acquisition costs and handling costs. We have not evaluated the reasonableness of the payment rate established in the final rule. Lacking detail on the components of ASP, CMS is not well-positioned to confirm ASP’s accuracy. In addition, CMS has no procedures to validate the data it obtains from manufacturers by an independent source. In our 2006 report on lessons learned from our hospital survey, we noted several options available to CMS to confirm the appropriateness of its rates as approximating health care providers’ drug acquisition costs. Specifically, we noted that CMS could, on an occasional basis, conduct a survey of providers, similar to ours but streamlined in design; audit manufacturers’ price submissions; or examine proprietary data the agency considers reliable for validation purposes. HHS agreed to consider our recommendation, stating that it would continue to analyze the best approach for setting payment rates for drugs. Because ASP is based on actual transaction data, is relatively timely, and is administratively efficient for CMS and health care providers, we affirm the practicality of the ASP-based method for setting Part B drug payment rates. However, we remain concerned that CMS does not have sufficient information about ASP to ensure the accuracy and appropriateness of the rates. 
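The rate-setting arithmetic at issue can be made concrete with a small sketch. Only the percentages come from the report (the final rule's 6 percent add-on, and the survey purchase prices that CMS determined equaled roughly ASP+4 percent before rebates); the $100 ASP value is a hypothetical illustration.

```python
# Sketch of the ASP markup arithmetic discussed in the text.
# The ASP value is hypothetical; the percentages are from the report.

asp = 100.00                   # hypothetical CMS-calculated national ASP
payment_rate = asp * 1.06      # final rule: ASP + 6 percent
survey_price = asp * 1.04      # survey purchase prices, before rebates

# The gap between the payment rate and the pre-rebate survey price:
margin = payment_rate - survey_price
print(round(payment_rate, 2), round(survey_price, 2), round(margin, 2))
```

Because the survey prices exclude rebates, hospitals' actual acquisition costs would be lower than the survey figure, so the true margin between the ASP+6 percent rate and acquisition cost would be at least as large as this sketch shows.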
To verify the accuracy of price data that manufacturers submit to the agency, details are needed—such as how manufacturers account for rebates and other price concessions and how they identify the purchase prices of products acquired through wholesalers. Equally important is the ability to evaluate the appropriateness of Medicare’s ASP-based rate for all providers of Part B drugs over time. As we recommended in our April 2006 report, CMS should, on an occasional basis, validate ASP against an independent source of price data to ensure the appropriateness of ASP- based rates. Madam Chairman, this concludes my prepared statement. I will be happy to answer any questions you or the other Subcommittee Members may have. For further information regarding this testimony, please contact A. Bruce Steinwald at (202) 512-7101 or [email protected]. Phyllis Thorburn, Assistant Director; Hannah Fein; and Jenny Grover contributed to this statement. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2005, the Centers for Medicare & Medicaid Services (CMS), as required by law, began paying for physician-administered Part B drugs using information on the drugs' average sales price (ASP). Subsequently, CMS selected ASP as the basis to pay for a subset of Part B drugs provided at hospital outpatient departments. To calculate ASP, CMS uses price data submitted quarterly by manufacturers. GAO was asked to discuss its work on Medicare payment rates for Part B drugs. This testimony is based on several GAO products: Medicare Hospital Pharmaceuticals: Survey Shows Price Variation and Highlights Data Collection Lessons and Outpatient Rate-Setting Challenges for CMS, GAO-06-372 , Apr. 28, 2006; Medicare: Comments on CMS Proposed 2006 Rates for Specified Covered Outpatient Drugs and Radiopharmaceuticals Used in Hospitals, GAO-06-17R , Oct. 31, 2005; and Medicare: Payments for Covered Outpatient Drugs Exceed Providers' Costs, GAO-01-1118 , Sept. 21, 2001. Specifically, GAO's statement discusses (1) ASP as a practical and timely data source for use in setting Medicare Part B drug payment rates and (2) components of ASP that are currently unknown and implications for Medicare rate-setting. In summary, using an ASP-based method to set payment rates for Part B drugs is a practical approach compared with methods based on alternative data sources, for several reasons. First, ASP is based on actual transactions and is a better proxy for providers' acquisition costs than average wholesale price or providers' charges included on claims for payment, neither of which is based on transaction data. Second, ASPs, which manufacturers update quarterly, offer information that is relatively timely for rate-setting purposes. In comparison, rates for other Medicare payment systems are based on data that may be at least 2 years old. 
Finally, using manufacturers as the data source for prices is preferable to collecting such data from health care providers, as the manufacturers have data systems in place to track prices, whereas health care providers generally do not have systems designed for that purpose. CMS lacks certain information about the composition of ASP that prompted GAO, in commenting on CMS's 2006 proposed payment rates for a subset of Part B drugs, to call ASP "a black box." Significantly, CMS lacks sufficient information on how manufacturers allocate rebates to individual drugs sold in combination with other drugs or other products; this is important, as CMS does not have the detail it needs to validate the reasonableness of the data underlying the reported prices. In addition, CMS does not instruct manufacturers to provide a breakdown of price and volume data by purchaser type--that is, by physicians, hospitals, other health providers, and wholesalers, which purchase drugs for resale to health care providers. As a result, CMS cannot determine how well average price data represent acquisition costs for different purchaser types. In particular, to the extent that some of the sales are to wholesalers that subsequently mark up the manufacturer's price in their sales to providers, the ASP's representation of providers' acquisition costs is weakened. Additionally, a sufficient empirical foundation does not exist for setting the payment rate for Medicare Part B drugs at 6 percent above ASP, further complicating efforts to determine the appropriateness of the rate. Given these information gaps, CMS is not well-positioned to validate the accuracy or appropriateness of its ASP-based payment rates.
Navy auxiliary ships provide underway replenishment to Navy combatant ships worldwide, thereby allowing combatant ships to remain at sea for extended periods. These ships deliver cargo and provide services such as towing and salvage operations. Navy auxiliary ships are crewed either by active duty military personnel or civil service mariners. Those ships crewed by civil service mariners also have a small detachment of active duty Navy personnel aboard to provide communications, ordnance handling, supply support, and technical support. As of May 1997, the Navy’s auxiliary fleet consisted of 42 ships—15 oilers, 6 stores ships, 7 ammunition ships, 7 tugs, and 7 multiproduct ships. One additional multiproduct ship of a new class is currently under construction. The Navy has delegated operational control of 27 of these ships to MSC, the military’s single manager for sealift, to better support Navy fleet operations. MSC crews these 27 ships with civil service mariners. The Navy’s remaining 15 auxiliary ships are crewed by military personnel. Under current policy, the Navy will not permit the use of commercial crews on any auxiliary ships because it considers their mission purely military in nature. As of May 1997, the Navy had MSC operating 27 of its 42 auxiliary ships with civil service crews. The type and number of auxiliary ships operated by MSC with civil service crewing and the crew size for each ship are shown in table 1. This table also shows the size of the military detachment on these ships. Under current policy, the Navy will not permit any auxiliary ships to be crewed with commercial mariners. In an April 1995 letter to the American Maritime Officers union, the Under Secretary of the Navy stated that the mission of its auxiliary ships was purely military in nature and not considered commercial-type operations. 
Therefore, according to the Under Secretary, auxiliary ships would be crewed only with government employees, even if the use of commercial employees were cost-effective. In an April 1996 letter to the same union, the Assistant Secretary of the Navy for Research, Development, and Acquisition reiterated this policy, stating that the Navy’s auxiliary ships would be crewed by civil service mariners due to the special nature of the auxiliary ships’ operation. As of May 1997, Navy officials confirmed that this policy was still in effect. As of May 1997, the Navy was continuing to crew 15 auxiliary ships with military personnel. The types of ships are shown in table 2. The Navy plans to (1) turn over the operation of the three ammunition ships to MSC for crewing with civil service mariners and (2) decommission the five oilers in fiscal year 1999, replacing them with four oilers built to commercial standards that are currently in reduced operating status or deactivated. These latter ships would also be crewed with civil service mariners. The Navy has not decided whether to turn over the operation of the seven multiproduct auxiliary ships to MSC. Some Navy officials believe that multiproduct ships should continue to be crewed with military crews because they are the auxiliary ships that can maintain battle group speeds and operate within the battle group formations. However, MSC officials stated that they have studied what it would take to operate the multiproduct ships and are willing to accept the transfer because they believe MSC civil service crews can operate these ships. Our work and prior studies have shown that the Navy could achieve savings by using civil service crews on auxiliary ships. According to November 1996 data, the most current available, the Navy’s annual cost to operate a multiproduct ship (AOE-1 class), built in the 1960s, is $54 million compared to MSC’s estimated cost of $37 million to operate the ship using a civil service crew. 
The savings of nearly $18 million are primarily attributable to differences in crew sizes. MSC operates its ships with a smaller crew because it hires skilled mariners, whereas Navy crews are often recruits who must be trained to replace more skilled sailors. The Navy operates this ship with 600 crewmembers, while MSC would use about 247 crewmembers. Similar differences apply to the multiproduct ship (AOE-6 class), built in the 1990s, which is a smaller, modified version of the earlier ship. The Navy operates this ship for $48 million annually, with 580 crewmembers. MSC’s estimated cost to operate this ship is $31 million annually with 229 crewmembers. The savings of over $17 million are also primarily attributable to differences in crew sizes. The differences in annual operating costs between the Navy and MSC to operate the two classes of multiproduct ships are shown in table 3. Using the Navy’s data on the cost to operate the two classes of multiproduct ships, we estimated that if the Navy turned over the operation of the seven multiproduct ships to MSC for civil service crewing, it could save $122.5 million annually. Table 4 shows these potential savings. A fourth AOE-6 class ship is under construction at the National Steel and Shipbuilding Company in San Diego, California, and is scheduled for delivery in early 1998. If the Navy chooses to include this ship with the rest of the multiproduct ships turned over to MSC, an additional $17.1 million annually would be saved, for a total annual savings of $139.6 million. According to MSC’s unofficial estimates, these savings would be offset by a one-time cost of $45 million for an AOE-1 and $30 million for an AOE-6 to convert these ships to Coast Guard standards, which differ from Navy standards: $180 million for all four AOE-1 ships and $120 million for all four AOE-6 ships, or $300 million for all eight ships. 
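The fleet totals above can be checked from per-ship figures. The per-ship savings used below are approximations derived from the report's own totals (seven active ships saving $122.5 million annually, with an eighth AOE-6 adding $17.1 million); the payback calculation at the end is an illustrative extension, not a figure from the report.

```python
# Back-of-the-envelope check of the savings and conversion arithmetic.
# Per-ship annual savings ($ millions) approximated from the report's totals.

AOE1_SAVINGS = 17.8   # approx. per AOE-1 class ship (derived, not stated)
AOE6_SAVINGS = 17.1   # per AOE-6 class ship (from the eighth-ship figure)

active_fleet = 4 * AOE1_SAVINGS + 3 * AOE6_SAVINGS  # seven active ships
with_eighth = active_fleet + AOE6_SAVINGS           # plus ship delivered in 1998

conversion = 4 * 45 + 4 * 30   # one-time Coast Guard conversion cost, $ millions
payback_years = conversion / with_eighth

print(round(active_fleet, 1), round(with_eighth, 1), conversion,
      round(payback_years, 1))
```

At roughly $139.6 million in annual savings, the $300 million one-time conversion cost would be recovered in a little over 2 years, consistent with the report's view that such an investment would seem advantageous.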
However, such an investment would seem advantageous considering the annual estimated savings of $139.6 million. In a 1990 study of civilian manning of auxiliary ships, the Center for Naval Analyses found that the Navy would save $265 million annually if the Navy turned over 42 support ships and tenders to MSC. The study attributed the annual savings to much smaller crew sizes on MSC ships. It reported, for example, that a civil service crew on a Navy oiler would be half the size of the crew the Navy used on those ships. In 1993, the Institute for Defense Analyses found that the Navy could save considerable cost and personnel positions by operating more of its auxiliary ships with civil service mariners. The Institute reported that civilian operation reduces costs by cutting the total crew size of a similar ship by about half. It concluded that the Navy could save $4 million to $15 million a year per ship, depending on the type, by reducing the number of sea-going personnel positions on auxiliary ships and crewing them with civilians. A 1994 Naval Audit Service report also found that significant cost benefits could be achieved if Navy auxiliary ships were crewed by civil service mariners. The report, which covered 45 ships, stated that by turning over the ships to MSC, crewing could be reduced 52 percent, from 19,440 crewmembers to 9,264 crewmembers. Depending on the cost method applied, the Navy could save $3.7 billion or $4.3 billion over a 5-year period. The Naval Audit Service recommended that the Navy turn over the 45 auxiliary ships to MSC for civil service crewing. Another advantage of turning over the Navy multiproduct ships to MSC is, as Navy and MSC officials pointed out, that MSC ships do not have the constraints on operating days per ship and on days at sea per crewmember that Navy ships do. 
It is Navy policy to assign a sailor to a ship for 3 years and not to have the sailor spend more than 6 consecutive months each year at sea, whereas MSC policy is to have MSC crews spend about 9 months out of every 12 months at sea. According to these officials, an MSC ship can operate more days per year than a comparable Navy ship—resulting in fewer MSC ships being needed to conduct underway replenishment. Further, these officials agree that additional savings could be realized because some ships could be retired, decommissioned, or deactivated. The Navy is currently conducting a study to determine whether it is more cost-effective to continue the operation of the multiproduct auxiliary ships under Navy control or turn over the operation of these ships to MSC. The objectives of the study are to (1) determine the Navy minimum crewing level, (2) compare the proposed reduced Navy crewing level with comparable MSC crewing, and (3) recommend a course of action based on a comparison of MSC and Navy crewing levels. Navy officials estimate that this study should be completed by the end of 1997. Although the Navy’s current policy is not to use commercial crews, we compared the cost of crewing auxiliary ships with commercial and civil service crews. Based on our analysis, we found that crewing with commercial mariners costs more. In addition, we calculated an increase in the merchant mariner pool that could be available to crew ready reserve fleet ships in time of conflict. Historically, the United States has relied on the private sector for combat support elements in time of war or national emergency. In 1972, a joint U.S. Navy-Maritime Administration project used the SS Erna Elizabeth to test the feasibility of using commercial mariners to conduct underway replenishment. The SS Erna Elizabeth steamed about 13,000 miles and refueled 40 ships at sea. In another 1972 test, the SS Lash Italia delivered food and other consumable items to the Sixth Fleet in the Mediterranean. 
During Operations Desert Shield and Storm, a contract-operated tanker with a commercial crew, the MV Lawrence H. Giannella, provided fuel to Navy combatant ships while at sea. To compare the annual labor costs of civil service and commercial crews, we obtained crewing levels and wage rates from two commercial mariner unions and MSC for the operation of a Kaiser class oiler, the most commonly used ship in the MSC fleet. We focused on labor costs and excluded other costs from the comparison because we assumed other operation costs, such as fuel, maintenance, and the small detachment of active duty Navy personnel on board ship, would continue to be incurred regardless of who operated the ship. We estimated that the annual labor cost to operate a Kaiser class oiler with a civil service crew would be $6.562 million and the cost with a commercial crew would be $6.883 million, a difference of about $321,000, or about 5 percent. The estimate with a civil service crew was based on a crew size of 82 members, the authorized crewing level of a Kaiser class oiler. The commercial crew estimate was based on a crew size of 79 members, a size with which the two commercial mariner unions believed the mission could be accomplished. The major cost elements were wages and overtime, pension, medical, vacation, and other fringe benefits and personnel support costs. The differences between the annual labor costs of civil service and commercial crews to operate a Kaiser class oiler are shown in table 5. Our cost comparison showed that the annual base wages and overtime for civil service crews were $586,000, or 14 percent, more than the annual wages and overtime for commercial crews. In addition, the civil service pension costs were $573,000, or 214 percent, higher than commercial pension costs. 
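The headline labor-cost comparison above reduces to simple arithmetic, reproduced here with the report's figures:

```python
# Kaiser class oiler annual labor costs, in dollars (figures from the report).

civil_service = 6_562_000   # 82-member civil service crew
commercial = 6_883_000      # 79-member commercial crew

difference = commercial - civil_service          # 321,000
pct_of_civil = 100 * difference / civil_service  # about 4.9 percent

print(difference, round(pct_of_civil, 1))
```

The percentage rounds to the "about 5 percent" cited in the text; note that the commercial operation costs more despite the slightly smaller crew, because of its higher benefit and support costs.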
The higher civil service wage and pension costs were offset by higher medical, vacation, and other fringe benefits and personnel support costs for commercial mariners, which resulted in a higher overall cost for commercial operations. Commercial mariner medical costs were $418,000 higher than civil service costs primarily because, according to a union official, commercial mariners have 100 percent of their medical insurance paid for (i.e., they make no contribution directly out of their paychecks). In contrast, civil service mariners pay a part of their medical insurance costs. Commercial vacation costs were $272,000 higher than civil service costs because, according to a union official, a commercial mariner earns 1 day off for every 3 days at sea, which translates to 1 month off after 3 months at sea. By comparison, a civil service mariner earns a maximum of 26 days a year off, which is supplemented by an additional 2 days of shore leave for every 30 consecutive calendar days at sea. The commercial costs for fringe benefits and personnel support costs were $790,000 higher than civil service costs. The two major components in the commercial costs were payroll taxes and training. The difference is partially attributable to the fact that the government equivalent to payroll taxes is included in the civil service pension costs. In addition, based on the MSC cost formula, MSC would allocate less money for training. We calculated that the pool of U.S. civil service mariners would increase by about 1,700 merchant mariners if the operation of the multiproduct ships were turned over to MSC (see table 6). MSC established the size of its civil service mariner workforce at a ratio of 1.25 mariners for each shipboard position to be filled. This crew ratio allows operations to continue while some of the mariners take vacation, undergo training, or are out sick. 
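The crew-ratio arithmetic behind this pool estimate, and the commercial-pool estimate discussed next, can be checked as follows. The shipboard-position count is inferred from the report's figures (about 1,700 civil service mariners at a 1.25 ratio) rather than stated directly.

```python
# Mariner-pool arithmetic: pool size = shipboard positions x crew ratio.
# Shipboard positions are inferred from the civil service figures.

civil_pool = 1_700
civil_ratio = 1.25
shipboard_positions = civil_pool / civil_ratio   # about 1,360 positions

# Commercial operators staff at 2.0 to 2.5 mariners per shipboard position:
commercial_low = shipboard_positions * 2.0       # about 2,700
commercial_high = shipboard_positions * 2.5      # 3,400

print(round(shipboard_positions), round(commercial_low), round(commercial_high))
```

The result matches the report's range of roughly 2,700 to 3,400 additional commercial mariners, and the larger commercial pool reflects the higher commercial crew ratio rather than any difference in shipboard positions.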
We calculated that the commercial mariner pool to support shipboard positions would increase by about 2,700 to 3,400 mariners if commercial firms operated the multiproduct ships (see table 7). Commercial crewing pools are established at a ratio of 2.0 to 2.5 mariners for each shipboard position. This crew ratio allows operations to continue while some of the mariners take vacation, undergo training, or are out sick. The off-duty mariners could be used for the ready reserve fleet in times of conflict. Given the potential savings that could result if the Navy turned over the operation of the seven active multiproduct auxiliary ships and the one ship due for delivery in early 1998 to MSC for crewing with civil service mariners, we recommend that the Secretary of Defense direct the Secretary of the Navy to devise a detailed plan for turning over, in a timely manner, the operation of the multiproduct auxiliary ships to MSC. DOD partially concurred with our recommendation to the Secretary of Defense that the Secretary of the Navy devise a plan for turning over the operation of the remaining auxiliary ships to MSC. However, DOD noted that certain operational changes, ship retirements, and other actions affecting the fleet were under consideration and that more study should be done on this matter. Accordingly, DOD suggested that we modify our recommendation to the Secretary of Defense to direct the Navy to continue a cost-benefit analysis based on the Fleet Commanders’ concept of operations, crewing alternatives, and conversion costs, including indirect and additional costs. DOD stated that based on this analysis, the Navy would then either retain or turn over the operation of the multiproduct ships to MSC. We have retained our original recommendation in view of the substantial costs savings that are possible and the fact that our analysis is supported by three other major studies of this issue since 1990. 
All of these studies have consistently concluded that substantial savings can be achieved by turning over the operation of these ships to MSC and crewing them with civil service mariners. By developing a plan for a timely transfer of these assets to MSC as our recommendation suggests, the Navy can achieve substantial savings that might then be applied to other defense priorities. DOD’s comments are presented in their entirety in appendix I. DOD also provided technical comments, which we have incorporated where appropriate. To provide information on the Navy’s current and planned efforts to turn over the operation of military crewed auxiliary ships to MSC for civil service and/or commercial crewing, we analyzed data from and interviewed officials in the Office of the Chief of Naval Operations, MSC, the Center for Naval Analyses, commercial ship operating companies, and civilian maritime unions. To identify the potential cost savings that would be realized by turning over the operation of the Navy’s remaining military crewed auxiliary ships to MSC, we compared actual annual operating costs provided by the Navy to estimated annual operating costs provided by MSC for both classes of multiproduct ships. We then projected the savings per ship over the number of ships in each class to arrive at a total annual savings. The offsetting costs to convert the ships to Coast Guard standards were provided by MSC. We did not validate the accuracy of the cost data provided by the Navy or the cost estimates provided by MSC. However, we discussed our analysis of these costs and potential savings with the Office of the Chief of Naval Operations and MSC officials who generally agreed with the cost data used. 
To analyze the costs to operate MSC’s Kaiser class oiler with civil service crews and with commercial crews, we reviewed data and interviewed officials from the Maritime Administration, MSC, the American Maritime Officers union, the National Maritime Union, the Seafarers International Union, the National Marine Engineers’ Beneficial Association District #1, and the International Organization of Masters, Mates, and Pilots. We obtained crew sizes based on the Navy’s mission and manning requirements for Kaiser class oilers. We determined the annual labor cost of civil service crews by obtaining actual crewing levels and current wage rates, including overtime, from MSC. We obtained the overtime rate for the crew (the Master and the Chief Engineer do not receive overtime); vacation and sick leave; compensatory time and training costs; and pension, medical, and miscellaneous costs. To determine the annual labor costs for commercial mariners, we obtained proposed crewing levels and wage rates from two unions that represented all positions on the ship. While discussing issues with us, officials from the other commercial mariner unions declined to provide wage and crewing data. The Service Contract Act of 1965 (SCA), 41 U.S.C. §§ 351 et seq., generally provides for payment of prevailing wages, as determined by the Department of Labor, to service employees under government contracts performed in U.S. territorial waters. Union officials stated that SCA was not applicable to commercial crews when operating outside U.S. territorial waters. Between May 1996 and April 1997, the Kaiser class oilers operated in U.S. territorial waters 37 percent of the time and, thus, would come under the provisions of SCA during this period. Because the Kaiser class oilers have been operated solely by civil service crews, the Department of Labor has not made a wage determination under SCA. 
To estimate the impact of operating with commercial crews, we used wage and overtime rates provided by two commercial unions for civilian crews, which is the basis for the $4,116,000 figure. If, on the other hand, commercial crews were paid the MSC rate while operating in U.S. territorial waters, total labor costs would be 5 percent higher than our estimate, assuming they operated as MSC does—about 37 percent of the time in U.S. territorial waters. However, union officials told us that they would probably operate differently, spending less time in U.S. territorial waters. We did not validate the cost data obtained from MSC or the unions. We conducted our work from April 1996 to July 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Secretaries of Defense and the Navy; the Chairman of the Senate Committee on Commerce, Science, and Transportation; and other interested congressional committees. Copies will also be made available to others upon request. Please contact me at (202) 512-5140 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix II. Roderick Moore
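The SCA wage adjustment described above is a straightforward percentage calculation. A minimal sketch follows, using only figures stated in the report: the $4,116,000 annual commercial-crew labor estimate based on union wage and overtime rates, and the 5 percent increase that would apply if MSC-level wages were paid during the roughly 37 percent of time spent in U.S. territorial waters. This is an illustration of the report's arithmetic, not an official cost model.

```python
# Illustrative sketch of the SCA wage adjustment described in the report.
# Figures come from the report itself; the calculation is a simple
# percentage uplift on the commercial-crew labor estimate.
commercial_labor = 4_116_000   # annual labor cost, union wage/overtime rates
sca_uplift = 0.05              # report's estimated effect of SCA wages

adjusted_labor = commercial_labor * (1 + sca_uplift)
print(f"Adjusted annual labor cost: ${adjusted_labor:,.0f}")
# → Adjusted annual labor cost: $4,321,800
```

As the report notes, the actual effect could be smaller if commercial crews spent less time in U.S. territorial waters than MSC's observed 37 percent.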
Pursuant to a congressional request, GAO reviewed the Navy's use of alternative crewing arrangements for Navy auxiliary ships, focusing on: (1) the Navy's plans for turning over the operation of military crewed auxiliary ships to its Military Sealift Command (MSC) for civil service or commercial crewing; (2) whether cost savings would be realized if the Navy turned over the operation of the remaining military crewed auxiliary ships to MSC; (3) the relative costs of operating a Navy auxiliary ship with a civil service crew and the costs of operating the same ship with a commercial crew; and (4) the increase in the merchant mariner pool if the operation of the multiproduct ships were turned over to MSC. GAO noted that: (1) the Navy plans to turn over the operation of its remaining three ammunition ships to MSC for crewing with civil service mariners; (2) as of May 1997, the Navy had not decided whether to turn over the operation of the remaining seven auxiliary ships as well as the single ship under construction to MSC; (3) all eight of these ships are multiproduct ships; (4) based on Navy cost data and MSC cost estimates, the Navy could save about $139.6 million annually by turning over the operation of these eight multiproduct ships to MSC for crewing with civil service mariners; (5) these savings are due primarily to a much smaller crew size than has been traditional on military crewed auxiliary ships; (6) these savings would be offset by a one-time conversion cost of $30 million to $45 million per ship, or about $300 million for all eight ships, to meet Coast Guard standards; (7) MSC might also need fewer ships to provide underway replenishment since, unlike the Navy, it does not have the personnel and operating limitations on the number of operating days per ship and on days at sea per crewmember; (8) three other studies conducted since 1990 by the Center for Naval Analyses, the Institute for Defense Analyses, and the Naval Audit Service have also identified the
potential for large cost savings if the Navy were to transfer additional ships to MSC; (9) these studies' projected savings were also primarily due to the smaller crew sizes on MSC ships; (10) the Navy does not intend to divert from its current policy of not using commercial mariners to crew auxiliary ships; (11) its position is that these ships must be crewed by military or civil service personnel due to their military mission; (12) however, if it were to change this policy, GAO's analysis shows that it would cost the Navy about $321,000, or about 5 percent more a year, to operate a commonly used MSC oiler ship with commercial crews than with civil service crews; (13) the difference in costs is primarily attributable to higher fringe benefit costs for commercial crews; (14) with respect to the size of the mariner pool under different crewing alternatives, GAO calculated that the pool of U.S. civil service mariners would increase by about 1,700 merchant mariners if the 8 remaining auxiliary ships were turned over to MSC and were crewed by civil service mariners; and (15) the pool of commercial merchant mariners would increase by about 2,700 to 3,400 mariners if these same ships were crewed by commercial mariners.
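The savings projection summarized above reduces to simple arithmetic: per-ship savings times the number of ships, offset by a one-time conversion outlay. A back-of-the-envelope sketch using the report's stated figures ($139.6 million in annual savings across the 8 multiproduct ships; $30 million to $45 million conversion cost per ship, about $300 million total) is shown below. The midpoint conversion estimate and the break-even calculation are illustrative additions, not figures from the report.

```python
# Back-of-the-envelope version of the savings projection summarized above.
# Inputs are the report's figures; the midpoint and break-even lines are
# illustrative only.
ships = 8
annual_savings = 139_600_000                 # total annual savings, all ships
conversion_range = (30_000_000, 45_000_000)  # one-time cost per ship

savings_per_ship = annual_savings / ships
total_conversion = sum(conversion_range) / 2 * ships  # midpoint estimate
breakeven_years = total_conversion / annual_savings

print(f"Savings per ship:  ${savings_per_ship:,.0f} per year")
print(f"Conversion outlay: ${total_conversion:,.0f}")
print(f"Break-even:        {breakeven_years:.1f} years")
```

Under these assumptions the one-time conversion cost is recovered in roughly two years of operating savings, which is consistent with the report's conclusion that substantial net savings are achievable.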
The Department of the Interior’s Indian education programs derive from the federal government’s trust responsibility to Indian tribes, a responsibility established in federal statutes, treaties, court decisions, and executive actions. It is the policy of the United States to fulfill this trust relationship with and responsibility to the Indian people for educating Indian children by working with tribes to ensure that these programs are of the highest quality, among other things. In accordance with this trust responsibility, Indian Affairs is responsible for providing a safe and healthy environment for students to learn. Indian Affairs oversees multiple bureaus and offices that play a key role in managing and overseeing school facilities for Indian students (see fig. 1). These bureaus and offices have several key responsibilities, including the following: The Office of the Deputy Assistant Secretary of Management oversees a number of administrative and operational functions to help Interior meet its responsibilities for designing, planning, building, and operating Indian school facilities. Specifically, the Deputy Assistant Secretary oversees the Office of Facilities, Property and Safety Management, which includes the Division of Facilities Management and Construction. This office is responsible for developing policies and providing technical assistance and funding to Bureau of Indian Affairs (BIA) regions and BIE schools to address their facility needs. Professional staff in this division—including engineers, architects, facility managers, and support personnel—are tasked with providing expertise in all facets of the facility management process. The Bureau of Indian Affairs administers a broad array of social services and other supports to tribes. Regarding school facility management, BIA oversees the day-to-day implementation and administration of school facility projects through its regional field offices.
Currently there are 12 regional offices that report to the BIA Deputy Bureau Director of Field Operations. Nine of these regions have facility management responsibilities, which include performing school inspections to ensure compliance with regulations and providing technical assistance to BIE-operated and tribally-operated schools on facility issues. The Bureau of Indian Education oversees various educational functions, including funding and operating BIE schools. Three Associate Deputy Directors report to the Deputy Director of School Operations and are responsible for overseeing multiple BIE education line offices that work directly with schools to provide technical assistance, including on facility matters. Some line offices have their own facility managers, and many schools—both BIE-operated and tribally operated—also have their own facility managers or other staff who perform routine maintenance and repairs. Indian Affairs collects and tracks school condition data related to facility deficiencies, capital improvements, or construction for specific inventory items, such as classrooms, sidewalks, or utility systems. These data also include information on school facility repair needs—commonly referred to as the facilities deferred maintenance backlog—which are entered into an automated information system known as the Facilities Management Information System (FMIS). Responsibility for data entry into FMIS is shared by Indian Affairs staff, school personnel, and an Indian Affairs contractor who conducts inspections of school facilities. Indian Affairs uses a multilevel review process to examine the accuracy and completeness of backlog information in FMIS.
In this process, each entry that school facility managers propose to add to the backlog list is reviewed and approved by several levels within Indian Affairs, including BIA agencies and regional offices and the Indian Affairs facility condition assessment contractor, with final approval by the Division of Facilities Management and Construction. Indian Affairs uses approved backlog information to make funding decisions regarding school facilities. Backlog repair projects are prioritized based on health and safety risks, among other factors. Indian Affairs also has various funding categories, including emergencies and minor improvements. Once funding for school construction and repair is approved, Indian Affairs offers three main project management options. Tribes and/or schools may choose to (1) have Indian Affairs manage the project, (2) manage the project based on a contract received from Indian Affairs, or (3) in the case of tribally-operated schools, manage the project based on a grant received from Indian Affairs. Over the past four decades, we have conducted a body of work on challenges related to Indian education, including longstanding issues regarding Indian Affairs’ management of school facilities. Our work on BIE school facilities conducted in 1997 and 2003 highlighted the poor conditions of Indian schools and the need for more reliable national data to assess the condition of school facilities. Interior’s Inspector General and others have also reported similar issues, including health and safety hazards at BIE schools. Our past work and other research pointed to a variety of persistent challenges Indian Affairs has encountered in maintaining complete and accurate data on the condition of BIE school facilities. For example, in 2003 we reported on inaccurate and incomplete data entry by school officials, ineffective agency guidance, limited training in using FMIS, and agency staff not being held accountable for ensuring data integrity.
Similarly, in 2011, the No Child Left Behind School Facilities and Construction Negotiated Rulemaking Committee, which the Secretary of the Interior was required to establish under the No Child Left Behind Act of 2001 (NCLBA), also identified problems with the quality of FMIS data on BIE school facilities. The Committee attributed the problems to a lack of school-level expertise in using FMIS, inadequate training, unreliable access to FMIS, and infrequent data validation of deficiencies by Indian Affairs’ contractor, among other issues. Further, the Committee reported that no Indian Affairs staff were tasked with monitoring schools’ use of FMIS to ensure that school officials were entering backlog items and, if not, to provide them with technical assistance. As a result, the Committee reported that problems using FMIS at many BIE schools were unresolved, schools did not know where to turn for assistance, and data entry across schools was inconsistent. Our ongoing work suggests that issues with the quality of data on school conditions—such as inconsistent data entry by schools and insufficient quality controls—continue to make it difficult to determine the actual number of schools in poor condition, which impedes Indian Affairs’ ability to effectively track and address school facility problems. For example, while Indian Affairs has a multilevel review process for examining the accuracy and completeness of backlog entries, we found that it does not routinely monitor whether schools are entering complete data on their facilities. For instance, a 2010 Indian Affairs internal control review of FMIS identified inadequate controls for determining if and when schools address all identified safety deficiencies, because no Indian Affairs office takes responsibility for ensuring that they do.
According to the 2010 review, without this information Indian Affairs cannot identify and prioritize funding for these critical deficiencies. Indian Affairs officials told us that this issue continues to be a significant challenge to FMIS data quality. We also found that some schools we visited encountered obstacles to data entry. For example, officials at one BIE-operated school noted that they did not routinely enter information into FMIS because staff lacked expertise and Indian Affairs did not provide them adequate training. As a result, they said existing information on their facilities in FMIS significantly understates their actual repair needs. According to a BIA regional officer, frequent turnover among facility staff, especially at tribally-operated schools, can exacerbate this gap in FMIS expertise. Schools can also face difficulties gaining or maintaining access to FMIS. For example, officials with one tribally operated school told us they encountered persistent problems with connecting and maintaining access to FMIS, sometimes limiting their use of the system to about 5 minutes at a time. Interior’s Inspector General has recently found similar challenges with data entry at several other schools, and it continues to monitor this issue. According to Indian Affairs officials, the last centralized training on using FMIS was held in 2012. While Indian Affairs uses a contractor to address some data quality issues by validating deficiencies on schools’ deferred maintenance backlogs and facility inventories, our ongoing work has found that the scope and frequency of its assessments are limited. According to Indian Affairs officials, the contractor is supposed to assess the conditions of schools by performing a visual inspection of each school once over a 3-year cycle, and inspections are grouped by region.
One BIA regional official told us that in his region one field inspector was sent to conduct an onsite inspection and noted that a single inspector may not be capable of assessing a school’s facilities because they may contain multiple systems—such as heating/cooling and fire alarm and suppression systems—that require specialized expertise to assess. Officials also reported that Indian Affairs policy is for the contractor not to assess schools in a particular 3-year cycle if they are about to be replaced or undergoing major construction. At one school we visited, which had not been assessed in 5 years because of ongoing construction, we found problems with both older and newly constructed buildings, such as leaking roofs. Also, Indian Affairs’ contractor is responsible for reviewing and updating information on school facility inventories during onsite inspections. However, one school facility manager suggested that the contractor’s inspections may be too short for a thorough and accurate inventory of all buildings and systems. In 2012, Indian Affairs began an effort to identify and correct inaccuracies in schools’ backlog and inventory data to respond to the findings and recommendations of the No Child Left Behind Negotiated Rulemaking Committee’s 2011 report. Further, Interior is currently moving all school facility data from FMIS to a new Indian Affairs facility information management system based on Maximo, which the agency requires for all departmental offices. Officials said that through this data cleanup effort, they have identified and eliminated duplicate backlog deficiencies in FMIS, and they noted that Maximo will simplify data entry. However, these officials also noted that FMIS constitutes a one-stop shop for managing school facility data and that Maximo lacks several key functions that exist in FMIS, such as project management and budget execution, among others.
They said they plan to add new applications to Maximo to work around some of these limitations. Additionally, one BIA regional official said that Maximo could be cumbersome to use and will require schools to use multiple new systems. Indian Affairs has provided some training on Maximo for schools, but officials indicated that there are currently few active users, in part because of frequent turnover among school staff, and facility funding requests cannot yet be made in Maximo. Our preliminary results suggest that Indian Affairs’ data cleanup efforts and shift to Maximo will not address key challenges with school facility data, including barriers to data entry at some schools and inadequate data quality controls. As we have previously stated, incorrect and inconsistent data undermine management of the federal government’s real property assets. Federal agencies should improve the quality of their data to document performance and support decision making. Further, the National Forum on Education Statistics has stated that quality data are important for making informed decisions about school facilities. We believe that inaccurate and incomplete data will continue to hinder Indian Affairs’ ability to identify and prioritize schools’ repair and improvement needs and effectively target limited funds. This may also worsen existing conditions at some schools and may lead to greater future costs and degraded environments that negatively affect the education of BIE students. During our ongoing work, we visited schools in three states that reported facing a variety of facility-related challenges, including remoteness of their locations, aging buildings and infrastructure, limited funding, and problems with the quality of new construction, which we believe could affect their ability to provide safe, quality educational environments for students.
Several of the schools we visited during our ongoing review were located in remote, rural areas, and a few encountered obstacles in maintaining their own infrastructure, such as water systems or electrical utilities. For example, the facility manager at one school described an antiquated water system that is costly to maintain and repair, does not generate enough water pressure to fill the school’s water tower, and cannot be used effectively to fight fires. As we have previously reported, BIE schools tend to be located primarily in rural areas and small towns and serve American Indian students living on or near reservations. In particular, we found that because of their isolation, these schools tend to have more extensive infrastructure needs than most public schools—including their own water and sewer systems, electric utilities, and other important services that are generally provided to public schools by municipalities—and maintaining them can be a considerable drain on schools’ resources. Several schools we visited during our ongoing review faced challenges with aging facilities and related systems. For example, at one school built in 1959 we observed extensive cracks in concrete block walls and supports, which a local BIA agency official said resulted from soft marsh soil and a shifting foundation. According to school officials, two of their boilers are old, unreliable, and costly to maintain, and sometimes it is necessary to close the school when they fail to provide enough heat. School officials added that these systems reflect 1950s technology, so the costs to maintain them are high. Staff told us they also have difficulty acquiring parts for these systems and, in some cases, fabricate work-around parts to replace outmoded parts that wear out or break. School and regional BIA officials considered the boilers to be safe, but a BIE school safety specialist reported that the conditions of the school’s boilers were a major health and safety concern.
(See fig. 2.) At another school, we observed a dormitory for elementary school students built in 1941 with cramped conditions, no space for desks, poor ventilation, and inadequate clearance between top bunks and sprinkler pipes in sleeping areas. School officials noted that students had received head injuries from bumping their heads on the pipes and some students had attempted suicide by hanging from them. (See fig. 3.) In some cases, we found that schools with older buildings did not have adequate systems for ensuring student health and safety. For example, facility staff at one tribally operated school showed us an aging telecommunications relay panel that they said did not allow phone calls between dormitory floors and other buildings, making communication difficult in the event of a campus-wide emergency, such as a fire or security issue. At another school, staff showed us exterior doors to campus buildings that did not lock properly and, as a result, needed to be chained during school lockdowns. According to officials at the school, about 90 percent of building entrances also lacked exterior security cameras, and some buildings, such as student dormitories, had none. These challenges were highlighted during our visit when the school had to perform a lockdown after a student made a Columbine-type threat. During our ongoing work, some school officials told us that they receive less than their current estimated funding needs for facility operations, which include fixed-cost items like fuel and electricity. For example, one school official told us that facility operations were funded at about 50 percent of the school’s need. Such shortfalls in operations funds can require a school to draw from its maintenance funds to keep the lights on and buildings warm in the winter, leaving less money for building maintenance.
For example, officials with one school told us they may defer maintenance or cut back maintenance staff if they do not have enough funds for their operations and maintenance. Officials with several schools noted using funds intended for educational purposes on facility operations. Deferring maintenance can lead to bigger problems with school facilities. For example, an official with a BIE education line office pointed out that the poorly maintained rain spouts on one building of a BIE-operated school led to water collecting behind the retaining wall, resulting in separation between the sidewalk and the building. Over time, this water intrusion may undermine the foundation. (See fig. 4.) In 2008, we reported that federal agencies’ backlogs represent a fiscal exposure that may have a significant effect on future budget resources. Further, the 2011 Negotiated Rulemaking Committee report observed that without enough maintenance funds, schools’ maintenance needs go unmet, deferred maintenance grows, the quality of the physical plant deteriorates far more rapidly than it should, and the cost of repairs increases. According to the 2011 report, over decades, shortchanging spending on building maintenance degrades learning environments, shortens the overall life of school buildings, and results in increased costs for the federal government to fix these schools. (See fig. 5.) During our ongoing review, several of the schools we visited reported encountering problems with new construction. For example, officials at three schools said they encountered leaks with roofs installed within the past 11 years. According to officials at one school, despite two replacements, the roof of their gymnasium—completed in 2004—continues to leak. Officials said the company that built the gymnasium has since filed for bankruptcy. Other construction problems at the school included systems inside buildings as well as building materials.
For example, in the cafeteria’s kitchen at this BIE-operated school, a high voltage electrical panel was installed next to the dishwashing machine, which posed a potential electrocution hazard. School facility staff told us that although the building inspector and project manager for construction approved this configuration before the building opened, safety inspectors later noted that it was a safety hazard. (See fig. 6.) Officials at an elementary school we visited also reported problems with new construction. School officials noted that the heat pumps in their new school facility did not have the capacity to adequately heat the building, leading to cold classrooms and frequent pump failures in the winter months. They also noted that the construction did not include a backup generator, creating a risk of freezing pipes during winter power failures. After our visit, school officials reported that a large concrete fragment fell from the upper wall of a kindergarten classroom in the new school building. The classroom was unoccupied at the time. Preliminary results from our work indicate that key challenges at Indian Affairs are impeding effective management of BIE school facilities. These challenges include limited staff capacity, inconsistent accountability, and poor communication. These findings are consistent with our prior BIE work, in which we found that Indian Affairs had similar challenges overseeing BIE schools in other areas, such as in financial management and workforce planning. Given Indian Affairs’ school facility management challenges, a few schools in one region have developed their own facility management program to ensure their needs were met. Our ongoing work suggests that the capacity of BIA regional facilities and BIE school staff to address school facility needs is limited due to steady declines in staffing levels, gaps in technical expertise, and limited institutional knowledge.
BIA regional officials and school officials we interviewed noted significant challenges with staff capacity. In addition, our prior work and other studies have cited the lack of capacity of Indian Affairs’ facility staff as a longstanding agency challenge. BIA Regions. Staff in certain regions told us that they have experienced declining staffing levels for over a decade, despite key responsibilities in overseeing BIE school construction and repair projects as well as supporting schools with technical assistance. Our preliminary analysis of Indian Affairs data shows that about 40 percent of regional facility positions are currently vacant, including regional facility managers, architects, and engineers who typically serve as project managers for school construction. In one BIA region serving over 15 BIE schools—along with additional Indian Affairs facilities such as detention centers—the regional facility staff has decreased by about half in the past 15 years, according to the regional facility manager. As of December 2014, two project managers were tasked with overseeing a growing workload of construction projects, among other duties, and only one was a licensed professional, according to the regional facility manager. The regional facility manager also noted gaps in internal staff expertise, such as not having a mechanical engineer on staff to review designs or external engineers’ assessments of systems such as heating and air conditioning. Regional staff said that hiring an in-house boiler inspector would allow them to conduct more frequent inspections and may cost less than hiring contractors to do so. Without staff with particular construction expertise, several Indian Affairs officials said that they have increasingly relied on outside contractors.
As we have previously reported, risks to the federal government of extensive reliance on contractors include not building institutional expertise as well as a reduced federal capacity to manage the costs of contractors and to ensure achievement of program outcomes. Schools. Officials at several schools we visited said they face similar capacity challenges. For example, we visited an elementary school with one full-time employee for facility maintenance, along with one part-time assistant. A decade ago, the school had about six maintenance employees, according to school officials. As a result of the staffing decrease, school officials said that facility maintenance staff may sometimes defer needed maintenance. Leading facility management practices emphasize the importance of having managers with sufficient technical expertise. Staff capacity is important because the appropriate geographic and organizational deployment of employees can further support organizational goals and strategies and enable an organization to have the right people, with the right skills, doing the right jobs, in the right place, at the right time. However, we have previously reported that limited staff capacity at Indian Affairs impedes its oversight and support of BIE schools and that this runs counter to effective human capital practices. Specifically, we recommended that Indian Affairs revise its strategic workforce plan to ensure that employees providing administrative support to BIE have the requisite knowledge and skills to help BIE achieve its mission and are placed in the appropriate offices. Indian Affairs agreed to implement the recommendation but has not yet done so. Our preliminary results suggest that Indian Affairs has not provided consistent oversight of some school construction projects, including projects it managed itself and projects managed by tribes.
According to Indian Affairs and school officials we interviewed, some recent construction projects, including new roofs and buildings, have gone relatively well, while others have faced numerous problems. The problems we found with construction projects at some schools suggest that Indian Affairs is not fully or consistently applying management practices to ensure contractors perform as intended. For example, at one BIE-operated school we visited, Indian Affairs managed a $3.5 million project in which a contractor replaced roofs in 2010, but the roofs have leaked since their installation, according to agency documents. These leaks have led to mold in some classrooms and numerous ceiling tiles having to be removed throughout the school. (See fig. 7.) In 2011, this project was elevated to a senior official within Indian Affairs, who was responsible for facilities and construction. He stated that the situation was unacceptable and called for more forceful action by Indian Affairs. Despite numerous subsequent repairs of roofs, school officials and regional BIA officials told us in late 2014 that the leaks continue. They also said that they were not sure what further steps, if any, Indian Affairs would take to resolve the leaks or hold the contractors or suppliers accountable, such as filing legal claims against the contractor or supplier if appropriate. Indian Affairs and school officials identified another recent construction project that has faced problems. At a tribally-operated school we visited in South Dakota, the school managed a project to construct a $1.5 million building for maintenance and bus storage. According to these officials, although the project was nearly finished at the time of our visit, Indian Affairs, the school, and the contractor still had not resolved various issues, including drainage and heating problems.
Further, part of the new building for bus maintenance has one hydraulic lift, but the size of the building does not allow a large school bus to fit on the lift when the exterior door is closed because the bus is too long. Thus, staff using the lift would need to maintain or repair a large bus with the door open, which is not practical in the cold South Dakota winters. (See fig. 8.) According to Indian Affairs officials, part of the difficulty with this project resulted from the tribally-operated school’s use of a contractor responsible for both the design and construction of the project, which limited Indian Affairs’ ability to oversee it. Indian Affairs officials said that this arrangement, known as “design-build,” may have potential advantages such as faster project completion, but may also give greater discretion to the contractor responsible for both the design and construction of the building. For example, Indian Affairs initially raised questions about the size of the building to store and maintain buses. However, agency officials noted that the contractor was not required to incorporate Indian Affairs’ comments on the building’s design or obtain its approval for the project’s design, partly because Indian Affairs’ policy does not appear to address design approval in a “design-build” project. Further, neither the school nor Indian Affairs used particular financial incentives to ensure satisfactory performance by the contractor. Specifically, according to school officials, the school had already paid the firm nearly the full amount of the project before final completion, leaving it little financial leverage over the contractor. If problems persist with building construction, one accountability mechanism is to retain a portion of a project payment.
However, certain Indian Affairs officials held conflicting views on whether withholding project payments—known in the industry as retainage—is suitable to hold contractors accountable for satisfactory completion of school construction projects. For example, officials with the Division of Facilities Management and Construction told us they usually retain 10 percent of payments until an independent inspection of a construction project has been conducted. However, officials in one BIA region said that the region tends not to use this mechanism for school construction, due partly to past practice. In prior work, we have found that retainage can be a strong motivator to encourage contractor and subcontractor performance. Although the applicability of such project accountability mechanisms may vary in amount and may depend on the particular situation or project, we have found that the federal government can be protected from poor quality construction if it appropriately uses the various tools at its disposal to manage and address problems.

Our preliminary results also suggest that unclear lines of communication and confusion among BIE schools about the roles and responsibilities of the various Indian Affairs offices responsible for facility issues hamper efforts to address school facility needs. For example, the offices involved in facility matters continue to change, due partly to two ongoing reorganizations of BIE, BIA, and the Division of Facilities Management and Construction over the past 2 years. BIE and tribal officials at some schools we visited said they were unclear about what office they should contact about facility problems or to elevate problems that are not addressed. At one school we visited, a BIE school facility manager submitted a request in the Facilities Management Information System (FMIS) by February 2014 for a needed repair to replace a water heater so that students and staff would have hot water in the elementary school.
However, the school did not designate this repair as an emergency. Therefore, BIA facility officials told us that they were not aware of this request until we brought it to their attention during our site visit in December 2014. Even after we did so, it took BIE and BIA officials over a month to approve the purchase of a new water heater, which cost about $7,500. As a result, students and staff at the elementary school went without hot water for about a year.

Another communication challenge that our ongoing work has identified for all BIE schools and BIA regions is that BIE last updated its directory, which contains contact information for BIE and school officials, in 2011. This may impair communications, especially given significant turnover of BIE and school staff. As a result, we believe that school and BIA officials may not be able to share timely information with one another, which would affect schools' funding levels and priorities for repairs. For example, in one BIA region we visited, officials have experienced difficulty reaching certain schools by email and sometimes rely on sending messages by fax to obtain schools' priorities for repairs. This situation is inconsistent with federal internal control standards that call for effective internal communication throughout an agency.

These preliminary findings are consistent with findings from our past work in 2013, when we testified and reported on communication challenges impeding effective operation of BIE schools. Specifically, at that time we found that several officials at schools and BIE seemed confused about whom to consult or make requests for assistance about school facilities. In addition, we found that unclear communication undermined other aspects of school operations, such as annual testing of students.
Thus, at that time we recommended that Indian Affairs develop a communication strategy for BIE to inform its schools and key stakeholders of critical developments that impact instruction in a timely and consistent manner to ensure that BIE school officials receive information that is important for the operation of their schools. In early 2014, BIE developed a draft communication plan, but it has not yet been finalized, and it does not specifically address communication about school facility issues. More recently, Indian Affairs officials indicated to us that the agency does not plan to finalize its communication strategy until mid-2016 given that the organizational changes resulting from the two reorganizations since 2013 have not been fully implemented. While we recognize that the reorganizations have led to substantial changes in the roles and responsibilities of offices within Indian Affairs, we continue to believe that Indian Affairs needs a strategy to improve communication with BIE schools, especially given schools' confusion about which offices to contact about facilities and other issues.

During our ongoing work, we identified an alternative program that some schools developed to ensure their facility needs were met given Indian Affairs' facilities management challenges. Four tribally-operated schools in one region created their own facilities management program because, according to program officials, they were dissatisfied with the amount of time it took BIA to complete facilities-related projects, including a building project that officials said took about 7 years to complete. They also said that they were frustrated that their input was not always solicited on proposed improvements to their facilities.
Consequently, in 1997, the four schools—in conjunction with their tribal stakeholders—formed the Eastern Oklahoma Tribal Schools Facilities Management Program, a non-profit consortium of tribally-operated schools in Eastern Oklahoma, to meet their facility needs. According to program officials, its operations are financed primarily through administrative fees for project management services added to the schools' backlog items, which are reimbursed by Indian Affairs. Currently, the program comprises three professional staff—two architects and a production technician—who maintain in-house technical expertise and manage construction, project design, and oversight for the schools. In addition, program officials said that they routinely enter backlog data in FMIS because schools typically do not have the time, technical expertise, or capacity to do it themselves. An official with Indian Affairs' Division of Facility Management and Construction told us that the Eastern Oklahoma Tribal Schools Facilities Management Program reflects a promising approach to managing facilities, but Indian Affairs has not taken steps to disseminate information on this approach among schools. In our ongoing work, we plan to further review this approach and any others to determine how and whether Indian Affairs can leverage any promising practices to help address systematic school facility management challenges.

- - - - -

In conclusion, the federal government, through the Department of the Interior, has a trust responsibility for the education of Indian students, including building and maintaining school facilities. High quality school facilities are extremely important to ensure that Indian students are educated in a safe environment that is conducive to learning. However, for decades, Indian Affairs has been hampered by fundamental challenges in managing school facilities.
In our previous work, we have also found significant weaknesses with Indian Affairs' oversight of BIE schools in general. In addition, our preliminary work shows that Indian Affairs continues to face challenges in ensuring that critical school facility data are collected, staffing levels and technical expertise are strengthened, construction projects are appropriately designed and managed, and roles and responsibilities are clearly defined and communicated. Unless these issues are addressed, some students will continue to be educated in poor facilities that do not support their long-term success. We will continue to monitor these issues as we complete our ongoing work and consider any recommendations that may be needed to address these issues.

Chairman Calvert, Ranking Member McCollum, and Members of the Subcommittee, this concludes my prepared statement. I will be pleased to answer any questions that you may have.

For future contact regarding this testimony, please contact Melissa Emrey-Arras at (617) 788-0534 or [email protected]. Key contributors to this testimony were Elizabeth Sirois (Assistant Director), Edward Bodine, Lisa Brown, Lara Laufer, Matthew Saradjian, and Ashanta Williams. Also providing legal or technical assistance were James Bennett, David Chrisinger, Jean McSween, Jon Melhus, and James Rebbe.

This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
BIE oversees 185 elementary and secondary schools that serve approximately 41,000 students on or near Indian reservations in 23 states. In 2014, Interior's Office of the Assistant Secretary-Indian Affairs funded the operations, maintenance, construction, and repair of about 1,785 educational and dormitory buildings, which are worth an estimated $4.2 billion. Recent reports have raised concerns about the physical condition of these facilities and their effect on Indian students' educational outcomes. Several studies indicate that better school facilities are associated with better student outcomes. This testimony reports on ongoing GAO work related to the conditions of BIE schools. A full report will be issued later this year. Based on GAO's preliminary findings, this testimony focuses on: (1) what is known about the conditions of selected BIE schools and (2) the extent to which Indian Affairs effectively oversees and supports BIE school facilities. For this work, GAO is reviewing agency data, documentation, and relevant federal laws and regulations; interviewing agency officials; and has conducted site visits to schools in three states, which were selected based on their geographic diversity and other factors.

Information on the physical condition of Bureau of Indian Education (BIE) schools is not complete or accurate as a result of longstanding issues with the quality of data collected by the Department of the Interior's (Interior) Office of the Assistant Secretary-Indian Affairs (Indian Affairs). GAO's preliminary results indicate that issues with the quality of data on school conditions—such as inconsistent data entry by schools and inadequate quality controls—make determining the number of schools in poor condition difficult. These issues impede Indian Affairs' ability to effectively track and address school facility problems.
While national information is limited, GAO's ongoing work has found that BIE schools in three states faced a variety of facility-related challenges, including problems with the quality of new construction, limited funding, remote locations, and aging buildings and infrastructure (see figure below). GAO's ongoing work also indicates that several key challenges at Indian Affairs are impeding effective management of school facilities. Specifically, GAO found declines in staffing levels and gaps in technical expertise among facility personnel in Indian Affairs. Further, GAO found that Indian Affairs did not provide consistent oversight of some school construction projects. At a school GAO visited, Indian Affairs managed a $3.5 million project to replace school roofs. Yet the replacement roofs have leaked since they were installed in 2010, causing mold and ceiling damage in classrooms. Indian Affairs has monitored this situation but has not addressed problems with the roofs. Indian Affairs' facility management is also hindered by poor communication with schools and tribes and confusion about whom to contact to address facility problems. Poor communication has led to some school facility needs not being met. For example, school officials submitted a request for funding to address their school's lack of hot water almost a year before GAO visited the school, but Indian Affairs facility officials were unaware of this until notified by GAO. GAO's preliminary results indicate that these persistent challenges diminish Indian Affairs' capability to oversee and support facilities and provide technical assistance to schools. They also run counter to federal internal control standards and leading practices on workforce planning and construction project accountability.
In recent years, Gallup opinion polls have indicated that the American public is concerned about crime and related violence. For example, in a 1995 poll, 27 percent of the respondents listed crime and violence as the most important problems facing the country. Polls also have suggested that tougher anticrime legislation is a top priority for the public. For instance, 80 percent of the respondents to a 1996 Gallup Poll supported life sentences for drug dealers. Congress has authorized grants to the states that support tougher sentencing policies for criminals and expanded prison construction to house the growing number of inmates. For example, in the Department of Justice’s 1996 appropriations, Congress authorized about $10.3 billion in grants to states for fiscal years 1996 through 2000 for, among other things, building or expanding correctional facilities to house persons convicted of violent crimes. According to BJS’ National Crime Victimization Survey (April 1996), there were 51 violent victimizations per 1,000 U.S. residents in 1994, which was the latest year that complete data were available. Since its inception in 1973, the survey has determined that crime rates and levels have fluctuated over extended periods. Specifically, violent crime rates leveled off between 1992 and 1994, compared with a 20-percent decline between 1981 and 1986 and a 15-percent rise between 1986 and 1991. Property crime continued a general 15-year decline. The survey did not provide any reasons for the fluctuations in crime rates and levels. Even though crime rates have fluctuated, overall crime rates in the 1990s remain substantially higher than those in the 1960s. For example, according to Uniform Crime Reports data compiled by the Federal Bureau of Investigation, the nation’s overall crime rate was about 2,000 crimes per 100,000 residents in the early 1960s compared with 5,374 crimes per 100,000 residents in 1994. 
Against the backdrop of these higher crime rates, there is a continuing debate over the use of incarceration as a means of addressing increasing crime. Both proponents and opponents of increasing the use of incarceration as a solution to the crime problem can cite research to support their positions. For example, proponents of increased incarceration assert that investing in new prisons will have long-term benefits of crime reduction. On the other hand, critics of increased incarceration argue that continued prison-building is wasteful and unaffordable and is unlikely to affect crime rates.

In 1994, RAND issued a study of California's "three strikes" law, which mandates sentences ranging from 25 years to life for certain three-time felony offenders. The study, which weighed crime reduction and cost, concluded that the California law, if fully implemented, would reduce serious felonies committed by adults in the state by between 22 and 34 percent below the level that would otherwise have occurred. The study also concluded that the reduction in crime would be achieved at an additional cost of between $4.5 billion and $6.5 billion in current dollars annually. According to the study, most of the cost increase would result from the need to build and operate additional prisons to house the inmate population, which could be expected to double as a result of sentencing under the law. A more recent RAND study indicates that some preventive measures, such as parent training and graduation incentives, could potentially reduce crime rates more cost-effectively than incarceration.

Federal and state prison inmate populations have been growing since 1980, reaching about 1.1 million inmates in 1995. Federal and state corrections agencies and nongovernmental forecasting organizations project that these populations will continue to grow, potentially reaching 1.4 million inmates in 2000. Prison operating and capital costs have also been growing and are projected to continue doing so in the future.
For federal and state prisons, operating and capital costs cumulatively totaled about $163 billion for fiscal years 1980 through 1994. From 1980 to 1995, which was the latest year that complete data were available, the total U.S. prison inmate population under federal and state jurisdiction grew by about 242 percent, from 329,821 to 1,127,132. The corresponding average annual prison population growth rate during this period was 8.5 percent (9.9 percent for the federal population and 8.4 percent for the state populations). The prison population increased at a slower rate—6.8 percent—between 1994 and 1995 than the average growth rate. Although an August 1996 BJS report on prison and jail inmates—the source for our prison inmate population data—did not provide specific reasons for the decrease in the rate of growth, a BJS official commented that the smaller growth rate may be the result of the growing population base, currently over 1 million inmates. Nevertheless, we do not know whether this marks the start of a trend toward smaller rates of growth.

As previously shown in figure 1, the federal prison inmate population grew from 24,363 in 1980 to 100,250 in 1995, which is an increase of about 312 percent. The state prison population grew from 305,458 inmates in 1980 to 1,026,882 inmates in 1995, which is an increase of about 236 percent. In California, the state prison inmate population grew from 24,569 in 1980 to 135,646 in 1995, which is an increase of about 452 percent. In Texas, the inmate population grew from 29,892 in 1980 to 127,766 in 1995, which is an increase of about 327 percent. Not all states exhibited inmate population increases to such an extent. For example, in Maine, the inmate population grew from 814 in 1980 to 1,447 in 1995, which is an increase of about 78 percent. In North Carolina, the inmate population grew from 15,513 in 1980 to 29,374 in 1995, which is an increase of about 89 percent.
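The percentage increases and average annual growth rates cited in this section follow mechanically from the endpoint counts; a brief sketch of the arithmetic, using the population figures quoted above:

```python
def pct_increase(start, end):
    """Total percentage increase between two endpoint counts."""
    return (end - start) / start * 100

def avg_annual_growth(start, end, years):
    """Compound average annual growth rate, in percent."""
    return ((end / start) ** (1 / years) - 1) * 100

# Total, federal, and state inmate populations, 1980 vs. 1995 (15 years)
total_pct = pct_increase(329_821, 1_127_132)            # ~242 percent overall
federal_rate = avg_annual_growth(24_363, 100_250, 15)   # ~9.9 percent per year
state_rate = avg_annual_growth(305_458, 1_026_882, 15)  # ~8.4 percent per year
```

The same two helpers reproduce the other endpoint comparisons in this section; for example, California's growth from 24,569 to 135,646 inmates works out to about 452 percent.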
Corresponding to the growth in prison populations, the incarceration rates for federal and state prison inmates have also shown steady growth during the 16-year period of 1980 through 1995. As figure 2 shows, the total incarceration rate grew from 145 inmates in 1980 to 428 inmates in 1995 for every 100,000 U.S. residents, which is an increase of about 195 percent. Reflecting an even larger percentage increase (about 245 percent), the incarceration rate for federal inmates grew from 11 inmates for every 100,000 residents in 1980 to 38 inmates for every 100,000 residents in 1995. Because most prisoners are under state jurisdiction, the incarceration rate for state inmates closely follows (and, indeed, is largely determinative of) the nation’s total incarceration rate. Specifically, the incarceration rate for states grew from 134 inmates for every 100,000 residents in 1980 to 390 inmates for every 100,000 residents in 1995, which is an increase of about 191 percent. During this period, the incarceration rate in California increased 312 percent, growing from 104 inmates for every 100,000 residents in 1980 to 428 inmates for every 100,000 residents in 1995. The incarceration rate in Texas increased 222 percent, growing from 210 inmates to 677 inmates for every 100,000 residents in 1980 and 1995, respectively. According to various sources, including BJS and the U.S. Sentencing Commission, the significant growth in federal and state inmate populations since the 1980s is largely the result of changes in criminal behavior, law enforcement practices, sentencing law and policy, and release practices. For example, according to BJS, during the 1980s, an increasing number of probation and parole violators returned to prison, while in the 1990s, declining rates of release have sustained the growth in inmate populations. 
More specifically, regarding federal offenders, under the Sentencing Reform Act of 1984, parole was abolished, and good-time credits (time off sentence for good behavior) were limited to 54 days per year. In 1986, the Anti-Drug Abuse Act established mandatory minimum sentences for certain drug offenses. In 1988 and 1990, Congress passed additional sentencing legislation, which increased mandatory minimum sentences for drug and weapons offenses. As a result of these statutory changes, the use of probation has been reduced and the length of prison stays has increased. According to BJS data, after 1986, the average time served in federal prisons increased from 15 months to 24 months. For violent offenses, the time served increased from 50 months to 56 months, and the time served for drug offenses increased from 22 months to 33 months. Particularly noteworthy has been the trend regarding drug offenders as a percentage of the total inmate population. According to a 1991 study by the U.S. Sentencing Commission, drug offenders constituted about 91 percent of all federal defendants sentenced under mandatory minimum provisions. According to BJS, in 1993, which was the latest year that complete data were available, drug offenders constituted 26 percent of all federal and state inmates, whereas these offenders constituted 8 percent of all inmates in 1980. Also, BJS has reported that the increase in drug offenders accounts for nearly three-fourths of the total growth in federal prison inmates since 1980. The state prison inmate populations have grown as a result of, among other things, the increased number of arrests, higher probabilities of incarceration, and more severe sanctions. Specifically, according to BJS, the number of arrests increased by 41 percent between 1980 and 1993, the latest year that complete data were available. The rate of sending offenders to prison also increased. 
For example, the likelihood of incarceration increased 5-fold for drug violations and 4-fold for weapons offenses. According to the California Department of Corrections, the prison population in that state has grown because of court decisions, voter initiatives, and legislation, all of which have resulted in stronger law enforcement and more severe criminal sanctions. For example, a California law prohibits the use of good-time allowances to reduce the sentences of repeat offenders convicted of certain violent felonies. State corrections officials expected that the law may result in inmates’ serving additional time, which could lead to an increase in the state’s prison population in future years. While sources differed somewhat in their projected growth for federal and state prison inmate populations, they all showed substantial anticipated increases for these populations in 2000 and beyond. In June 1996, BOP projected that the federal prison population could reach about 125,000 inmates by 2000, which is a 25-percent increase over the 1995 level (see table 1). In July 1995, NCCD projected that, under sentencing policies in effect in 1994, the total inmate population for federal and state prisons could reach 1.4 million by 2000, which is an increase of 24 percent over the 1995 level. NCCD also projected that, if all states were to adopt truth-in-sentencing statutes, which would require inmates to serve at least 85 percent of their sentences, the states’ prison population could grow by an additional 190,000 inmates and total about 1.6 million inmates by 2000, which would be an increase of about 42 percent over the 1995 level. The April 1996 issue of the Corrections Compendium presented a compilation of inmate population projections that were based on a survey of federal and state corrections agencies. 
The combined self-reported projections showed that the federal and state prison population could reach over 1.3 million in 2000, representing an increase of 19 percent over the 1995 level. However, the survey summary in the Compendium indicated that this total may be understated. According to the summary, if the historical growth rate (8.7 percent per year from 1980 through 1994) continues in future years, the prison population could actually increase by 95 percent over the 1994 level, essentially doubling to about 2 million inmates by 2002. In July 1995, NCCD projected that the inmate population in California could reach about 210,000 by 2000, which would be an increase of 55 percent over the 1995 level. Separately, in spring 1996, the California Department of Corrections projected that the state’s prison population could reach 203,593 inmates in 2000, which would be an increase of about 50 percent over the 1995 level. For Texas, in July 1995, NCCD projected that the inmate population could reach about 149,000 by 2000, which would be an increase of about 17 percent over the 1995 level. In September 1996, the Texas Criminal Justice Policy Council projected that the state’s prison population could reach 143,748 in 2000, which would be an increase of about 13 percent over the 1995 level. Appendix II presents additional information about actual and projected federal and state prison inmate populations and incarceration rates. Prison operating costs grew steadily during fiscal years 1980 to 1994, reflecting in part the growth in prison populations. As figure 3 shows, total U.S. prison operating costs grew from about $3.1 billion in fiscal year 1980 to about $17.7 billion in current dollars in fiscal year 1994. 
This is an increase of 224 percent based on constant or inflation-adjusted dollars. Of this total, federal prison operating costs grew from about $319 million in fiscal year 1980 to about $1.9 billion in fiscal year 1994, which is an increase of about 242 percent based on constant dollars. The corresponding average annual growth rate during this period was 9.9 percent. State prison operating costs grew from about $2.8 billion in fiscal year 1980 to $15.8 billion in fiscal year 1994, which is an increase of 222 percent based on constant dollars. The corresponding average annual growth rate during this period was 8.7 percent. In California, operating costs grew from about $320 million in fiscal year 1980 to $2.6 billion in fiscal year 1994, which is an increase of 357 percent based on constant dollars. In Texas, operating costs grew from about $105 million in fiscal year 1980 to $1.2 billion in fiscal year 1994, which is an increase of 529 percent based on constant dollars.

Prison capital costs, while growing overall, have actually fluctuated on almost a year-to-year basis during fiscal years 1980 to 1994. As figure 4 shows, total U.S. prison capital costs grew from about $538 million in fiscal year 1980 to about $2.3 billion in current dollars in fiscal year 1994. This is an increase of 141 percent based on constant or inflation-adjusted dollars. Federal prison capital costs grew from about $22 million in fiscal year 1980 to about $312 million in fiscal year 1994, representing an increase of about 715 percent (based on constant dollars). The corresponding average annual growth rate during this period was 87.9 percent. State prison capital costs grew from about $516 million in fiscal year 1980 to about $2 billion in fiscal year 1994, representing an increase of about 116 percent based on constant dollars. The corresponding average annual growth rate during this period was 7.4 percent.
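The "constant dollar" figures in this discussion deflate each year's spending by a price index before comparing, which is why the real increases are smaller than the current-dollar ones. A minimal sketch of that conversion; the 1994 price index of 1.76 (relative to a 1980 base of 1.00) is an illustrative value chosen to roughly reproduce the reported 224 percent, not the actual deflator GAO used:

```python
def real_increase(start_nominal, end_nominal, start_index, end_index):
    """Percentage increase after expressing both amounts in the
    same base year's dollars via a price index."""
    start_real = start_nominal / start_index
    end_real = end_nominal / end_index
    return (end_real - start_real) / start_real * 100

# Total prison operating costs, FY 1980 vs. FY 1994, in billions of
# current dollars; the 1.76 price index is illustrative only.
nominal_pct = (17.7 - 3.1) / 3.1 * 100           # ~471 percent in current dollars
real_pct = real_increase(3.1, 17.7, 1.00, 1.76)  # ~224 percent in constant dollars
```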
In California, capital costs grew from $16 million in fiscal year 1980 to $413 million in fiscal year 1994, which is an increase of 1,327 percent based on constant dollars. In Texas, capital costs grew from $20 million in fiscal year 1980 to about $577 million in fiscal year 1994, which is an increase of 1,531 percent based on constant dollars.

BOP projects that federal prison operating costs will grow through fiscal year 2000, while capital costs are expected to fluctuate on a year-by-year basis. Specifically, in June 1996, BOP projected that its operating costs could grow to about $3.6 billion by fiscal year 2000, almost double the level in fiscal year 1994. BOP also projected that its capital costs for new federal prisons scheduled to begin operations during fiscal years 1996 to 2006 could total about $4 billion. According to Justice officials, these cost increases were projected on the basis of historically high rates of prison population increases. According to these officials, since recent BJS statistics show that the rate of increase in prison populations from 1994 to 1995 was below the average for the preceding 5 years, the BOP cost projections for 2000 and beyond may be overestimated.

NCCD has estimated that state prison population increases from 1995 to 2000 could result in total additional capital and operating costs of $32.5 billion to $37 billion for this period. Specifically, NCCD estimated that $10.6 billion to $15.1 billion could be needed to construct additional state prisons, and that $21.9 billion could be needed by the end of the decade to operate these prisons. Appendix III presents additional information about actual and projected federal and state operating and capital costs.

BOP, NCCD, California, and Texas each use microsimulation models to project prison inmate populations.
The models are similar in providing flexibility to adjust assumptions and data in response to new sentencing laws or policies and other criminal justice or law enforcement initiatives that could affect the size of prison populations in the respective jurisdiction. Appendix IV provides more detailed information about microsimulation and other models and methodologies used to project inmate populations.

On the basis of a literature search and discussions with federal and state agency officials, we did not identify any independent assessments of the various projection models' validity or reliability, except for BOP's model. This model, according to BOP officials, has been subjected to various reviews. For example, the officials made the following comments:

- In 1993, BOP staff published a paper (which was peer reviewed) on the projection methodology.
- Justice's budget staff annually reviews BOP's inmate population projections and often reports to the Attorney General on the accuracy of the projections.

Some of the forecasting organizations and state corrections agencies have tracked and self-reported on the accuracy of their respective projections. For example, according to BOP, its projections of federal prison inmate populations for 1991 to 1995 were within 1.4 percent (on average) of the actual populations. Also, according to NCCD, its projections for 1991 through 1994 were within 2 percent (on average) of the actual populations. However, BOP officials and a modeling expert said that comparing projections with actual populations is not necessarily the only way to assess a model's reliability. For instance, the officials noted that after projections showing potential impacts are presented or published, legislators or administrators are more likely to modify or change certain policies or practices, taking the projections into consideration. Thus, according to these officials, another benefit of a population simulation is to inform the public policy debate.
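A microsimulation of the kind these agencies use tracks simulated inmates one at a time through admission, time served, and release, which is what lets analysts change a sentencing assumption and rerun the projection. The sketch below is a heavily simplified illustration with made-up admission counts and sentence distributions; `project_population` and all of its parameter values are hypothetical, and this is not BOP's or NCCD's actual model:

```python
import random

def project_population(start_pop, years, admissions_per_year,
                       mean_sentence_years, pct_served, seed=0):
    """Toy cohort microsimulation: each simulated inmate is tracked
    individually, so sentencing-policy parameters apply per inmate."""
    rng = random.Random(seed)
    # Current inmates, represented by years remaining to serve
    # (spread uniformly for illustration).
    inmates = [rng.uniform(0, mean_sentence_years) for _ in range(start_pop)]
    for _ in range(years):
        # Each new admission draws a sentence; time served is the
        # required fraction of that sentence.
        inmates += [rng.expovariate(1 / mean_sentence_years) * pct_served
                    for _ in range(admissions_per_year)]
        # A year passes; inmates whose time is up are released.
        inmates = [t - 1 for t in inmates if t > 1]
    return len(inmates)

def mean_abs_pct_error(projected, actual):
    """Average percentage gap between projected and actual counts, the
    kind of accuracy measure BOP and NCCD self-reported."""
    return sum(abs(p - a) / a for p, a in zip(projected, actual)) / len(actual) * 100

# With hypothetical inputs, raising the required fraction of sentence
# served (as under a truth-in-sentencing rule) raises the projection,
# holding admissions constant.
low = project_population(1_000, 5, 400, 3.0, pct_served=0.50)
high = project_population(1_000, 5, 400, 3.0, pct_served=0.85)
```

Scaling the fraction of sentence served from 0.50 to 0.85 mirrors the kind of truth-in-sentencing scenario NCCD modeled, and `mean_abs_pct_error` corresponds to the "within 1.4 percent (on average)" accuracy figure BOP reported for its projections.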
The April 1996 issue of the Corrections Compendium presented the results of a survey that asked respondents to report on the accuracy of their models’ population projections. The survey was originally sent to federal and state corrections agencies in October 1995, and the responses with the applicable data were collected through February 1996. Of the 39 respondents to this question, 54 percent reported that their past projections were “accurate,” 23 percent reported that their past projections were “too low,” and 8 percent said their past projections were “too high.” The other respondents to the overall survey reported that they either did not project populations or did not assess the accuracy of their projections. We obtained comments on the draft of this report from Justice officials, including the Director of Justice’s Audit Liaison Office, BOP’s Chiefs of Research and Evaluation and Budget Development, and BJS’ Chief of Corrections Statistics. These officials generally agreed with the contents of the draft report. However, BOP and BJS officials provided technical comments and clarifications related to certain numerical data in the report. Also, the BOP officials provided revised federal prison inmate data. We have incorporated these technical comments, clarifications, and revisions where appropriate in this report. Regarding prison costs, BJS officials expressed the view that actual expenditure data compiled and published by the Census Bureau (e.g., Census of Government Finances) would be more accurate and complete than data from The Corrections Yearbook, the source we used for the draft of this report. Accordingly, from the Census Bureau, we obtained state prison expenditure data for fiscal years 1980 through 1994 (the latest year that complete data were available), and we incorporated this information and revised the related analyses in this report where appropriate. 
In our draft report, we noted that we did not identify any independent assessments regarding the validity or reliability of the various models used to project federal and state prison inmate populations. However, in commenting on the draft report, BOP officials called to our attention examples of various reviews or evaluations that could be considered assessments of the Bureau’s microsimulation model. We incorporated BOP’s comments and examples in this report. We also obtained comments on the draft of this report from NCCD’s Executive Vice President and an NCCD Senior Researcher. These officials agreed with the contents of this report and stated that it factually represented information and statistical data developed by and previously published by NCCD. The officials also offered one technical clarification, which we have incorporated in this report. We are providing copies of this report to the Attorney General; the Assistant Attorney General, the Office of Justice Programs; the Director, BOP; and other interested parties. Copies also will be made available to others upon request. The major contributors to this report are listed in appendix V. Please contact me on (202) 512-8777 if you or your staff have any questions. We initiated this review to identify (1) the trends in federal and state prison inmate populations and operating and capital costs since 1980, including projections for 2000 and beyond and the reasons for these trends and (2) the models and methodologies used by federal and state corrections agencies and nongovernmental forecasting organizations to make these projections, including whether any validity or reliability assessments had been done. To address these objectives, we initially conducted a literature search to identify available data sources and to determine to what extent these issues had received congressional attention. 
In the latter regard, we noted that a Subcommittee of the House Committee on the Judiciary held hearings in 1993 that were useful in our analyses. More specifically, to identify the trends in prison populations and costs, we contacted relevant federal agencies, such as the Bureau of Justice Statistics (BJS), the Federal Bureau of Prisons (BOP), and the U.S. Bureau of the Census, and corrections agencies in the two states with the largest prison populations (California and Texas). BJS compiles and publishes considerable statistical information covering both federal and state correctional systems. For example, two relevant series of BJS publications are the Sourcebook of Criminal Justice Statistics and Correctional Populations in the United States. The Census Bureau compiles and publishes, among other things, statistical information about federal and state government expenditures. For example, relevant series of Census Bureau publications are the Census of Government Finances and State Government Finances. From BOP, we obtained historical as well as projected data covering both populations and costs for federal prisons. From the state agencies, we obtained and reviewed historical and projected prison inmate population data. State agency officials told us that prison operating and capital costs generally are not projected beyond the next fiscal year. Furthermore, in identifying prison population and cost trends, we also contacted nongovernmental sources, such as the National Council on Crime and Delinquency (NCCD). As a private organization engaged in research, training, and advocacy programs to reduce crime and delinquency, NCCD has published several studies of prison-related topics, including projections of inmate populations. Also, another useful nongovernmental source was the Corrections Compendium, which is a journal from CEGA Publishing. 
We discussed the population and cost data we obtained with cognizant officials at the federal and state agencies and the nongovernmental organizations. We did not independently verify the accuracy and quality of the data we obtained. To identify the models and methodologies used by federal and state corrections agencies and nongovernmental organizations to make projections, we obtained and reviewed modeling and methodology information from BOP, NCCD, the Corrections Compendium, the California Department of Corrections, and the Texas Criminal Justice Policy Council. We focused our review on BOP’s Federal Sentencing Simulation model, NCCD’s Prophet model (used by 23 states in addition to NCCD), and Texas’ JUSTICE model (Texas has the second largest prison inmate population). To identify the extent, if any, to which the forecasting models and methodologies had been assessed for validity and reliability, we conducted a literature search. Also, we interviewed officials in BOP’s Office of Research and Evaluation, which is responsible for, among other things, forecasting federal inmate populations. Similarly, we interviewed state corrections agency officials in California and Texas. We discussed issues related to the models and methodologies and their validity and reliability with cognizant officials from BOP and NCCD and the author of a 1990 BJS-sponsored study that reviewed (but did not evaluate) some of the projection models used by federal and state criminal justice systems. Federal and state prison inmate populations—and corresponding incarceration rates—have been growing since 1980. Federal and state corrections agencies and nongovernmental forecasting organizations project that these populations will continue to grow through 2000 and beyond. Populations in the other three correctional categories—probation, parole, and jail—have also grown since 1980. 
However, in terms of the percentages of the overall adult correctional population, the relative distribution of adult offenders among the four categories was similar in 1994 and 1980. Table II.1 shows that the federal prison inmate population and the corresponding incarceration rate have grown consistently from 1980 to 1995. By 1995, the prison population had grown 4-fold from the 1980 level, reaching over 100,000 inmates. The incarceration rate had grown more than 3-fold, reaching 38 inmates for every 100,000 U.S. residents in 1995. Table II.2 shows the federal prison inmate population at fiscal year-end. According to BOP, the population data presented differ from BJS’ data (presented in table II.1) in that, in addition to being compiled by fiscal year rather than calendar year, they represent inmates both in BOP facilities and in alternative confinements, such as contract facilities. As table II.3 shows, from 1980 to 1995, the state prison inmate population and the corresponding incarceration rate grew by about 236 and about 191 percent, respectively. In 1995, the state prison inmate population reached just over 1 million, compared with just over 300,000 in 1980. The incarceration rate reached 390 inmates for every 100,000 U.S. residents in 1995, compared with 134 in 1980. From 1980 to 1995, the prison populations in California and Texas grew by well over 400 percent and 300 percent, respectively. Table II.4 shows that the federal prison inmate population is projected by BOP to continue growing, reaching over 125,000 inmates in 2000 and over 138,000 inmates in 2006. These projected populations represent increases of about 25 and about 38 percent, respectively, over the 1995 level. Table II.5 shows NCCD’s prison population projections through 2000 for the 21 states that use its Prophet population projection model, California (which uses a similar model), and Texas, which provided its own projections to NCCD. 
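The rate and growth figures in tables II.1 through II.3 follow from simple arithmetic. The sketch below (illustrative Python, not an official GAO or BJS computation; the inmate and resident counts are rounded approximations of the published state figures) shows how an incarceration rate per 100,000 residents and a percent change are derived:

```python
# Illustrative sketch of the rate and growth arithmetic behind the tables.
# The counts below are rounded approximations, not the exact source data.

def incarceration_rate(inmates, residents):
    """Inmates per 100,000 residents."""
    return inmates / residents * 100_000

def percent_growth(start, end):
    """Percent change from a base-year value to an end-year value."""
    return (end - start) / start * 100

# Approximate state figures: 1980, roughly 305,000 inmates among about
# 227.7 million U.S. residents; 1995, roughly 1,026,000 inmates among
# about 263.2 million residents.
rate_1980 = incarceration_rate(305_000, 227_700_000)
rate_1995 = incarceration_rate(1_026_000, 263_200_000)

print(round(rate_1980))                             # 134 per 100,000
print(round(rate_1995))                             # 390 per 100,000
print(round(percent_growth(rate_1980, rate_1995)))  # about 191 percent
```

The same two functions reproduce the federal figures in table II.1 when the federal counts are substituted.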
Using data for these 23 states, and assuming that the sentencing policies in effect in 1994 would continue, NCCD estimated that the federal and state prison inmate population could reach 1.4 million in 2000. As table II.6 shows, the populations in all four adult correctional categories—prison, probation, parole, and jail—have increased between 1980 and 1995. The total federal and state prison inmate population in custody grew by about 237 percent from 1980 to 1995. In comparison, during the same period, the probation population grew by 176 percent, the parole population grew by 218 percent, and the jail population grew by 178 percent. Overall, the total adult correctional population grew by 192 percent, from 1.8 million in 1980 to about 5.4 million in 1995. During this time, the U.S. adult population grew by about 19 percent, from 163.5 million to about 194.0 million. Accordingly, the adult correctional population represented 2.8 percent of the total adult population in 1995, well over double the 1.1-percent level in 1980. Figure II.1 shows that the populations in the four adult correctional categories as a percentage of the total adult correctional population were essentially unchanged between 1980 and 1995. Specifically, in 1995, the prison inmate population represented 20 percent of the total adult correctional population, compared with 17 percent in 1980. Also, in 1995, the probation population represented 57 percent (61 percent in 1980), the parole population 13 percent (12 percent in 1980), and the jail population 9 percent (10 percent in 1980) of the total adult correctional population. Table III.1 shows that federal and state prison annual operating costs have grown significantly (a combined 224 percent increase in inflation-adjusted terms) since fiscal year 1980. These costs cumulatively totaled about $137.7 billion in current dollars for fiscal years 1980 through 1994. Note 1: Dollar figures represent actual dollars (no adjustment for inflation). 
Note 2: According to BOP, the federal cost data presented are actual obligations, adjusted for equipment and other capital item costs. As table III.2 shows, federal and state prison capital costs have also grown significantly from fiscal year 1980 to 1994. Total capital costs reached about $2.3 billion in fiscal year 1994, an inflation-adjusted increase of about 141 percent over the level in fiscal year 1980. Federal and state capital costs cumulatively totaled about $25.4 billion for fiscal years 1980 through 1994. Note 1: Dollar figures represent actual dollars (no adjustment for inflation). Note 2: According to BOP, the federal cost data presented are actual obligations, adjusted for equipment and other capital item costs. Table III.3 shows that federal and state prison operating and capital costs cumulatively totaled about $163.1 billion in current dollars for fiscal years 1980 through 1994. Federal costs totaled about $17.7 billion, while state costs totaled about $145.5 billion during this period. Operating costs totaled about $137.7 billion, while capital costs totaled about $25.4 billion. Table III.4 shows BOP’s projections for federal prison operating and capital costs through fiscal year 2006. As shown, BOP projects that operating costs in fiscal year 2006 could be almost double the 1996 level. The projections also show that capital costs are expected to fluctuate on a year-by-year basis. Various types of models and methodologies are used to project prison inmate populations, but microsimulation is the model type most widely used by federal and state corrections agencies. As used by BOP and 27 states, including California and Texas, microsimulation modeling can project prison populations by simulating a wide range of legislative, policy, or administrative changes that affect the criminal justice system. Other states use flow models or statistical methods to project populations. 
Except for BOP’s projection model, we did not identify any independent assessments of the validity or reliability of the various projection models. However, self-reported data indicated that the models have been accurate. Microsimulation models replicate the flow of persons through the criminal justice system, incorporating considerable detail from the actual records of convicted offenders. As table IV.1 shows, microsimulation modeling is used by BOP and 27 states. In 1987, BOP and the U.S. Sentencing Commission jointly developed the Federal Sentencing Simulation Model (FEDSIM) to comply with a series of congressional initiatives that required an impact analysis of federal sentencing guidelines. In January 1995, BOP began using a revised model (FEDSIM-2), which incorporates different data sets based upon experience under federal sentencing guidelines. The NCCD Prophet model is based on a model that the California Department of Corrections has used since 1976. The Texas Criminal Justice Policy Council developed the JUSTICE microsimulation model in 1987. Each of these three models is discussed separately in the following sections. [Table IV.1 headings: User (BOP or state); Statistical methods (various); Other (proprietary). Legend: N/A equals not applicable.] Two data sets drawn from convicted offenders’ cases are used when FEDSIM-2 is updated annually: (1) the total prison population at the end of the prior fiscal year and (2) all inmates admitted into federal prisons during the prior fiscal year. In this model, prospective release dates for individuals in both groups are recorded, and sentencing time is distributed into monthly groupings or “trace elements” to track the total time served for each prisoner. FEDSIM-2 tracks convicted drug offenders, along with 20 other types of offenders, to determine the overall trend in the federal prison population. 
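A minimal sketch of the monthly bookkeeping such a microsimulation performs (illustrative Python with hypothetical records; this is not BOP’s actual FEDSIM-2 code): each record carries an admission month and the months to be served, and the model tallies how many inmates are in custody in each month of the projection horizon.

```python
# Minimal microsimulation sketch in the spirit of FEDSIM-2's monthly
# "trace elements" (hypothetical records; not BOP's model or data).
from collections import Counter

def project_monthly_population(records, horizon_months):
    """Tally inmates in custody in each month of the horizon.

    records: iterable of (admission_month, months_served) pairs.
    """
    population = Counter()
    for admit, months_served in records:
        for m in range(admit, min(admit + months_served, horizon_months)):
            population[m] += 1
    return [population[m] for m in range(horizon_months)]

# Three hypothetical inmates: admitted in months 0, 0, and 2,
# serving 3, 6, and 2 months respectively.
records = [(0, 3), (0, 6), (2, 2)]
print(project_monthly_population(records, 6))  # [2, 2, 3, 2, 1, 1]
```

A real model would additionally distinguish offense types (FEDSIM-2 tracks 21) and draw sentence lengths from the prior year’s actual case records.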
The Prophet model, which NCCD has customized to accommodate states’ correctional information systems, can predict future population levels, isolate the effects of specific practices, and predict the effects of proposed policy changes. This model is conceptually designed around the movement of offenders into, through, and out of the prison and parole systems. As shown in table IV.1, 23 states (including California) use a form of this model. The Prophet model simulates offender subgroup compositions and lengths of stays within each stage of the correctional system. Individual cases are then processed through a series of probability distribution arrays or matrices, which allows the model to compute prison populations. Using the model, the total correctional population can be separated into subgroups, and forecasts for each subgroup can be made on the basis of the proposed policy changes, without altering the status of the other subgroups. Prophet requires five data sets to operate—prison admissions, prison exits, current prison population, current parole population, and parole exits. Texas’ JUSTICE microsimulation model uses convicted felony offenders’ records from the state’s jail, prison, and parole populations. On a monthly basis, these data are loaded into or updated in the model, which has two parts. One part covers prisoners coming into the correctional system, and the second part covers the policies that determine movement within the system. Projections are made from the first part, and impact analyses of proposed policy changes are made from the second part. JUSTICE creates future offenders’ records by duplicating key characteristics (e.g., offense and sentence) of the current admissions and parolees and assessing the probability of these characteristics being present in future admissions. 
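A toy stage-transition sketch in the spirit of those probability matrices (the states and monthly probabilities below are invented for illustration; this is not NCCD’s implementation): each step multiplies the counts in each correctional stage by the probabilities of remaining or moving on.

```python
# Toy stage-transition sketch loosely patterned on Prophet's use of
# probability matrices (hypothetical probabilities; not NCCD's model).
# States: prison, parole, released. Each row's probabilities sum to 1.
TRANSITIONS = {
    "prison":   {"prison": 0.95, "parole": 0.05},
    "parole":   {"parole": 0.90, "released": 0.10},
    "released": {"released": 1.0},
}

def step(counts):
    """Advance the counts in each state by one month."""
    nxt = {state: 0.0 for state in TRANSITIONS}
    for state, n in counts.items():
        for to_state, p in TRANSITIONS[state].items():
            nxt[to_state] += n * p
    return nxt

counts = {"prison": 1000.0, "parole": 200.0, "released": 0.0}
for _ in range(12):  # simulate one year, month by month
    counts = step(counts)
print({state: round(n) for state, n in counts.items()})
```

Splitting the starting counts into offender subgroups, each with its own transition probabilities, gives the subgroup-by-subgroup forecasting the text describes.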
The model accounts for the specific months that offenders enter the different stages of the system and projects a total number of adult felony arrests on the basis of the at-risk population—i.e., that portion of the Texas population (aged 18 to 44 years old) considered most likely to engage in criminal activity. Each offender’s key characteristics determine the flow of the offender through the system by triggering certain criteria (e.g., parole eligibility) that affect the time and direction of the offender’s movement through the system. The first part of the JUSTICE model is used to make projections of those most likely to be sent to prison or placed on probation. The second part of the model permits simulating the impact of proposed changes affecting the size of the probation, prison, and parole populations. Texas’ JUSTICE model has considerable flexibility in simulating changes in the major “rules of movement” through the state’s correctional system. For example, 29 parameters can be interactively altered to assess the impact of proposed policy changes. According to experts in the prison modeling field, there are no standard criteria for assessing or validating the reliability of microsimulation models used to project prison populations. The NCCD and state agency officials we contacted said that microsimulation models are generally considered reliable if the projections come within 2 percent of the actual populations. These officials also commented that projections beyond 5 years, and perhaps even beyond 2 years, are usually considered rough estimates. Notwithstanding that comparing projections with actual prison populations may be an insufficient gauge of a model’s reliability, on the basis of self-reported assessments, the three major models we identified (FEDSIM-2, NCCD Prophet, and JUSTICE) are reported to produce accurate projections. 
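A hedged sketch of the kind of policy “what-if” that JUSTICE’s adjustable rules of movement support (the parameter, sentence lengths, and release assumption below are hypothetical, not the Texas model itself): changing one parameter, the fraction of a sentence that must be served before parole eligibility, and comparing the projected prison workload under the current and proposed values.

```python
# Hypothetical policy "what-if" in the spirit of JUSTICE's adjustable
# parameters (illustrative only; not the Texas Criminal Justice Policy
# Council's model). Assumes every offender is paroled at eligibility.

def projected_person_months(sentences_months, parole_eligible_fraction):
    """Total prison person-months under a given parole-eligibility rule."""
    return sum(round(s * parole_eligible_fraction) for s in sentences_months)

sentences = [24, 36, 60, 120]  # hypothetical sentence lengths in months

baseline = projected_person_months(sentences, 0.50)  # serve half of sentence
proposed = projected_person_months(sentences, 0.75)  # proposed tougher rule

print(baseline, proposed)  # the difference is the policy's projected impact
```

A production model would apply such a change to hundreds of thousands of offender records and report the resulting month-by-month population difference.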
For instance, the April 1996 issue of the Corrections Compendium presented the results of a survey about prison population projections. In responding to a question in the survey related to projection accuracy, BOP, California, and Texas reported that their respective projections were accurate. As shown in table IV.1, in contrast to the microsimulation models used by BOP and most states, other states use flow models and statistical methods to project prison populations. Flow models separate the characteristics of the various groups or cohorts of prisoners moving through the system from the aggregate population for analysis. These models track the offenders through the criminal justice system by calculating percentages (or branching ratios) of the offender population that continue through each stage of the system. For example, of every 100 arrests, perhaps only 30 percent of the individuals will be indicted; and, of the indictments, perhaps only 50 percent will be convicted; etc. In other words, flow models represent continuation into the next stage, with branching ratios used to “prune” out those offenders who will not become part of the prison inmate population. Also shown in table IV.1, 10 states use statistical methods, such as regression analysis and time series analysis, to project prison populations. Statistical methods all use data from past patterns to project future inmate populations. Regression analysis, for example, is a statistical technique based on equations that functionally relate one or more independent variables, with coefficients determined from previous analysis, to a dependent variable. Statistical methods tend to be nonpolicy sensitive and, therefore, are not particularly useful for impact analyses. However, reasons for changes can be deduced retrospectively from these statistical methods. 
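The flow-model “pruning” described above can be sketched directly from the example in the text, where of every 100 arrests perhaps 30 percent are indicted and, of those indictments, perhaps 50 percent are convicted (the two ratios come from the text’s own illustration; any further stages would be hypothetical):

```python
# Flow-model sketch using the branching ratios from the text's example:
# 100 arrests -> 30 percent indicted -> 50 percent of indictments convicted.
from functools import reduce

def flow(cohort, branching_ratios):
    """Apply each stage's continuation ratio to the cohort in turn."""
    return reduce(lambda n, ratio: n * ratio, branching_ratios, cohort)

stages = [0.30, 0.50]  # arrest -> indictment -> conviction
convicted = flow(100, stages)
print(convicted)  # 15.0: of every 100 arrests, 15 end in conviction
```

Each additional stage (sentencing to prison, for example) would multiply in another branching ratio, pruning the cohort further before it reaches the prison population.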
Finally, as shown in table IV.1, three states use models or methodologies that are not classifiable either as microsimulation, flow models, or statistical methods. For example, one state projects its future population by extrapolating the previous 5-year growth trend in the existing population. Danny R. Burton, Assistant Director, Administration of Justice Issues; Mary K. Muse, Senior Evaluator
GAO reviewed the trends in U.S. prison inmate populations and operating and capital costs since 1980, including projections for 2000 and beyond and the reasons for the trends and the models and methodologies used by federal and state corrections agencies and nongovernmental forecasting organizations to make these projections. GAO found that: (1) the total U.S. prison population grew from about 329,800 inmates in 1980 to about 1.1 million inmates in 1995, an increase of about 242 percent; (2) during this period, the federal inmate population grew about 311 percent, and the inmate populations under the jurisdiction of state prisons grew about 237 percent; (3) the corresponding average annual growth rates were 9.9 percent for federal populations and 8.4 percent for state populations; (4) in June 1996, the Bureau of Prisons (BOP) projected that the federal prison population could reach about 125,000 inmates by 2000, an increase of 25 percent over the 1995 level; (5) in July 1995, the National Council on Crime and Delinquency (NCCD) projected that the total federal and state prison population under sentencing policies in effect in 1994 could reach 1.4 million inmates by 2000, representing an increase of about 24 percent over the 1995 level; (6) in recent years, inmate population growth can be traced in large part to major legislative initiatives that are intended to get tough on crime, particularly on drug offenders; (7) U.S. 
prison annual operating costs grew from about $3.1 billion in fiscal year (FY) 1980 to about $17.7 billion in current dollars in FY 1994; (8) BOP projected that its capital costs for new federal prisons scheduled to begin operations during fiscal years 1996 to 2006 could total about $4 billion; (9) BOP, NCCD, California, and Texas each use a form of microsimulation modeling to forecast prison inmate populations; and (10) according to BOP, its projections of federal prison inmate populations for 1991 to 1995 were within 1.4 percent, on average, of the actual populations.
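The average annual growth rates in item (3) are the compound rates implied by the cumulative increases in item (2); a quick check using the rounded percentages from the summary:

```python
# Compound average annual growth rate implied by a total percent increase
# over a number of years (here, the 15 years from 1980 to 1995).

def avg_annual_growth_pct(total_growth_pct, years):
    """Annual rate r such that (1 + r)^years equals the total growth ratio."""
    ratio = 1 + total_growth_pct / 100
    return (ratio ** (1 / years) - 1) * 100

print(round(avg_annual_growth_pct(311, 15), 1))  # 9.9 (federal)
print(round(avg_annual_growth_pct(237, 15), 1))  # 8.4 (state)
```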
The four federal programs we examined were established from 1969 through 2000 for various purposes. The Black Lung Program was established in 1969 as a temporary federal program to provide benefits to coal miners disabled because of pneumoconiosis (black lung disease), and their dependents, until adequate state programs could be established. It has been amended several times, effectively restructuring all major aspects of the program and making it an ongoing federal program. VICP was authorized in 1986 to provide compensation to individuals for vaccine-related injury or death. According to the Department of Health and Human Services (HHS), the agency that administers the program, it was established to help stabilize manufacturers’ costs and ensure an adequate supply of vaccines. Concerns expressed by various groups contributed to the program’s establishment, including concerns from parents about harmful side effects of certain vaccines, from vaccine producers and health care providers about liability, and from the public about shortages of vaccines. RECP was established in 1990 to make partial restitution to on-site participants, uranium miners, and nearby populations who (1) were exposed to radiation from atmospheric nuclear testing or as a result of their employment in the uranium mining industry and (2) developed certain related illnesses. EEOICP was established in 2000 to provide payments to nuclear weapons plant workers injured from exposure to radiation or toxic substances, or their survivors. Initially, some qualifying workers were paid federal benefits and others were provided assistance in obtaining benefits from state workers’ compensation programs. In 2004, the federal government assumed total responsibility for benefits paid under the program. The purpose of the four federal compensation programs we examined is similar in that they all were designed to compensate individuals injured by exposure to harmful substances. 
However, how the programs are structured varies significantly, including who administers them, how they are funded, what benefits they provide, and who is eligible for those benefits. For example: Several federal agencies are responsible for the administration of the programs: the Department of Labor (DOL) administers the Black Lung Program and EEOICP; the Department of Justice (DOJ) administers RECP and shares administration of VICP with HHS and the Court of Federal Claims. In addition, the National Institute for Occupational Safety and Health and DOJ provide support to DOL in administering EEOICP. Responsibility for administering two of the programs has changed since their inception. Specifically, claims for the Black Lung Program were initially processed and paid by the Social Security Administration but, as designed, DOL began processing claims in 1973 and took over all Black Lung Program claims processing in 1997. In 2002, the Congress officially transferred all legal responsibility and funding for the program to DOL. In addition, administration of EEOICP was initially shared between the Departments of Energy and DOL but, in 2004, DOL was given full responsibility for administering the program and paying benefits. Funding of the four programs varies. Although initially funded through annual appropriations, the Black Lung Program is now funded by a trust fund established in 1978 that is financed by an excise tax on coal and supplemented with additional funds. The tax, however, has not been adequate to fund the program; at the time of our review, the fund had borrowed over $8.7 billion from the federal treasury. For the VICP, claims involving vaccines administered before October 1, 1988, were paid with funds appropriated annually through fiscal year 1999. Claims involving vaccines administered on or after October 1, 1988, are paid from a trust fund financed by a per-dose excise tax on each vaccine. 
For example, the excise tax on the measles, mumps, and rubella vaccine at the time of our review was $2.25. EEOICP and RECP are completely federally funded. Although RECP was initially funded through an annual appropriation, in 2002 the Congress made funding for RECP mandatory and provided $655 million for fiscal years 2002 through 2011. EEOICP is funded through annual appropriations. Benefits also vary among the four programs. Some of the benefits they provide include lump sum compensation payments and payments for lost wages, medical and rehabilitation costs, and attorney’s fees. For example, at the time of our review, when claims were approved, VICP paid medical and related costs, lost earnings, legal expenses, and up to $250,000 for pain and suffering for claims involving injuries, and up to $250,000 for the deceased’s estate, plus legal expenses, for claims involving death. The Black Lung Program, in contrast, provided diagnostic testing for miners; monthly payments based on the federal salary scale for eligible miners or their survivors; medical treatment for eligible miners; and, in some cases, payment of claimants’ attorney fees. The groups who are eligible for benefits under the four federal programs and the proof of eligibility required for each program vary widely. The Black Lung Program covers coal miners who show that they developed black lung disease and are totally disabled as a result of their employment in coal mines, and their survivors. Claimants must show that the miner has or had black lung disease, the disease arose out of coal mine employment, and the disease is totally disabling or caused the miner’s death. VICP covers individuals who show that they were injured by certain vaccines and claimants must show, among other things, that they received a qualifying vaccine. RECP covers some workers in the uranium mining industry and others exposed to radiation during the government’s atmospheric nuclear testing who developed certain diseases. 
Claimants must show that they were physically present in certain geographic locations during specified time periods or that they participated on-site during an atmospheric nuclear detonation and developed certain medical conditions. Finally, EEOICP covers workers in nuclear weapons facilities during specified time periods who developed specific diseases. At the time of our review, total benefits paid for two of the programs—the Black Lung Program and RECP—significantly exceeded their initial estimates. An initial cost estimate was not available for VICP. The initial estimate of benefits for the Black Lung Program developed in 1969 was about $3 billion. Actual benefits paid through 1976—the date when the program was initially to have ended—totaled over $4.5 billion and benefits paid through fiscal year 2004 totaled over $41 billion. For RECP, the costs of benefits paid through fiscal year 2004 exceeded the initial estimate by about $247 million. Table 1 shows the initial program estimates and actual costs of benefits paid through fiscal year 2004 for the four programs. Actual costs for the Black Lung Program have significantly exceeded the initial estimate for several reasons, including (1) the program was initially set up to end in 1976 when state workers’ compensation programs were to have provided these benefits to coal miners and their dependents, and (2) the program has been expanded several times to increase benefits and add categories of claimants. The reasons the actual costs of RECP have exceeded the initial estimate include the fact that the original program was expanded to provide benefits to additional categories of claimants, including uranium miners who worked above ground, ore transporters, and mill workers. Although the costs of EEOICP benefits paid through fiscal year 2004 were close to the initial estimate, these costs were expected to rise substantially because of changes that were not anticipated at the time the estimate was developed. 
For example, payments that were originally supposed to have been made by state workers’ compensation programs are now paid by the federal government. In addition, at the time of our review, a large proportion of the claims filed (45 percent) had not been finalized. At the time of our review, the annual administrative costs of the four programs varied. For fiscal year 2004, they ranged from approximately $3.0 million for RECP to about $89.5 million for EEOICP (see table 2). The number of claims filed for the three programs for which initial estimates were available significantly exceeded the initial estimates and the structure of the programs, including the approval process and the extent to which the programs allow claimants and payers to appeal claims decisions in the courts, affected the amount of time it took to finalize claims and compensate eligible claimants. The number of claims filed through fiscal year 2004 ranged from about 10,900 for VICP to about 960,800 for the Black Lung Program. The agencies responsible for processing claims have, at various times, taken years to finalize some claims, resulting in some claimants waiting a long time to obtain compensation. Table 3 shows the initial estimates of the number of claims anticipated and the actual number of claims filed for each program through fiscal year 2004. Factors that affected the amount of time it took the agencies to finalize claims include statutory and regulatory requirements for determining eligibility, changes in eligibility criteria that increase the volume of claims, the agency’s level of experience in handling compensation claims, and the availability of funding. For example, in fiscal year 2000, when funds appropriated for RECP were not sufficient to pay all approved claims, DOJ ceased making payments until the following fiscal year when funds became available. 
The approval process and the extent to which programs allow claimants and payers to appeal claims decisions also affected the time it took to process claims. For example, it can take years to approve some EEOICP claims because of the lengthy process required for one of the agencies involved in the approval process to determine the levels of radiation to which claimants were exposed. In addition, claims for benefits provided by programs in which the claims can be appealed can take a long time to finalize. For example, claimants whose Black Lung Program claims are denied may appeal their claims in the courts. At the time of our review, a Department of Labor official told us that it took about 9 months to make an initial decision on a claim and at least 3 years to finalize claims that were appealed. The federal government has played an important and growing role in providing benefits to individuals injured by exposure to harmful substances. All four programs we reviewed have been expanded to provide eligibility to additional categories of claimants, cover more medical conditions, or provide additional benefits. As the programs changed and grew, so did their costs. Initial estimates for these programs were difficult to make for various reasons, including the difficulty of anticipating how they would change over time and likely increases in costs such as medical expenses. Decisions about how to structure compensation programs are critical because they ultimately affect the costs of the programs and how quickly and fairly claims are processed and paid. This concludes my prepared statement. I would be pleased to respond to any questions that you or the Members of the Subcommittees may have. For further information, please contact Anne-Marie Lasowski at (202) 512- 7215. Individuals making key contributions to this testimony include Revae Moran, Cady Panetta, Lise Levie, and Roger Thomas. This is a work of the U.S. 
government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.S. federal government has played an ever-increasing role in providing benefits to individuals injured as a result of exposure to harmful substances. Over the years, it has established several key compensation programs, including the Black Lung Program, the Vaccine Injury Compensation Program (VICP), the Radiation Exposure Compensation Program (RECP), and the Energy Employees Occupational Illness Compensation Program (EEOICP), which GAO has reviewed in prior work. Most recently, the Congress introduced legislation to expand the benefits provided by the September 11th Victim Compensation Fund of 2001. As these changes are considered, observations about other federal compensation programs may be useful. In that context, GAO's testimony today will focus on four federal compensation programs, including (1) the structure of the programs; (2) the cost of the programs through fiscal year 2004, including initial cost estimates and the actual costs of benefits paid, and administrative costs; and (3) the number of claims filed and factors that affect the length of time it takes to finalize claims and compensate eligible claimants. To address these issues, GAO relied on its 2005 report on four federal compensation programs. As part of that work, GAO did not review the September 11th Victim Compensation Fund of 2001. The four federal compensation programs GAO reviewed in 2005 were designed to compensate individuals injured by exposure to harmful substances. However, the structure of these programs differs significantly in key areas such as the agencies that administer them, their funding, benefits paid, and eligibility. For example, although initially funded through annual appropriations, the Black Lung Program is now funded by a trust fund established in 1978 financed by an excise tax on coal and supplemented with additional funds. In contrast, EEOICP and RECP are completely federally funded. 
Since the inception of the programs, the federal government's role has increased and all four programs have been expanded to provide eligibility to additional categories of claimants, cover more medical conditions, or provide additional benefits. As the federal role for these four programs has grown and eligibility has expanded, so have the costs. Total benefits paid through fiscal year 2004 for two of the programs--the Black Lung Program and RECP--significantly exceeded their initial estimates for various reasons. The initial estimate of benefits for the Black Lung Program developed in 1969 was about $3 billion. Actual benefits paid through 1976--the date when the program was initially to have ended--totaled over $4.5 billion, and benefits paid through fiscal year 2004 totaled over $41 billion. Actual costs for the Black Lung Program significantly exceeded the initial estimate for several reasons, including (1) the program was initially set up to end in 1976 when state workers' compensation programs were to have provided these benefits to coal miners and their dependents, and (2) the program has been expanded several times to increase benefits and add categories of claimants. For RECP, the costs of benefits paid through fiscal year 2004 exceeded the initial estimate by about $247 million, in part because the original program was expanded to include additional categories of claimants. In addition, the annual administrative costs for the programs varied, from approximately $3.0 million for RECP to about $89.5 million for EEOICP for fiscal year 2004. Finally, the number of claims filed for three of the programs significantly exceeded the initial estimates, and the structure of the programs affected the length of time it took to finalize claims and compensate eligible claimants. For the three programs for which initial estimates were available, the number of claims filed significantly exceeded the initial estimates. 
In addition, the way the programs were structured, including the approval process and the extent to which the programs allow claimants and payers to appeal claims decisions in the courts, affected how long it took to finalize the claims. Some of the claims have taken years to finalize. For example, it can take years to approve some EEOICP claims because of the lengthy process required for one of the agencies involved in the approval process to determine the levels of radiation to which claimants were exposed. In addition, claims for benefits provided by programs in which the claims can be appealed can take a long time to finalize.
The Niagara Falls Bridge Commission (Commission) owns and operates three of the four international bridges across the Niagara River that link the roadways of New York State and the Province of Ontario in Canada. The Commission is administered by a board of eight commissioners, four of whom are appointed by the Governor of New York and the other four by the Premier of the Province of Ontario. In 1990, the Commission adopted a plan for a long-term capital program to make improvements at its bridges and relieve delays and traffic congestion at them. The U.S. Congress created the Commission by a joint resolution in 1938 “to construct, maintain, and operate” a single international bridge across the Niagara River. The Commission was given authority over two additional bridges in congressional amendments in 1946, 1949, and 1953 and now manages the Rainbow, Whirlpool Rapids, and Lewiston-Queenston bridges. (Fig. 1.1 shows the location of these three bridges.) The joint resolution also established relationships between the Commission and the U.S. federal, New York State, and Canadian governments. The resolution has no counterpart in Canadian or New York State legislation. The resolution’s provisions are to be enforced by either the New York State Attorney General, the appropriate U.S. district attorney, or the Solicitor General of Canada. The resolution also provides for the eventual conveyance of the bridges under the Commission’s control to the state of New York and Canada. The Congress provided authority for the Commission to operate and finance bridge operations. The Commission is administered by a board of eight commissioners. Under the terms of the joint resolution, the state of New York has authority to appoint four commissioners, and Canada—through its designee, the Premier of the Province of Ontario—appoints the other four. 
The Commission employs about 110 staff, of whom 86 are toll collectors and maintenance staff, while the remainder are administrative staff, including the general manager. In addition to the Commission’s employees, Canadian and U.S. customs and immigration officials also work on Commission properties. The Commission derives its revenues from bridge operations. The resolution gives the Commission the authority to fix and charge tolls for transit over the bridges and to use these funds to maintain, repair, and operate the bridges. The Commission’s audited financial statements for the fiscal year ending October 31, 1993, show revenues of $24.3 million, consisting of tolls ($10.6 million), interest ($7.2 million), rent from leasing space in its buildings ($5.5 million), and other sources. Expenses were $17.6 million, including $7.4 million in interest, $5.7 million in salary and fringe benefits, and $4.5 million in other expenses. The Commission does not receive any revenue or appropriations from U.S. federal, state, or local governments or from the Canadian provincial government. The Commission does, however, receive rent for space used by the U.S. Customs Service and the Immigration and Naturalization Service for inspecting people and goods entering the United States. Although the Commission does not receive direct appropriations, it does benefit from the ability to issue tax-exempt bonds. The joint resolution further states that any liability or obligation incurred by the Commission is to be paid solely from the funds provided for under the joint resolution; no resulting indebtedness is to be considered an indebtedness of the United States. The joint resolution also gives the Commission the authority to issue bonds to help pay for the cost of the bridges and other necessary expenses. 
The Intermodal Surface Transportation Efficiency Act of 1991 states that “the Commission shall be deemed for purposes of all Federal law to be a public agency or public authority of the State of New York, notwithstanding any other provision of law.” The essence of this provision is that interest on any bonds that the Commission issued after 1991 could be considered exempt from federal income tax. Once the bonds issued for the bridges and the related interest are paid off, the bridges are to be conveyed to the state of New York and Canada. Subsequently, the Commission is to be dissolved by order of the State Comptroller of New York. In the late 1980s, the Commission was faced with several concerns related to its bridge crossings. At peak periods during the summers and on weekends, severe traffic congestion on the Commission’s bridges resulted in long lines of cars and significant delays. The delays were not caused by insufficient bridge capacity, but rather by the time required for customs and immigration inspections at international border crossings. In response to projections of further traffic increases, the Commission undertook a long-term capital improvement program entitled A Thirty-Year Plan (the Plan). In addition, officials from the U.S. Customs Service and the Immigration and Naturalization Service had said that they needed improved working facilities because the space that they leased from the Commission was antiquated, unsafe, and insufficient. This Plan, published in September 1990, set forth capital projects to meet projected traffic needs through the year 2020. The Plan called for expanding the capacity for collecting tolls and conducting inspections and for improving working facilities at the Commission’s three bridges. These projects, estimated in 1990 to cost $122 million, were to be financed through long-term bonds and accumulated Commission revenues. 
The Plan also called for the possible future construction of a new bridge to handle traffic volumes expected to exceed the capacity of the existing bridges. The Commission updated its cost and construction schedules when it issued bonds to fund work on the existing bridges in 1992. (See table 1.1 for an overview of the proposed bridge projects and their updated costs and schedules.) Located within sight of Niagara Falls, the Rainbow Bridge is reportedly the second busiest point of entry to the United States, serving a mix of tourist and local traffic. The Rainbow Bridge carries vehicles, as well as the greatest number of pedestrians of the Commission’s three bridges. (See table 1.2 for traffic statistics for the Commission’s bridges.) Currently, this bridge has four toll booths in New York and five in Canada, in addition to eight primary inspection lanes in New York and eight inspection lanes in Canada. The new construction will expand the bridge plaza to provide six toll booths and 20 primary inspection lanes on the U.S. side. The construction will provide for one-way tolls, after which toll booths will no longer be needed in Canada. Canada will have fewer inspection lanes—16—because it processes cars faster. Improvements planned for the Rainbow Bridge also include major updating and expansion of the bridge plaza’s facilities to include new buildings for customs and immigration operations and for bridge maintenance, as well as a duty-free store. The new operations building was originally designed as a three-story structure sheathed in reflective glass, rising 75 feet over the road surface and arching 600 feet across the bridge apron. The design would require the use of about half an acre of Niagara Reservation State Park land adjacent to the bridge plaza. 
To prevent any overall loss of parkland, the Commission proposes exchanging two land parcels—an unneeded portion of its own easement on the south side of the Rainbow Bridge plaza and land near the Whirlpool Rapids Bridge—for the needed parkland on the north side of the Rainbow Bridge. The Whirlpool Rapids Bridge is an 1897 structure located 1.4 miles north of the Rainbow Bridge. It currently has two primary inspection lanes in the United States, three primary inspection lanes in Canada, and two toll booths in Canada. The bridge has two levels, serving vehicles and pedestrians on the lower level and trains on the upper level. The bridge does not currently carry large commercial trucks because of its design and limited customs inspection facilities. The Commission planned to upgrade the bridge’s upper level to accommodate vehicles including trucks, as well as trains; to design and construct highway approaches and bridge plazas; and to relocate railroad tracks. The Commission also planned to install four one-way toll booths and to expand the number of inspection lanes to allow four inspection booths in each direction. The additional vehicular lanes on the upper level and access for commercial trucks were intended to relieve pressure on the Lewiston-Queenston Bridge, currently one of only two commercial routes across the Niagara River. The most modern of the Commission’s three bridges, the Lewiston-Queenston Bridge, was opened on June 28, 1963, and is about 7 miles north of Niagara Falls. The only one of the Commission’s bridges that can accommodate all types of commercial vehicles, the Lewiston-Queenston Bridge directly connects the New York State Thruway with Canadian highways to Toronto. The only other commercial route across the Niagara River is the Peace Bridge in Buffalo, New York, which is operated by the Buffalo-Ft. Erie Bridge Authority. Currently, the Lewiston-Queenston Bridge has eight one-way toll booths located in Canada—four for cars and four for trucks or cars. 
This bridge’s plazas currently house eight primary car inspection lanes and three primary truck inspection lanes in each direction. The Commission has considered adding two toll booths for cars and four additional primary inspection lanes on each side of the border for automobiles. If traffic is sufficiently heavy, the Commission’s 30-year Plan also proposed the construction of a new four-lane international bridge 200 feet north of the existing Whirlpool Rapids Bridge. To serve the new bridge, terminal facilities slated to be constructed earlier at the nearby Whirlpool Rapids Bridge would be enlarged and approach roadways widened and extended to connect major U.S. and Canadian highways with the new bridge. The old Whirlpool Rapids Bridge would continue to serve trains and small tourist buses. Because the operations of the Commission have not been reviewed by a governmental entity during its more than 50 years of existence, Representative LaFalce asked GAO and the New York State Office of the State Comptroller (OSC) to review its operations. Specifically, our objectives were to review the Commission’s efforts to finance and administer its capital program and the Commission’s internal controls used to ensure that its business affairs were appropriately conducted. In the course of our work, we also reviewed responsibility for governmental oversight of the Commission. To determine how the Commission financed and administered its capital program, we met with Commission officials and consultants to discuss their capital program. We also reviewed relevant Commission records. In addition, we met with officials from U.S. federal, New York State, and Canadian agencies affected by the capital program, federal agencies that oversee tax-exempt bonds, and individuals with expertise in municipal bond financing. To assess the Commission’s internal controls, we examined Commission records on selected administrative operations and discussed them with cognizant Commission staff. 
Specifically, we reviewed policies and practices on the procurement of goods and services and on payments and reimbursements to commissioners. We also performed preliminary reviews of the Commission’s investment functions and found no weaknesses in this area. We planned to rely on tests of the Commission’s internal controls performed by the independent audit firm that conducted the Commission’s most recent annual financial statement audit. To review responsibility for governmental oversight of the Commission, we reviewed pertinent federal and state legislation and identified the relationship between federal and state governments and the Commission. We also determined what financial and other reviews of the Commission have been performed since its creation in 1938. We conducted our work between May and October 1994. We experienced a number of impairments to the scope of our audit in the course of our review. Specifically, (1) we were not afforded the opportunity to ask individual commissioners about their rationale for key decisions, such as the amount and timing of the bond issuances; (2) the independent audit firm denied our request and similar requests by the Commission that we be granted access to its workpapers documenting its assessment of the Commission’s internal controls; and (3) the chairperson of the Commission terminated our audit work at the Commission on October 2, 1994, before we had obtained complete details of issues under review. The chairperson objected to the nature of our questions, our alleged predetermined attitude, our questioning of judgments within the sole prerogative of the Commission, and our request to interview individual commissioners. Because our audit work was terminated, we were unable to expand our review of internal controls and cannot comment on the degree to which the internal control issues we discuss in chapter 3 are representative of other operational areas at the Commission. 
Except as noted above, our work was conducted in accordance with generally accepted government auditing standards. The Commission provided detailed comments on a draft of this report. The comments included a letter from the Commission’s general manager and 17 exhibits. Because of the voluminous nature of the comments, they have not all been included in this final report. However, we have reviewed and analyzed the comments and materials provided by the Commission and have revised and updated the report as appropriate. The executive summary of this report summarizes the Commission’s most significant comments, and appendixes I and II include the general manager’s letter, a memo prepared by the Canadian Commissioners, and our responses to them. In addition, at the end of chapters 2 and 3, we have summarized the Commission’s response to our suggestions for improvements and our evaluation of their responses. The Commission has experienced difficulties in implementing major projects in its capital plan that have resulted in delays and postponements. The Commission’s bond offering statements projected completing major improvements at the Rainbow Bridge by January 1996, but it has yet to obtain all key agreements and clearances needed to proceed with this project and now expects to complete this project in late 1997 or 1998. In addition, the Commission had planned to convert the Whirlpool Rapids Bridge from a local traffic corridor to a major commercial truck and passenger vehicle route by July 1996, but plans for this project have been postponed. The Commission’s plans to upgrade this bridge in the near term differed in important aspects from regional transportation plans for this bridge, and major road connections needed to make this route viable for commercial truck traffic have not been agreed to. 
The Commission issued $121 million in tax-exempt bonds in 1992 to finance its capital program and refinanced this debt by issuing $133 million in bonds in 1993 to take advantage of the very favorable interest rates available at that time. The Internal Revenue Code requires that for a bond to be eligible for tax-exemption, bond issuers must have a reasonable expectation of using the proceeds of the bonds within certain time frames. To address this requirement, when the Commission issued its bonds in 1992, it stated that it reasonably expected to use 85 percent of the spendable proceeds of the bonds within 3 years. Because of delays in the capital program, this has not occurred. Since the application of tax laws and regulations is within the jurisdiction of the Internal Revenue Service (IRS), it would be inappropriate for us to offer opinions on the application of these laws and regulations to particular factual situations. As of August 31, 1994, the Commission had spent over $5 million for consultants to assist with its capital program and had incurred other costs of almost $6 million to finance its two bond issuances. The Commission has encountered significant delays in implementing its capital program. The Commission has experienced difficulties in finalizing key agreements on historic preservation and environmental impact, as well as permission for a land exchange needed to move forward with construction of the U.S. plaza on the Rainbow Bridge. The Commission has also indefinitely postponed work on the Whirlpool Rapids Bridge—its most expensive project—because of questions raised by Canadian agencies from which agreements would be required and because of a downturn in traffic. The nature and complexity of these projects meant that the Commission needed to meet numerous requirements for historic preservation and environmental assessment, obtain the agreement of the U.S. 
Department of the Interior for a land exchange, and coordinate with other entities on regional transportation plans. At least in part because it did not coordinate with other affected entities and obtain key agreements, the Commission’s projects have been delayed. At least five federal and state agreements were needed in conjunction with the first segment of the capital program—work on the U.S. plaza of the Rainbow Bridge. Although the Commission began efforts to seek agreements as early as 1990, it did not enter into a formal agreement on the process for obtaining them until May 1995, in part because of misunderstandings about the types of approvals needed and the appropriate authorities from whom the approvals and agreements were needed. Hence, the Commission has incurred additional costs for the partial redesign of this project, and the Rainbow Bridge project has been delayed. The National Historic Preservation Act requires federal entities to take steps to protect National Historic landmarks from the potential adverse effects of proposed federal projects. Specifically, the act and its accompanying regulations require broad consultation among the independent federal Advisory Council on Historic Preservation (ACHP), the State Historic Preservation Officer (SHPO), affected agencies, and the public, resulting in an agreement on actions to mitigate any adverse effects of the project. The act applies to the Rainbow Bridge project because (1) the Commission leases space in its buildings to federal agencies for customs and immigration activities; (2) the bridge and its related structures are eligible for listing on the National Register of Historic Places; and (3) the plaza is on an easement within the boundaries of the Niagara Reservation, which has been designated both an endangered National Historic Landmark and a National Natural Landmark. 
The National Environmental Policy Act and the State Environmental Quality Review Act, like the National Historic Preservation Act, require that potential adverse impact be identified and mitigated and call for consultation with affected agencies. Federal guidelines require the entity undertaking a project to determine the appropriate level of environmental review, which can range from the completion of a checklist in response to plans for certain repairs or alterations, to an environmental assessment that briefly analyzes a project’s impact, to a full environmental impact statement for major undertakings. Federal guidelines include indicators to determine whether an area is environmentally significant and may require a full study. The Rainbow Bridge project met at least three of these indicators: it is located near a unique geological feature, lies within parklands, and is likely to affect historic properties. Issues considered under the National Historic Preservation Act overlap with issues of the environmental analysis. The review processes for both the National Historic Preservation Act and for federal and state environmental laws are consultative processes designed to identify all adverse effects of proposed projects, to consider alternatives, and to identify measures that could be taken to mitigate any adverse effect of the project. While the review process under the National Historic Preservation Act is concerned with accommodating historic preservation concerns, federal and state environmental reviews are more broadly focused. Environmental reviews are designed to determine whether the project may affect its surroundings including, among many elements of consideration, historic places. The results of the historic and environmental analyses are also considerations in the approval of a land exchange needed for the Rainbow Bridge project. 
The federal Land and Water Conservation Fund Act of 1965 provides that any lost parklands, including easements, must be replaced with land of equal or higher value. As planned, the Rainbow Bridge project required about half an acre of additional land to expand the U.S. plaza—land that is within the Niagara Reservation State Park. For the parcel of parkland adjacent to the Rainbow Bridge, the Commission proposed to exchange land near the Rainbow Bridge for which it now holds an easement and land near the Whirlpool Rapids Bridge. The exchange of this state parkland required agreement from the New York State Office of Parks, Recreation, and Historic Preservation. Because the Niagara Reservation had received funding from the Land and Water Conservation Fund, the exchange also requires the Department of the Interior’s approval. A factor the two agencies consider in granting their approval for such exchanges would be the results of the historic preservation and environmental consultation processes for the project. Under federal regulations, the SHPO is a key participant in the historic preservation review process and must be consulted. Furthermore, the regulations require that the ACHP be involved in the consultation process when a National Historic Landmark may be adversely affected. The regulations recommend that these consultations take place as early in the planning as possible to provide maximum flexibility in resolving any identified conflicts. In 1990, the Commission took two steps it regarded as initiating coordination for the historic review process. However, neither of these steps directly involved the SHPO or the ACHP. As a consequence, the views of key officials were not obtained until the process had been underway for 2-1/2 years. 
The first step the Commission took to involve historic preservation interests in the project was to include an employee of the state’s Office of Parks, Recreation, and Historic Preservation on the project’s design selection committee in 1990. In New York, the SHPO is a designated official within the state’s Office of Parks, Recreation, and Historic Preservation. The Commission has stated that it included the employee on the design selection panel for the purpose of ensuring input regarding historic preservation, and that since the design selection was unanimous, historic preservation interests were addressed. However, the state park’s representative on the panel was not authorized to represent the state of New York in making decisions regarding the historical preservation of properties within the Niagara Reservation. Federal regulations specify that the SHPO is the appropriate official to represent the interests of the state in preserving its cultural heritage. In 1990, the Commission took the second step by initiating coordination with the Deputy Commissioner for Planning and Development of the state’s Office of Parks, Recreation, and Historic Preservation. This official consulted with the Commission regarding the land exchange necessitated by the project. Commission officials cited an October 1990 letter from this official as signifying his intent to guide them in their historic preservation and environmental consultation processes and to act as their liaison to the state agency. This letter commented on land use matters, cited concerns about the Commission’s plans, and noted that the Rainbow Bridge proposal represented an adverse effect on a National Historic Landmark. The letter directed the Commission to contact and consult with specified agencies and individuals, including the SHPO and the ACHP. 
However, this official told us that his role was principally limited to land acquisition and usage and that he was not responsible for the historic preservation review process. The official could not explain why the Commission had the impression that he was coordinating the historic preservation aspects of the Rainbow Bridge project for the state since he was not the SHPO. The Commission’s mistaken reliance continued for 2-1/2 years, during which time it had no contact with the SHPO or the ACHP. The SHPO was aware of the Rainbow Bridge project but said that she had not been consulted by the Commission in accordance with federal regulations. The SHPO maintains that, due to the large number of projects requiring historic preservation determinations, those initiating a project have the responsibility under federal and state law to consult with her office before any final decision is made to proceed. In November 1990, Commission consultants together with various representatives from the state’s Office of Parks, Recreation, and Historic Preservation agreed that an environmental assessment would be the appropriate level of analysis for the Rainbow Bridge project. While an environmental assessment may have been an appropriate first step, the project had at least three features that suggested a full environmental impact statement might be needed. Acting on this agreement, the Commission’s consultants began an environmental assessment in late 1990. The first product of the assessment process was a November 1991 report on the effect of the proposed project on historic properties that concluded that although the project would directly affect the National Historic Landmark Niagara Reservation, it would have no significant adverse impact. However, this report was prepared without input from either the SHPO or the ACHP. The Commission’s consultants then produced a draft environmental assessment and issued it for comment in December 1992. 
The draft assessment concluded that the proposed project would have no significant adverse impact. During the subsequent comment process, the SHPO’s concerns surfaced about the project’s visual impact on the Niagara Reservation. In March 1993, the SHPO disagreed with the finding of no significant impact. Both the SHPO and the Commission told us that this was the first instance in which the Commission was notified of such concerns. The SHPO urged the General Services Administration (GSA), which leases space from the Commission to house federal customs and immigration operations at the bridge and which was familiar with historic preservation and environmental reviews, to take the lead in seeking the needed agreements. GSA officially assumed responsibility for coordinating the historic preservation and environmental reviews for the Rainbow Bridge project in June 1993 and has begun to obtain the needed agreements. As required by the National Historic Preservation Act, GSA formally notified the SHPO and the ACHP of the project on June 30, 1993. ACHP then requested assistance from its consulting agency, the National Park Service, to assess the project’s potential impact on the Niagara Reservation. In December 1993, the National Park Service concluded that the proposed project was incompatible with the setting. Subsequent to this report, the Commission decided to amend its design for the Rainbow Bridge plaza. In April 1994, GSA announced that it would require a full environmental impact statement for the Rainbow Bridge project. A preliminary draft of this document was available for public comment in December 1994. The final draft of the document was delayed to allow consultation under the historic preservation review process, which was largely completed in May 1995. 
GSA expects to announce the availability of the final draft of the environmental impact statement by the end of July 1995, and, if no further substantive comments are received, it expects to complete the environmental review process with a record of decision in late August. On several occasions, the Commission’s general manager told us that the Commission and its consultants had relied on the Deputy Commissioner for Planning and Development in their decision to perform an environmental assessment instead of a full environmental impact statement and relied on him to conduct their coordination with other units within the Office of Parks, Recreation, and Historic Preservation. In commenting on a draft of this report, the Commission said that the advice received from that state official was from the person whom they thought to be the “person in charge of” the Office of Parks, Recreation, and Historic Preservation. However, federal regulations clearly require consultation with the SHPO and ACHP, and the Deputy Commissioner told us that he could not explain why the Commission misunderstood the requirements. With 80 percent of the project’s design completed, the Commission, in January 1994, resolved to make substantial changes to the proposed design. As a result, the Commission incurred redesign costs that it estimates at about $300,000. In May 1995, the Commission, GSA, ACHP, the SHPO, and the National Park Service signed a memorandum of agreement as required by federal regulations implementing the National Historic Preservation Act. The parties agreed that further design of the project would be reviewed at specific points and agreed on processes to resolve any differences of opinion concerning the project. The May 1995 agreement also indicated that the SHPO would recommend to the New York State Office of Parks, Recreation, and Historic Preservation that it seek approval from the National Park Service to execute the land exchange required for completion of the project. 
The agreement further indicated that the National Park Service would expeditiously approve the land transfer upon such request from the state. The Commission’s May 1992 bond offering stated that the Rainbow Bridge project would be completed in January 1996, but the Commission now expects to resume construction at the Rainbow Bridge at the end of 1995 and to complete it by the end of 1997 to mid-1998. Capacity improvements at the Whirlpool Rapids Bridge, at an estimated cost of $118 million, were to have been the most expensive of the three projects funded by the Commission’s bonds. The improvements would permit use of the bridge for the first time by large commercial trucks and included the construction of warehouse and inspection facilities for commercial vehicles. This expanded usage would be accomplished by altering the bridge’s upper deck for use by large trucks with access to that level from roadways and plazas to be constructed by the Commission. The new approaches to the bridge would be connected to local roads for the near term and later to major highways. The bond offering statements show that construction was to have occurred from June 1994 to July 1996. The improvements required coordination with other agencies and a number of agreements before project initiation. Specifically, the Commission needed approvals for road connections to make the project viable, environmental analyses in both the U.S. and Canada, and additional land acquisitions in both countries. While the Commission moved forward with plans for improving Whirlpool Rapids Bridge, New York State and Province of Ontario agencies were conducting ongoing studies—one of which included Commission representatives—to assess regional transportation needs. The findings of these studies differed somewhat from Commission plans for this bridge. 
In light of the issues raised by these two transportation studies and a downturn in traffic volume, the Commission and its consultants reviewed the status of its capital program in July 1994 and postponed the Whirlpool Rapids Bridge project. As a result, the Commission incurred expenses that may be of limited value if and when this project is resumed. In proceeding with its plans to upgrade the Whirlpool Rapids Bridge, the Commission incurred expenses for environmental impact studies and an option to purchase land that would be needed to upgrade this bridge. First, the Commission anticipated the need for full environmental impact studies in both countries because the project would cause some major changes in land use. The studies were begun in mid-1992. As of August 31, 1994, the Commission had spent about $500,000 on environmental studies, which have been suspended. While the work done thus far may be usable if the project eventually proceeds as planned, it may be of limited value if the major improvements at the Whirlpool Rapids Bridge are not needed in the near future. Additionally, the Commission purchased an option on land that would be needed when the Whirlpool Rapids Bridge project got under way. The project as originally planned required about 50 acres of land owned by the Canadian National Railway for the construction of approach roadways and inspection facilities. Rather than purchase the land outright, the Commission purchased an option in June 1993 to maintain flexibility for phasing in the capital program. As payment, the Commission placed $15.5 million in Canadian currency in escrow, with the interest accruing to Canadian National. Because of uncertainties about the future of the Whirlpool Rapids corridor and concerns about potentially high environmental cleanup costs, the Commission terminated the option in June 1994. We estimate that interest foregone by the Commission was about $875,000 when converted to U.S. currency. 
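The foregone-interest estimate above can be approximated with simple interest arithmetic on the escrowed option payment. The interest rate and exchange rate in the sketch below are illustrative assumptions only; the report does not state the rates actually in effect, and they are chosen solely to show how an estimate of this kind is constructed.

```python
# Illustrative reconstruction of the foregone-interest estimate on the
# escrowed option payment. The annual interest rate and the exchange
# rate are hypothetical assumptions, not figures from the report.

principal_cad = 15_500_000        # Canadian dollars placed in escrow, June 1993
years_in_escrow = 1.0             # option terminated June 1994
assumed_annual_rate = 0.075       # hypothetical short-term Canadian rate
assumed_cad_to_usd = 0.75         # hypothetical exchange rate

# Simple interest over the escrow period, converted to U.S. currency.
interest_cad = principal_cad * assumed_annual_rate * years_in_escrow
interest_usd = interest_cad * assumed_cad_to_usd

print(f"Estimated interest foregone: about ${interest_usd:,.0f} U.S.")
```

Under these assumed rates the calculation yields roughly $872,000, in line with the report's estimate of about $875,000; different rate assumptions would shift the figure proportionally.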
After the Commission issued its Thirty-Year Plan, regional transportation agencies conducted two studies that resulted in recommendations for the Whirlpool Rapids corridor that conflicted with the Commission’s plans. One study questioned the routing of large commercial trucks over the Whirlpool Rapids Bridge and construction of the related commercial vehicle warehouse inspection facilities in the bridge plaza areas, while the other study questioned highway connections the Commission had planned. The resulting uncertainty about fundamental elements of the planned project was a major factor in postponing the project. A joint U.S.-Canadian study of Niagara River bridges, initiated in November 1990, resulted in recommendations for the Whirlpool Rapids Bridge that differed from the Commission’s plans. The Niagara Frontier U.S.-Canada Bridge Study was jointly sponsored by transportation planning agencies of New York State and the Province of Ontario to assess regional transportation needs. Issued in March 1993, the study recommended short-, medium-, and long-range plans for the Commission’s three bridges, as well as for the Peace Bridge in Buffalo. The study disagreed with the Commission’s plans to route large commercial trucks across the bridge, and for the near term, the study recommended smaller changes in contrast to the Commission’s major construction plans. For the period prior to the year 2000, the study recommended only changes to the Whirlpool Rapids Bridge plazas and approaches in contrast to the Commission’s plans to construct roadways, truck inspection stations, and warehouse facilities by 1996. For the period after 2000, the study recommended upgrading the bridge’s upper level as one alternative to be explored for relieving anticipated congestion. However, in contrast to the Commission’s plans, this study did not envision the use of the upper level by large commercial trucks. 
The study recommended that use of the bridge be restricted, as it currently is, to open bed and single commodity trucks, and it specified that the commercial vehicle warehouse inspection facilities planned by the Commission not be provided. While the bridge study was under way, another study was started that drew into question the major highway linkages that were needed to make this project feasible. The Commission’s Thirty-Year Plan called for it to acquire land and construct roadways from the bridge out to major local streets by 1996. In the longer term, connection would be made by U.S. and Canadian agencies to major highways in each country. The second study, TransFocus 2021, was issued in draft for comment by the Province of Ontario in April 1994 and finalized in April 1995. The study called for an environmental assessment as well as a study of the feasibility and timing of linking the Whirlpool Rapids Bridge with highway 420 in Canada, rather than the linkage with Canadian highway 405 which the Commission had planned. This change would affect the land required by the Commission in Canada. Canadian officials expect resolution of the question concerning highway routes by mid-1996, at which time an environmental assessment could be initiated to determine the impact of expansion of the bridge corridor. Agreements would also be required in New York before the Whirlpool Rapids corridor could be upgraded. New York State’s area transportation planning organization approved the project for its long-range plan in December 1993. This process recognizes plans for road connections to the interstate highway system sometime after 1999 but does not identify any funding for these connections. State transportation officials told us the Commission began coordinating with them when it initiated the environmental impact study on this project; this study would have identified all needed agreements and clearances. However, that effort was halted in January 1994. 
In light of the recommendations of these two transportation studies, a downturn in the volume of bridge traffic, and other issues, the Commission in July 1994 postponed the project, the completion of which was scheduled for mid-1996. In conjunction with this decision, the Commission also discussed, but did not resolve, the issue of early retirement of some of the Commission’s debt. As a result of the questions raised about the future of the Whirlpool Rapids Bridge, work planned by the Commission for the Lewiston-Queenston Bridge has also been postponed. Although some work has been completed on this bridge, the Commission has begun to explore alternatives for reconfiguring the Lewiston-Queenston Bridge to increase its capacity beyond that envisioned by the original plan. The revisions would permit this bridge to absorb some of the traffic that the upgraded Whirlpool Rapids Bridge would have handled. In May 1992, the Commission financed its capital program with $121 million in tax-exempt bonds, and in July 1993, the Commission refinanced this debt by issuing $133 million in such bonds. In 1992, the Commission stated that it reasonably expected at least 85 percent of the spendable proceeds of its bonds would be expended within 3 years of May 20, 1992, the date of release of the Commission’s original bond issuance statement. Because of delays in implementing the capital program, the Commission has not achieved this level of expenditures. The Internal Revenue Code requires that bond issuers have a reasonable expectation, at the time of bond issuance, that the bond proceeds will be spent within certain time frames. To save bond issuance costs and lock in favorable interest rates, the Commission issued sufficient bonds to cover cash needs for work on all three existing bridges rather than financing each project separately. 
When the Commission refinanced its bonds in 1993, it obtained an average of a 5.4-percent interest rate, which is a historically low long-term rate over the last 30 years, according to the Commission’s bond counsel. The bonds also include a provision that permits the redemption of most of the Commission’s bonds at full face value if the Commission’s engineers certify that all or part of the capital program cannot be carried out or has to be curtailed. The Commission has spent funds for consulting fees, reconfigured truck lanes on one bridge, installed some automated toll equipment, and widened the U.S. plaza on the Rainbow Bridge to accommodate additional toll and inspection booths. As of August 31, 1994, the Commission still had about $43 million U.S. and $64 million Canadian (a total of $90 million if Canadian funds are converted to U.S. currency) in bond proceeds available. However, because of questions about the planned upgrading of the Whirlpool Rapids Bridge, the Commission may not need all of these funds in the near term unless the cost of the work on its other bridges expands to require more funding. The Internal Revenue Code includes several restrictions on the usage of tax-exempt bonds. Among these rules are restrictions on hedge bonds enacted to prevent the early issuance of bonds to hedge against potential future increases in interest rates. Under the Internal Revenue Code, unless the bond issuer reasonably expects that 85 percent of the spendable proceeds of a bond issue will be spent in 3 years from the date of issuance, the bonds may be considered hedge bonds. If the bonds are hedge bonds, the bonds will not be considered tax-exempt unless the issuer has a reasonable expectation of spending: 10 percent of the spendable proceeds of the issue within 1 year of issuance; 30 percent within 2 years; 60 percent within 3 years; and 85 percent within 5 years. 
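The hedge bond spend-down schedule described above can be expressed as a simple compliance check. The sketch below is illustrative only: the statutory test turns on the issuer's reasonable expectations at the time of issuance, not on actual spending, and the profile figures shown are hypothetical.

```python
# Illustrative check of the hedge bond spend-down schedule described in
# the Internal Revenue Code: unless 85 percent of spendable proceeds is
# expected to be spent within 3 years, the issuer must reasonably expect
# to spend 10/30/60/85 percent within 1/2/3/5 years of issuance.

HEDGE_BOND_SCHEDULE = {1: 0.10, 2: 0.30, 3: 0.60, 5: 0.85}

def meets_hedge_bond_schedule(expected_fractions_by_year):
    """expected_fractions_by_year maps years-from-issuance to the
    cumulative fraction of spendable proceeds the issuer reasonably
    expects to have spent by that point."""
    return all(expected_fractions_by_year.get(year, 0.0) >= minimum
               for year, minimum in HEDGE_BOND_SCHEDULE.items())

# A hypothetical expectation profile that satisfies every milestone:
print(meets_hedge_bond_schedule({1: 0.15, 2: 0.40, 3: 0.70, 5: 0.90}))  # True
# A hypothetical profile that falls short of the 60-percent year-3 mark:
print(meets_hedge_bond_schedule({1: 0.15, 2: 0.40, 3: 0.50, 5: 0.90}))  # False
```

The check mirrors the report's point that each milestone is cumulative: falling short at any one of the four dates is enough to fail the alternative schedule.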
The Commission stated in its bond offering statement in May 1992 that it reasonably expected at least 85 percent of the spendable proceeds of its bonds would be expended within 3 years of May 20, 1992. However, due to delays in implementing the capital program, this has not occurred. The key question, however, is not whether the proceeds are actually spent within these time frames, but rather whether the bond issuer had a reasonable expectation of doing so at the time the bonds were issued. During our work, we met with IRS officials to discuss generally the application of the hedge bond rules, as well as the Commission’s financing circumstances. These officials told us that the IRS considers a number of factors in taking action in such situations but would not discuss the particular circumstances of the Commission’s bonds. Since the application of tax laws and regulations is within the jurisdiction of the IRS, it is our policy not to offer opinions on the application of these laws and regulations to particular factual situations. Because the Commission lacked expertise on its own staff, it retained attorneys, engineers, architects, underwriters, and bond counsel to assist in coordinating with outside entities, obtaining needed agreements, designing its projects, and financing its capital program. The Commission had spent over $5 million on consultants and other advisers as of August 31, 1994. (See table 2.1 for a listing of the Commission’s costs for consultants and advisers.) Furthermore, in addition to payments to consultants, the Commission has incurred almost $6 million in costs to finance the two bond issues for its projects. (See table 2.2 for a listing of bond issuance costs.) In addition, each of the bond offerings sold at a discount, which had the effect of reducing the proceeds to the Commission. The discounts on the two bond issuances totaled over $5 million (about $3.67 million for the 1992 bonds and $1.45 million for the 1993 bonds). 
OSC’s municipal financing specialists evaluated the costs of the 1992 and 1993 bond issuances for their reasonableness. The specialists said that given the international nature of the Commission and its newness to the tax-exempt bond market, the costs of both the 1992 and 1993 issuances appeared to be reasonable. The specialists commented, however, that charges by the underwriter totaling about $120,000 for clearance and for computer and communications services are either not typically paid by issuers or, if paid, are charged at much lower levels than those charged to the Commission. The specialists could not assess the reasonableness of the bond counsels’ costs because these costs were not supported by detailed billings. The Commission’s capital program is a complex and sophisticated undertaking that required extensive coordination and agreements, as well as entry into the capital bond market. In May 1992, the Commission issued $121 million of tax-exempt bonds. However, the majority of the bond proceeds remain unspent because each of the Commission’s projects has been delayed or postponed. Delays at the Rainbow Bridge plaza project occurred largely because the Commission did not obtain needed governmental clearances. In May 1995, the Commission entered into a memorandum of agreement for the Rainbow Bridge plaza with several federal and state agencies that identifies the roles and responsibilities of all parties and lays the groundwork for moving forward with this project. The future of the Commission’s plans for the Whirlpool Rapids Bridge project, however, is less certain. In July 1994, the Commission deferred the schedule for the Whirlpool corridor until the transportation plan being developed by the Canadian Ministry of Transportation is more firmly established and the New York State Department of Transportation is ready to schedule a connection to the Commission’s Whirlpool facilities. 
IRS rules require that bond issuers have a reasonable expectation of spending tax-exempt bond proceeds within certain time frames. While project delays have caused the Commission to not expend the bond proceeds as it anticipated, the key question is whether it had a reasonable expectation of doing so at the time the bonds were issued. Because determinations on compliance with these requirements are within the jurisdiction of the IRS, we cannot speculate on whether the IRS would review the Commission’s bonds. Because neither GAO nor OSC has explicit audit authority over the Commission, we are not making any formal recommendations. Nevertheless, we identified a number of possible steps the Commission can take to improve the implementation of its capital improvement program. In order to ensure the orderly implementation and financing of its capital program, the Commission may wish to develop a formal update to its long-term capital program (A Thirty-Year Plan), including, as appropriate, plans for the early retirement of debt from the 1992 and 1993 bond issues whose proceeds may no longer be needed. It would be appropriate for this update to also include a strategy for obtaining the necessary input and/or agreements from appropriate transportation, environmental, historic preservation, and other involved agencies and associations before implementing its capital program. In commenting on a draft of this report, the Commission strongly objected to the inclusion in the report of any discussion of the tax-exempt status of its bonds and disagreed with the discussion of project delays for the Rainbow Bridge project. 
With regard to the discussion of the relationship of IRS rules to the Commission’s tax-exempt bonds, the Commission and its consultants said that any discussion of the tax-exempt status of the bonds could potentially have negative effects on the bondholders and was not rooted in fact, and they suggested that any discussion of this issue be withdrawn from the report. It was not our intent to create a perception that the tax-exempt status of the bonds is in jeopardy, and the report has been clarified to ensure that the reader is not led to this conclusion. However, the Commission certified that it reasonably expected to spend 85 percent of the bond proceeds within 3 years of issuance. Three years have passed since bond issuance, and less than one-third of the bond funds have been expended. Clearly, any decision related to this issue is properly within the purview of the IRS, and it would be inappropriate for us to speculate on the application of these laws and regulations. However, the standards under which we conduct our work require a review of compliance with laws and regulations that are significant to the audit objectives, one of which was the financing of the capital program. In light of the fact that the Commission has experienced delays in expending its tax-exempt proceeds as it had projected in its bond offering statement, we believe that we would have been remiss had we not included a discussion of this issue in the report. The Commission also objected to our treatment of the cause of delays in gaining the approvals necessary to move forward with the construction of the Rainbow Bridge project and said that the many approvals needed to proceed with this project are virtually completed. The discussion of this issue has been expanded to more clearly show the chronology of events and the Commission’s misunderstandings of the types and sources of agreements required that led to delays in this project. 
The Commission entered into a memorandum of agreement with appropriate federal and state entities in May 1995 (after the draft report had been provided to the Commission for comment) that establishes the framework for moving forward with this project. While the agreement is certainly a positive step toward project implementation, we believe that completing such an agreement earlier in the process could have precluded the delays the Commission has experienced on this project. On the other hand, the Commission said that it has taken our suggestions with respect to updating the Plan, the possible retirement of a portion of its debt, and the desirability of developing a strategy for obtaining necessary input and agreements from other agencies under advisement to the extent that these recommendations have not already been superseded by events. For example, the Commission said that it has largely completed the process of developing collaborative strategies with appropriate agencies involved in the environmental and historic preservation process. The Commission has recently taken steps, such as completing a memorandum of agreement in mid-May 1995 with several agencies regarding the renovation of the Rainbow Bridge plaza, that are moving this project closer to implementation. The overall theme of our suggestions, however, was not intended to be project specific, but rather to apply to the entire capital program. In this context, we continue to believe that the suggestions we made could be beneficial in the Commission’s management of its overall capital program. We performed a limited review of the Commission’s internal controls over its business affairs. We found that the Commission had new policies in place to guide procurement and the remuneration of commissioners but that in some instances it had not ensured that these policies were consistently followed. 
For example, over half the commissioners’ expense claims that we reviewed lacked proper approvals or were missing at least part of the required documentation. We found errors in payments made in both procurement and commissioner remuneration. The Commission has since taken action to recover the overpayments. Finally, because some attorneys’ fees were not supported by detailed billings, we could not assess the nature or reasonableness of the cost of the legal services provided. Recognizing the need for a comprehensive assessment of its internal controls and practices, the Commission plans to contract for such a review. No federal or New York State legislation specifically provides for oversight of the Commission. Because the Commission may benefit from periodic oversight by a governmental body, we have identified options for permanently designating a governmental entity to oversee the Commission’s operations. It is generally good business practice to obtain vendor competition for significant purchases of goods and services. To assess the Commission’s procurement procedures, we selected 51 of the 238 purchase orders issued by the Commission from January 1, 1993, through June 16, 1994, for review. Although 28 (of the 51) purchase orders were for amounts that exceeded $5,000, the Commission had no documentation in its files to indicate that it used vendor competition to obtain the goods and services in question. In responding to a draft of this report, Commission staff said that competition had not been used in 17 (of the 28) instances because of “unavoidable necessity.” However, documentation of the unavoidable necessities was not present in the Commission’s files. For the remaining 11 instances, Commission staff told us that vendor competition had been used but that it had not retained the quotations from the vendors that were not selected. Prior to January 1994, the Commission did not have written procurement policies and procedures. 
In January 1994, the Commission formalized its procurement policies and procedures to require some form of vendor competition and written contracts. Specifically, the Commission’s policy requires, when feasible, at least three written quotations for purchases greater than $5,000 and written contracts for purchases of more than $20,000 in one year from the same vendor. According to the Commission’s general manager, the Commission had been using the policies and procedures that were formalized in January 1994 for some time prior to that date. However, in responding to the draft report, the Commission also indicated that staff could not determine exactly which policy was in effect on what dates. Twenty-two of the 28 purchase orders that were for more than $5,000 were issued prior to January 1994, when the Commission formalized its procurement policies. The remaining six were issued after the policies were formalized. We found, however, that the Commission did not have formal contracts for three purchase orders, ranging in amount from $20,112 to $48,000, that were issued after the Commission formalized its policies. The Commission said that there were unique circumstances associated with four of the six orders issued after the policies were formalized. In one instance, for example, verbal (instead of written) quotations were obtained because time was of the essence or other factors militated against the use of written quotations. However, documentation of such factors was not maintained in the files. During our work, we also noted that the Commission overpaid one vendor $1,100 for uniforms. Our limited review of payments to consultants disclosed three similar overpayments totaling about $2,300. The overpayments resulted from paying the same charges twice on separate account statements, not identifying an inaccurate invoice total, and paying an invoice credit balance. 
Commission staff informed us that the three overpayments have been or will be recovered from the vendors. In responding to the draft report, Commission officials indicated that, because our audit uncovered some actual control problems, they had issued a request for proposal for a major management and control review that would address procurement issues. The Commission did not consistently follow its policies when providing remuneration to the commissioners. As a result, some expenses were not properly authorized and/or documented. The Intermodal Surface Transportation Efficiency Act of 1991 authorized reimbursement to commissioners for actual expenses incurred in the performance of official duties and a per diem allowance of $150 when rendering service as a member. According to this federal legislation, the per diem is to be paid on a fiscal-year basis and should not exceed $10,000 for any commissioner in any fiscal year. Our review of the Commission’s fiscal year 1993 payments to commissioners found some errors in payments of the per diem allowance and inadequate documentation and authorization for expense reimbursements. Controls are in place to ensure that the per diem limit is not exceeded on payments to commissioners. Commission policy requires each commissioner to file a quarterly attendance report detailing the date and nature of the service rendered in order to receive an allowance. The quarterly reports are to be reviewed by the chairperson or vice chairperson of the Commission, one of whose signatures is required to authorize payment. The Commission has no written policy defining the circumstances in which a payment should be allowed. The general manager commented that the chairperson is familiar with the commissioners’ duties; he or she uses best judgment when determining the duties eligible for payment of the per diem allowance. We reviewed all per diem payments to commissioners during the Commission’s 1993 fiscal year. 
In that year, a total of $69,950 in per diem allowances was paid to the eight commissioners. In addition to being paid for Commission and committee meetings, per diem was paid for conferences, meetings, and public relations events. In no instance was a commissioner paid for an event that could not be construed as serving the Commission. Of the eight commissioners, three received the maximum allowable amount of $10,000 for that year. We found four errors in per diem payment amounts. Payments to one commissioner exceeded the maximum allowable by $1,550 because the commissioner’s expenses were tracked on a calendar-year basis instead of a fiscal-year basis as required by the 1991 act. This error had been recognized by the Commission before our review: part of the overpayment had already been deducted from this commissioner’s per diem allowance payments at the time of our review, and the rest was to be deducted before the end of the fiscal year. In three instances in fiscal year 1993, commissioners were paid two per diem allowances for one day. In one of these instances, a duplicate claim was mistakenly paid, while in the other instances, more than one function was served on a single day. In response to our questions, the Commission clarified its policy so that only one per diem payment will be provided for a day, regardless of the number of services rendered. The general manager reported that he requested that the commissioners adjust future per diem claims to provide for repayment to the Commission of the extra per diem amounts. Commissioners may also be reimbursed for travel and other Commission-related expenses, but the guidelines on reimbursing commissioners for Commission-related expenditures are very general. Commission policy requires that all expenses be fully documented on an expense report accompanied by receipts and submitted for approval by the chairman or the vice chairman. 
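The per diem rules described above — $150 per day of service, no more than one payment per day regardless of the number of functions served, and no more than $10,000 per commissioner per fiscal year — can be sketched as a simple check. The claim dates below are hypothetical and purely for illustration.

```python
# Illustrative check of the per diem rules under the 1991 act: $150 per
# day of service, at most one payment per distinct day, and a $10,000
# cap per commissioner per fiscal year. Claim dates are hypothetical.

PER_DIEM = 150
FISCAL_YEAR_CAP = 10_000

def allowable_per_diem(service_dates):
    """Given the dates on which a commissioner rendered service within
    one fiscal year, return the allowable per diem total: one payment
    per distinct day, capped at the annual maximum."""
    return min(len(set(service_dates)) * PER_DIEM, FISCAL_YEAR_CAP)

# A duplicate claim for the same day counts once, so three claims over
# two distinct days pay $300:
print(allowable_per_diem(["1993-01-10", "1993-01-10", "1993-02-05"]))  # 300
# Seventy days of service would total $10,500, so payment is capped:
print(allowable_per_diem([f"1993-day-{i}" for i in range(70)]))        # 10000
```

Tracking claims by fiscal-year date, as the check does, is exactly what would have caught the calendar-year tracking error and the duplicate daily payments the review identified.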
The policy provides examples of reimbursable and nonreimbursable expenses and notes that excessive expenses will not be reimbursed. The general manager said that it is up to each commissioner to apply judgment when making travel arrangements. He also said that the Commission prefers not to establish written guidelines for travel and other reimbursements because it considers commissioners’ requests for payment as generally reasonable and it wishes to maintain flexibility. We believe that a vague policy is undesirable because it is open to a wide range of interpretations. We reviewed all expense reimbursements to commissioners made in fiscal year 1993, which amounted to $12,328 U.S. and $18,195 Canadian. The commissioners generally filed the required quarterly expense reports. Most of these expenses were for mileage, transportation, and meals. The expenses were related to a variety of events, including Commission or committee luncheon or dinner meetings; meetings with a variety of federal, state, and local representatives; and professional meetings. Most of the claims for expense reimbursement we reviewed were clearly connected to Commission business. However, we noted claims for registration at a San Diego conference, which included meals, for two commissioners’ spouses. The Commission does not have a specific policy related to this issue, but the general manager said that it is common practice for public authorities to pay registration fees for spouses. The Commission did not pay for the wives’ travel. All required approvals and documentation were available for 45 percent of the commissioners’ expense claims paid in fiscal year 1993. Of the remaining 55 percent of the expense claim payments, 18 percent lacked approval, 22 percent had no documentation, and 15 percent contained both documented and undocumented expenses. The general manager acknowledged that procedures might not have been followed in some instances. 
The Commission sought legal assistance in coordinating its capital program with outside entities, obtaining needed agreements, and issuing bonds. The costs for these services were not supported by detailed billings. In 1991, the Commission retained the services of an attorney to function as special counsel to represent the Commission and appear for it before any federal, state, or local agencies and provide any other legal and public affairs service the Commission might require. This attorney was retained under a $100,000 annual retainer, which he received in quarterly payments in addition to expenses. His duties included serving as counsel for the 1992 bond offering, negotiating with federal and state agencies, and working to obtain authority for the Commission to issue bonds exempt from federal and state taxes. In addition to the $100,000 the attorney was paid in 1993, his affiliated law firm was separately paid over $61,000 for its work as counsel on the 1993 bonds. Fees paid to the attorney and his affiliated law firm totaled about $450,000 through August 31, 1994. Another law firm was also hired to assist with obtaining authority for the Commission to issue tax-exempt bonds and to serve as co-bond counsel. This firm was paid about $200,000 for its work on each of the two bond issuances. The principal attorney involved said that billings were based on hourly rates; he noted that the cost of the second bond issuance was similar to that of the first because of the complexity of the second issuance. We could not assess the nature or reasonableness of the cost of the legal services provided because neither the contracts for services nor the billings were sufficiently itemized. The municipal finance specialists with whom we consulted could not assess the reasonableness of these costs because the Commission did not enter into specific written agreements for the counsels’ services on the bond issuances. 
These specialists said that such agreements should define the services to be provided and estimate the resulting fees. Our review of the cost of the legal services was further hampered by invoices that did not provide details on the actual time spent or the rates charged for bond issuance efforts. Without these documents, it is not possible to assess whether the counsel's billings were reasonable for the time and effort spent on the bond issuances. Since the Commission was created in 1938, it has grown from an operation managing one international bridge to one managing three bridges and a complex, long-term capital improvement program. As part of our work, we ascertained what external reviews of the Commission's operations are required and have been performed. Neither the joint resolution of the U.S. Congress that created the Commission in 1938 nor the six subsequent amendments to date assign responsibility for any governmental entity to oversee or audit the Commission. Furthermore, no New York State legislation provides for oversight of the Commission. A Canadian audit official said that the Province of Ontario has also not reviewed the Commission's operations. Consequently, no governmental entity has overseen the Commission's activities in more than 50 years. The joint resolution did call for an accurate and publicly available record of bridge costs, expenses for operating and maintaining the bridge, and tolls collected. The Commission has submitted its books annually to Deloitte and Touche or its predecessor firm for review. The Commission itself has recognized this lack of oversight authority, and our joint review with OSC was performed with the consent of the Commission. The Commission has said on several occasions that it considers itself a federal entity and has requested that federal legislation be passed to give GAO permanent authority for overseeing the Commission.
However, there are numerous links between the Commission and New York State, including the fact that the Governor of New York appoints the U.S. commissioners, and the state has a long-term ownership interest in the bridges. Governmental oversight might also give the Commission access to advice on the planning, coordination, and financing of major capital projects, thereby helping it to avoid the kinds of problems the Commission has encountered. Advice on such issues as the timing and amount of bond financing and governmental requirements for major projects is available from some governmental entities that serve in this type of oversight capacity. New York State, for example, provides such advice on projects and their financing to similar state authorities and municipalities. Because neither GAO nor OSC has the explicit authority to audit the Commission, we are not making formal recommendations for improving Commission operations. We believe, however, the Commission may wish to consider taking steps to strengthen its compliance with existing Commission policies and procedures in such areas as (1) obtaining and documenting sufficient price quotations where required, (2) preparing written contracts for multiple purchases from the same vendor that exceed $20,000 within a year, (3) having the chairperson or vice chairperson review and approve all claims submitted by the commissioners, and (4) having commissioners adequately document claims for the reimbursement of travel expenses. To improve its internal controls over payments and ensure that commissioners receive remuneration only for appropriate expenses, the Commission may wish to consider developing formal policies and procedures to (1) preclude the duplicate payment of accounts payable balances, (2) ensure that totals shown on invoices and account statements have been calculated accurately, and (3) delineate clearly those expenses that will be covered by per diem and travel reimbursements to commissioners. 
Finally, to ensure reasonable payment of consultants for services rendered, it would be prudent for the Commission to require consultants to provide the Commission with detailed breakdowns of the amounts they bill it for professional services and establish the amounts and/or rates associated with specific professional services before such services are rendered and billed for. Several options are available for overseeing the Commission. One option would be to designate OSC as the permanent authority for overseeing the Commission. The state already audits similar state bridge commissions and is in a position to provide both audit oversight and advice on capital projects and financing. Additionally, the bridges will ultimately be conveyed to the state of New York and Canada once the bonds issued for the bridges and the related interest are paid off. The state’s oversight would then be consistent with the state’s responsibility for owning and operating the bridges. Another option would be to grant oversight authority to GAO or another federal entity. However, GAO’s primary function is to oversee the auditing of federal agencies and programs that spend federal funds, which the Commission does not do. If it is determined that governmental oversight by a state or federal entity is appropriate, the enabling federal legislation for the Commission would need to be modified. If the state of New York is to have oversight authority, that state’s law would have to be modified as well. In commenting on a draft of the report, the Commission recognized that some improvements may be needed in its management and internal controls and has issued a request for proposal to perform a comprehensive management and control review. We believe that this is a positive step, particularly if the proposed study includes the issues we have raised earlier in this chapter on possible steps for improving certain Commission operations. 
With regard to options for future oversight of the Commission, the Commission stated its belief that the Office of the New York State Comptroller does not have jurisdiction over any entity similar to the Commission. It also noted that it had previously requested that members of its congressional delegation seek legislation to give GAO authority to conduct periodic audits of its operations. While the Office of the New York State Comptroller does have experience auditing other entities similar to the Commission, such as the Peace Bridge in nearby Buffalo, New York, we believe that it would be inappropriate for us to specifically recommend whether OSC or GAO should be given permanent oversight authority over the Commission because such a decision is properly within the legislative purview of the federal and state governments. Nevertheless, we continue to believe that it may be beneficial for the Commission to receive periodic oversight from some appropriate governmental body.
Pursuant to a congressional request, GAO reviewed the operations of the Niagara Falls Bridge Commission, focusing on its: (1) efforts to finance and administer its 30-year capital program; and (2) internal controls used to ensure that its business affairs are conducted appropriately. GAO found that the Niagara Falls Bridge Commission: (1) is a complex undertaking that requires extensive coordination and agreements with several federal and state entities, as well as entry into the capital bond market; (2) began efforts to obtain agreements on historic preservation and environmental assessment as early as 1990, but its projects were delayed because of misunderstandings about approvals or agreements needed to implement the project; (3) financed its capital program in 1992 by issuing over $120 million in tax-exempt bonds and refinancing the debt a year later to take advantage of lower interest rates; (4) had new policies in place to guide the procurement and remuneration of commissioners but, in some instances, the Commission did not follow the policies consistently; and (5) made errors in payment for procurement and commissioner remuneration, and some attorneys' fees were not supported by detailed billings. In addition, GAO found that there are no federal or state laws that explicitly provide authority for governmental oversight of the Commission.
Established by Congress in 2000 as a separately organized agency within DOE, NNSA has the primary mission of providing the United States with safe, secure, and reliable nuclear weapons and maintaining core competencies in nuclear weapons science, technology, and engineering. To support this highly technical mission, NNSA relies on capabilities in several thousand facilities located at eight nuclear security enterprise sites that support weapons activities. These sites are owned by the government but managed and operated by private contractors, and each has specific research and development (R&D) and/or production responsibilities within the nuclear security enterprise. (See fig. 1.) In addition to implementing NNSA’s nuclear weapons programs, some sites also support additional missions such as U.S. Navy nuclear propulsion, nuclear nonproliferation activities, and work for other federal agencies such as the Departments of Defense and Homeland Security. NNSA’s Office of Defense Programs is responsible for NNSA’s weapons activities and oversees the sites’ management and operating (M&O) contractors to execute R&D and production work. NNSA reimburses its M&O contractors under cost-reimbursement-type contracts for the costs incurred in carrying out the department’s missions, and M&O contractors have the opportunity to periodically earn additional award fees and contract extensions based on annual performance assessments. Congress funds NNSA’s nuclear weapons mission through an appropriation titled Weapons Activities. Weapons Activities is organized by NNSA into 14 operating programs with more than 40 budget lines across four activity areas. In fiscal year 2009, Congress appropriated approximately $6.4 billion for Weapons Activities, broken down by NNSA into the four areas described in table 1. 
RTBF is the single largest program within NNSA’s Weapons Activities appropriation, with nearly $1.7 billion for fiscal year 2009, and encompasses 90 percent of NNSA’s funds designated in congressional spending directives for the Infrastructure area. A significant RTBF mission, executed through its Operations of Facilities subprogram, is to operate and maintain NNSA-owned programmatic capabilities in a state of readiness, ensuring that each capability—defined to include facilities, infrastructure, and supporting workforce—is operationally ready to execute programmatic tasks identified in ST&E and Stockpile Support. Congressional spending directives designated nearly $1.2 billion of the RTBF program funds in the Weapons Activities account, or about 70 percent, for RTBF Operations of Facilities at NNSA’s eight sites. (See app. II for additional discussion.) In 2006, NNSA and its sites sought to improve linkages between programmatic tasks and the facilities and infrastructure that support the nuclear weapons program. To do so, NNSA established three categories for its facilities and infrastructure that indicate the extent to which they are critical to the achievement of Stockpile Support and ST&E milestones: Mission Critical facilities and infrastructure—such as for nuclear weapons production, R&D, and storage—are used to perform activities to meet highest-level Stockpile Support and/or ST&E milestones, and without these facilities and infrastructure, operations would be disrupted or placed at risk. Mission Dependent, Not Critical facilities and infrastructure—such as for waste management, nonnuclear storage, and machine shops—play a supporting role in meeting Stockpile Support and/or ST&E milestones, and loss of these facilities and infrastructure would disrupt operations only if operations could not be resumed within 5 business days.
Not Mission Dependent facilities and infrastructure—such as cafeterias, parking structures, and excess facilities—do not have direct linkage to Stockpile Support or ST&E milestones but support secondary missions or quality-of-workplace initiatives. Together, Mission Critical and Mission Dependent, Not Critical facilities and infrastructure are deemed “mission essential.” In fiscal year 2009, NNSA categorized its over 4,500 facilities and infrastructure in these three categories. Across the entire nuclear security enterprise, over 200 facilities and infrastructure were deemed Mission Critical and over 1,400 were deemed Mission Dependent, Not Critical. Directed Stockpile Work is the second largest program within NNSA’s Weapons Activities appropriation, with nearly $1.6 billion in fiscal year 2009. The Directed Stockpile Work program is executed through four subprograms: Stockpile Services, the largest of these subprograms, with $866.4 million in fiscal year 2009, builds on weapons activities facilities and infrastructure to provide the foundational capabilities to conduct R&D and production work applicable to multiple warhead and bomb types. According to NNSA, the capabilities supported with Stockpile Services funds enable the achievement of other Directed Stockpile Work missions. Weapons Dismantlement and Disposition, with $190.2 million in fiscal year 2009, supports efforts to reduce the inventory of retired nuclear weapons and their components. The Life Extension Program, with $205 million in fiscal year 2009, represents one of NNSA’s two subprograms focused on specific warhead and bomb types. The Life Extension Program funds efforts to refurbish and extend the expected stockpile lifetime of legacy warheads and bombs for 20 to 30 years. Stockpile Systems funding supports ongoing sustainment activities for the active nuclear weapons stockpile, such as the exchange of components with limited lives and weapon-specific assessments. 
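The three-way categorization described above can be expressed as a simple decision rule. The sketch below is an illustrative simplification of NNSA's criteria; the function names and boolean inputs are invented, not NNSA's actual methodology.

```python
def categorize_facility(linked_to_milestones, resumable_within_5_days):
    """Illustrative decision rule for NNSA's three facility categories.

    linked_to_milestones: whether the facility supports Stockpile
    Support and/or ST&E milestones.
    resumable_within_5_days: whether operations could resume within
    5 business days if the facility were lost.
    """
    if not linked_to_milestones:
        return "Not Mission Dependent"
    if resumable_within_5_days:
        return "Mission Dependent, Not Critical"
    return "Mission Critical"

def is_mission_essential(category):
    # Mission Critical and Mission Dependent, Not Critical facilities
    # are together deemed "mission essential".
    return category != "Not Mission Dependent"
```

Under this rule, a nuclear weapons production facility whose loss would halt milestone work indefinitely is Mission Critical, while a machine shop whose function could be restored within a week is Mission Dependent, Not Critical.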
In fiscal year 2009, Congress directed $328.5 million for these activities, which NNSA prioritized among specific weapon and bomb types. NNSA reimburses its M&O contractors for the costs incurred in carrying out NNSA’s missions. These include costs that can be directly identified with a specific NNSA program (known as direct costs)—for example, the costs for dismantling a retired weapon—and costs of activities that indirectly support a program (known as indirect costs), such as administrative activities. To ensure that NNSA programs are appropriately charged for incurred costs, M&O contractors’ accounting systems assign the direct costs associated with each program and collect similar types of indirect costs into pools and allocate them among the programs. Consistent with Cost Accounting Standards (CAS), M&O contractors must classify their costs as either direct or indirect, and once costs are classified, must consistently charge their costs. M&O contractors are required to disclose their cost accounting practices in formal disclosure statements, which are updated annually and approved by NNSA officials. M&O contractors’ cost accounting practices cannot be readily compared with one another because contractors’ methods for accumulating and allocating indirect costs vary—that is, a cost classified as an indirect cost at one site may be classified as a direct cost at another. NNSA has developed national work breakdown structures for RTBF Operations of Facilities and for Stockpile Services, management tools that define the scope of work associated with the two subprograms. (See app. II and app. III for these fiscal year 2009 work breakdown structures.) In March 2009, we issued a cost estimating guide, a compilation of cost estimating best practices from across industry and government. 
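The pooling and allocation of indirect costs described above can be sketched as follows. The allocation base used here (each program's share of total direct costs), the program names, and the dollar figures are all hypothetical; actual M&O contractor practices vary by site, which is precisely why their full costs cannot be readily compared.

```python
def allocate_indirect(direct_costs, indirect_pool):
    """Allocate an indirect cost pool among programs in proportion to
    each program's share of total direct costs, returning full costs.

    This uses one common allocation base for illustration; real
    contractors disclose and use a variety of bases.
    """
    total_direct = sum(direct_costs.values())
    return {
        program: direct + indirect_pool * direct / total_direct
        for program, direct in direct_costs.items()
    }

# Hypothetical figures: two programs share a $30M administrative pool.
full_costs = allocate_indirect(
    {"Dismantlement": 40_000_000, "Stockpile Services": 60_000_000},
    indirect_pool=30_000_000,
)
```

Because each site may classify a given activity as direct or indirect, the same underlying work can surface as a direct charge at one site and inside a pool like this at another.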
Among other things, these best practices discuss establishing product-oriented work breakdown structures, where a product is defined as an output and 100 percent of the work associated with achieving that output. Product-oriented work breakdown structures allow a program to track cost and schedule by defined deliverables, promote accountability by identifying work products that are independent of one another, and provide a basis for identifying resources and tasks for developing a program cost estimate. The ability to generate reliable cost estimates is a critical function, and a program’s cost estimate is often used to establish budgets. While individual M&O contractors account for the activities included in NNSA’s work breakdown structures according to their own accounting practices and these practices vary, NNSA is required to provide reliable and timely information on the full cost of its programs because this information is crucial for effective management of government operations and for oversight. Full costs include direct and indirect costs that contribute to programs, regardless of funding sources. To meet this requirement, NNSA needs complete and reliable information from its M&O contractors so that it can determine the full (or total) costs of its programs. We have previously reported on NNSA’s lack of managerial cost accounting systems for its programs, particularly with respect to stockpile life extension programs. NNSA cannot accurately identify the total costs to operate and maintain weapons activities facilities and infrastructure because of differences in sites’ cost accounting practices.
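A product-oriented work breakdown structure of the kind these best practices describe can be modeled as a tree in which each element's cost rolls up from its children to the deliverable. The element names and dollar figures below are invented for illustration; they do not reflect any actual NNSA work breakdown structure.

```python
def rollup_cost(node):
    """Total cost of a WBS element: its own direct cost plus the
    rolled-up costs of all child elements, so the top-level product
    captures 100 percent of the work associated with the output."""
    return node.get("cost", 0) + sum(
        rollup_cost(child) for child in node.get("children", [])
    )

# Hypothetical product-oriented WBS fragment (costs in $ millions).
wbs = {
    "name": "Component capability",
    "children": [
        {"name": "Design and qualification", "cost": 12},
        {"name": "Production", "children": [
            {"name": "Fabrication", "cost": 20},
            {"name": "Assembly and testing", "cost": 8},
        ]},
        {"name": "Allocated common support", "cost": 5},
    ],
}
```

Note that common support costs appear as an element under the product they support, consistent with the cost guide's recommendation, rather than being scattered across functional categories.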
NNSA does not require sites to report the total cost to execute their RTBF Operations of Facilities work scope, but the results of our analysis of sites’ responses to our data collection instrument showed that the total cost to execute the RTBF Operations of Facilities work scope likely significantly exceeds the budget for the RTBF Operations of Facilities program justified to Congress. Efforts are under way to revise NNSA’s work breakdown structure that includes RTBF Operations of Facilities. According to NNSA officials, once the revised work breakdown structure is fully implemented it will capture these total costs, but NNSA will not begin collecting this information until 2011. Each of the eight sites in the nuclear security enterprise has established its own practices for how to account for the activities necessary to operate and maintain weapons activities facilities and infrastructure. While individual M&O contractors are required to be CAS compliant, differences in their cost accounting practices preclude NNSA from being able to identify the total costs to operate and maintain the facilities and infrastructure essential to achieving Stockpile Support and ST&E program missions. These differences include determining (1) which weapons activities facilities and infrastructure individual sites support with RTBF Operations of Facilities funds, (2) which activities included in the RTBF Operations of Facilities work breakdown structure each site supports directly or indirectly, and (3) the additional funding sources sites use to support certain activities included in the RTBF Operations of Facilities work breakdown structure. (For a detailed discussion of the differences in M&O contractors’ cost accounting practices, see app. II.) For example, while NNSA has identified the Mission Critical facilities and infrastructure at each of its sites, NNSA does not require M&O contractors to pay for them with RTBF Operations of Facilities funds.
In fiscal year 2009, Pantex fully funded the RTBF Operations of Facilities work scope at all of its Mission Critical facilities with RTBF Operations of Facilities funds. In contrast, LANL partially funded the RTBF Operations of Facilities work scope at the majority, but not all, of its Mission Critical facilities with RTBF Operations of Facilities funds. Six of the eight sites in the nuclear security enterprise reported to us that in fiscal year 2009 they allocated the costs of certain activities included in the RTBF Operations of Facilities work scope into indirect cost pools, including the costs of activities such as utilities purchasing and real property maintenance. These indirect cost pools are often funded through multiple funding sources. All sites used funding in addition to RTBF Operations of Facilities funds to pay for activities included in the RTBF Operations of Facilities work scope in fiscal year 2009. In response to our data collection instrument, site officials identified 11 sources of funding congressionally directed for other Weapons Activities programs and subprograms that they expended, in part, on activities they considered to be included in NNSA’s RTBF Operations of Facilities work breakdown structure. In addition, some sites have developed user fee or cost recovery models for multiprogram facilities. These models are generally based on charges to programmatic users based on rates applied to, for example, the square footage of a facility users occupy or the volume of waste they produce. User fees or cost recovery may be charged as direct costs to Weapons Activities programs as well as to other programs and projects, or they may be charged through an indirect cost pool. As a result of these differences, NNSA cannot reliably identify the total costs to operate and maintain these facilities and infrastructure across the nuclear security enterprise. 
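A cost recovery model of the kind some sites have developed, here charging programmatic users by the square footage they occupy, might look like the following sketch. The facility cost, tenant names, and footprints are hypothetical, and actual site models may also use other bases such as waste volume.

```python
def recover_facility_costs(annual_operating_cost, occupancy_sqft):
    """Compute a per-square-foot rate that fully recovers a facility's
    operating cost, then charge each programmatic user by footprint."""
    total_sqft = sum(occupancy_sqft.values())
    rate = annual_operating_cost / total_sqft
    return {user: rate * sqft for user, sqft in occupancy_sqft.items()}

# Hypothetical multiprogram facility: $5M operating cost, three tenants.
charges = recover_facility_costs(
    5_000_000,
    {"Weapons Activities": 60_000, "Nonproliferation": 30_000, "Other": 10_000},
)
```

Whether such charges reach programs as direct costs or through an indirect pool depends on each contractor's disclosed accounting practices, which is one reason total facility costs are hard to compare across sites.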
Rather, NNSA officials can only accurately identify the direct costs to the RTBF Operations of Facilities program, and in some instances, the direct costs to other Weapons Activities programs. Senior NNSA officials in the RTBF Program Office acknowledged that NNSA does not know the sites’ baseline costs to fully execute RTBF Operations of Facilities work scope, and NNSA does not require M&O contractors to track their sites’ total operations and maintenance costs for weapons activities facilities and infrastructure. Instead, NNSA officials told us they rely on individual contractors to know this information for their sites as a basis for formulating budget requests; however, some contractors did not identify a total cost for their sites’ weapons activities facilities and infrastructure. For example, when we asked, M&O contractors from two sites—Y-12 and LANL—did not provide the total cost to operate and maintain weapons activities facilities and infrastructure at their sites. LANL did not provide this information because site officials could not determine the extent to which costs charged against indirect cost pools were associated with activities included in the RTBF Operations of Facilities work scope. Y-12 did not provide this information because, according to officials, while their management system is capable of identifying this information, it cannot do so readily with accuracy. The total costs to operate and maintain weapons activities facilities and infrastructure likely significantly exceed the amount NNSA justified to Congress in the President’s Weapons Activities budget request for RTBF Operations of Facilities and that Congress directed to NNSA’s sites in fiscal year 2009. 
While NNSA requires M&O contractors to report information on their direct costs to the RTBF Operations of Facilities program, NNSA does not require M&O contractors to report on the total sitewide operation and maintenance costs for their weapons activities facilities and infrastructure. NNSA officials acknowledged that a more accurate figure for total costs to support the enterprisewide work scope for RTBF Operations of Facilities would include these other funding sources M&O contractors use to operate and maintain weapons activities facilities and infrastructure. As reported above, when we asked, not all M&O contractors determined the total cost to operate and maintain weapons activities facilities and infrastructure at their sites. However, for the six contractors that did so, the cost to fully operate and maintain weapons activities facilities and infrastructure greatly exceeded the amount of funding for RTBF Operations of Facilities in fiscal year 2009. Congressionally directed RTBF Operations of Facilities funding for these six sites in fiscal year 2009 totaled approximately $558.6 million, but their estimated fiscal 2009 expenditures for this work scope drawn from all funding sources totaled approximately $1.1 billion. Officials from the two M&O contractors that did not provide the total costs to operate and maintain weapons activities facilities and infrastructure at their sites also told us that their expenditures for this purpose in fiscal year 2009 exceeded their congressionally directed RTBF Operations of Facilities funds, as funding from other programs also contributed. NNSA’s congressional budget justification for RTBF Operations of Facilities is not based on total cost information, and it does not fully support the scope of work it describes. 
NNSA officials and M&O contractors told us that RTBF program representatives from all of the sites are working closely together and with NNSA to develop an updated national RTBF work breakdown structure that will be integrated into a larger national work breakdown structure for all of the activities overseen by the Office of Defense Programs. The revised Defense Programs work breakdown structure, once implemented, is to more closely align activities, including RTBF activities, at the sites with the nuclear weapons R&D and production capabilities they support. Moreover, according to NNSA officials, NNSA envisions the sites using the revised work breakdown structure for budget formulation, budget execution, and cost collection, unlike the current RTBF work breakdown structure, which is used only for program management during a single fiscal year. NNSA has asked that the sites begin submitting their RTBF program budget requests using the revised Defense Programs work breakdown structure format. NNSA and site officials agreed that the revised work breakdown structure should help better explain how RTBF supports the core missions of the weapons complex and the base capabilities needed to support those missions. NNSA officials expect the first phase of revisions to the Defense Programs work breakdown structure to be completed around the end of 2010. Starting in 2011, NNSA officials said they plan to begin efforts to further enhance the revised work breakdown structure by including total cost information for operating and maintaining weapons activities facilities and infrastructure to support future budget formulation activities. While this total cost information will not be wholly captured within the portion of the revised Defense Programs work breakdown structure associated with RTBF Operations of Facilities, according to NNSA officials total cost information will be captured in the revised work breakdown structure as a whole. 
Differences in how sites pay for RTBF Operations of Facilities activities—and weapons activities facilities and infrastructure—will persist under the revised work breakdown structure. However, NNSA officials said once the revised Defense Programs work breakdown structure is fully implemented, NNSA will have a tool to collect consistent cost information from contractors’ disparate cost accounting systems. While in total NNSA’s Stockpile Services work breakdown structure for fiscal year 2009 reflects $866.4 million in work scope as justified to Congress, the work breakdown structure does not fully identify or provide the estimated costs of the products or capabilities supported through the Stockpile Services program. Rather, the work breakdown structure is organized largely around work functions and only partially by specific products or capabilities. (See app. III for a more detailed Stockpile Services work breakdown structure.) NNSA officials told us that the largely functionally oriented work breakdown structure for Stockpile Services in total captures all the work activities associated with providing foundational programmatic capabilities for R&D and production capacity across the nuclear security enterprise. In addition, they said the work breakdown structure for Stockpile Services is a useful management tool for executing work functions across products and deliverables. However, the organization of much of the work breakdown structure precludes the ready identification of base capabilities and their costs. For example, the activities included in the Stockpile Services work scope range widely, from basic infrastructure support to the manufacturing of actual weapons components, often without specifically identifying the products or capabilities they are supporting. The exception is Plutonium Sustainment, the one group that is product-based and that better aligns work activities with the product or capability it is ultimately supporting.
The five work activity groups in the Stockpile Services work breakdown structure are as follows (see app. III for more detailed descriptions of these activities):

Production Support ($293.1 million in fiscal year 2009) includes non-weapon-type-specific or multi-weapon-type activities that a site performs to support its own production mission, whatever that mission might be. Examples of these activities include engineering and manufacturing operations; quality supervision and control; and tool, gauge, and equipment services.

Management, Technology, and Production (MTP) ($195.3 million in fiscal year 2009) includes activities that (1) sustain and improve stockpile management, (2) develop and deliver weapon use control technologies, and (3) result in production of weapons components for use in multiple warhead and bomb types. In contrast to Production Support activities that are focused on individual sites' production missions, MTP includes those activities that benefit the nuclear security enterprise as a whole.

R&D Certification and Safety ($187.6 million in fiscal year 2009) provides the underlying capabilities to mature basic research conducted in ST&E programs and serves as a technology development bridge between research and weaponized technologies. Activities support design work to develop certain multisystem limited life weapon components; the specialized facilities, equipment, and personnel to maintain a base capability to perform hydrodynamic tests and subcritical experiments; and the preparation of various types of studies.

R&D Support ($35.1 million in fiscal year 2009) consists largely of administrative and infrastructure support activities for sites' R&D missions. These activities include program management for and coordination of Stockpile Services' many different outputs, R&D quality control, computing hardware for personnel, and financial database maintenance.
Plutonium Sustainment ($155.3 million in fiscal year 2009) captures work activities associated with pit manufacturing and related R&D, as well as associated indirect and overhead costs. These funds not only support the base capabilities for pit manufacturing, but also contribute to the operation and maintenance of the facilities and infrastructure necessary to conduct these activities and the actual manufacturing of a limited number of pits each year.

Neutron generators are designed and manufactured at SNL. According to SNL officials, between $2.7 million and $12.2 million in fiscal year 2009 Production Support funds paid for activities that could be considered RTBF Operations of Facilities work scope related to operating and maintaining SNL's neutron generator facilities and infrastructure. Activities associated with neutron generator R&D and production are distributed across several parts of the Stockpile Services work breakdown structure and are not combined by NNSA either to provide a total accounting of the activities necessary to sustain the neutron generator capability or to determine the total costs of these activities. Furthermore, common support costs—such as program management—are not allocated to the neutron generator capability. Our cost guide states that common support costs should be included in the work breakdown structures of their associated products or capabilities.

In fiscal year 2009, NNSA and its M&O contractors spent $45.9 million in MTP production funds to provide the capabilities, testers, engineering resources, and judgment tools that facilitate enterprisewide interpretation of data and information regarding the condition of systems, subsystems, and components in the stockpile. This work supports assessment of warhead reliability, as well as ongoing laboratory safety, security, and use control evaluations. Surveillance capabilities include simulating and testing the effects of vibration, shock, acceleration, temperature, and radiation environments on weapons and their components. The illustration below shows a shaker table, used for testing the effects of vibration on weapons components.
Information from surveillance testing is used to assess the condition of systems, subsystems, and components in the stockpile. According to an NNSA official, surveillance costs in Stockpile Services for fiscal year 2009 could be as high as $100 million to $130 million, depending on the extent to which costs are included for activities that support both surveillance and other capabilities. For example, certain tools may be used for surveillance and for other production missions. The $45.9 million identified as the costs for surveillance does not include the costs to maintain or upgrade those tools. Rather, NNSA tracks the costs for tooling as a function across all products and capabilities. In its fiscal year 2011 congressional budget justification for both R&D Certification and Safety and MTP, NNSA discusses funding to support the surveillance testing capabilities. The budget justification provides no explanation for why funding in both activity groups is requested and does not identify the total amount requested for Stockpile Services surveillance activities. NNSA's ongoing effort to revise the Defense Programs work breakdown structure includes revising the portion associated with Stockpile Services. The primary purpose of this effort is to provide better evidence to support assertions made in congressional budget justifications. Our analysis shows that the revised work breakdown structure, once fully implemented, will better identify products and capabilities supported through Stockpile Services and provide improved total cost information. NNSA is planning to "tag" individual activities in the revised Defense Programs work breakdown structure, including Stockpile Services activities, to identify the products and capabilities with which those activities are associated, where possible. This will allow officials to aggregate activities (and their costs) by product or capability as necessary within the Defense Programs work breakdown structure.
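Purely as an illustration of the tagging-and-aggregation idea described above, the following sketch shows how activities in a functionally organized work breakdown structure could be labeled with the product or capability they support and their costs rolled up by that label. The activity groups mirror those in the Stockpile Services structure, but the tags and dollar figures are invented for the example, not NNSA data.

```python
# Illustrative sketch only (hypothetical costs and tags, not NNSA data):
# tag each work breakdown structure (WBS) activity with the product or
# capability it supports, then aggregate costs by tag.
from collections import defaultdict

activities = [
    # (functional group, cost in $M, product/capability tag or None)
    ("Production Support", 5.0, "neutron generators"),
    ("R&D Support", 2.5, "neutron generators"),
    ("MTP", 45.9, "surveillance"),
    ("Production Support", 12.0, None),  # not yet tagged
]

def cost_by_capability(activities):
    """Sum activity costs by product/capability tag; untagged costs are
    reported under 'unallocated' rather than silently dropped."""
    totals = defaultdict(float)
    for _group, cost, tag in activities:
        totals[tag if tag is not None else "unallocated"] += cost
    return dict(totals)

print(cost_by_capability(activities))
# {'neutron generators': 7.5, 'surveillance': 45.9, 'unallocated': 12.0}
```

Keeping the functional group alongside the tag preserves the existing function-oriented management view while enabling the product-oriented rollup.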
NNSA officials also said that current plans include tagging indirect or overhead costs. According to NNSA officials, fully realizing the revised Defense Programs work breakdown structure will give federal program managers a tool to collect consistent cost information from disparate contractor cost accounting systems on the products supported through Stockpile Services. Reducing the size of the stockpile, as recently negotiated in the New Strategic Arms Reduction Treaty (if ratified) and reinforced by the 2010 Nuclear Posture Review, is unlikely to significantly affect NNSA's RTBF Operations of Facilities and Stockpile Services costs, which represent about one-third of NNSA's total nuclear weapons program budget. A sizable portion of these costs is fixed and represents the costs of maintaining the base capabilities necessary to ensure that the nuclear weapons stockpile continues to be safe, secure, and reliable without underground nuclear testing. NNSA and its sites are working to reduce fixed costs and to bring these costs into line with base capabilities by modernizing and downsizing facilities and infrastructure and by eliminating excess production and experimental capacity. However, NNSA lacks information on the costs of these base capabilities that could adequately justify planned budget increases, particularly with respect to infrastructure investment. NNSA and site officials identify the scope of work captured in the RTBF Operations of Facilities and Stockpile Services work breakdown structures as providing the base capabilities necessary to conduct the ST&E and system-specific work that ensures the continued safety, security, and reliability of the nuclear weapons stockpile without underground nuclear testing. According to NNSA and site officials, most of the base capabilities these programs provide would need to be maintained even if the size of the stockpile were significantly reduced.
Furthermore, NNSA and site officials identify the majority of the costs associated with these base capabilities as fixed and thus relatively insensitive to stockpile size. NNSA recently analyzed its fiscal year 2008 costs to determine the extent to which these costs represented the fixed or variable costs of sustaining the nuclear security enterprise. NNSA’s resulting analysis showed that 100 percent of RTBF cost is fixed for certain capabilities, including high explosives and weapons assembly/disassembly facilities and infrastructure. In addition, the analysis showed that between 85 and 90 percent of cost was fixed for nonnuclear components and plutonium and uranium work. Many of these costs are included in the RTBF Operations of Facilities and Stockpile Services work scopes. While we were unable to independently verify NNSA’s analysis, during the course of our review we did observe the relatively fixed nature of the infrastructure and activities necessary to maintain base capabilities. For example, an NNSA official estimated that the base capability cost for pit manufacturing is about $120 million in Stockpile Services funds annually, in comparison with overall Plutonium Sustainment funding for fiscal year 2009 of $155.3 million. Plutonium Sustainment funding also included production-related R&D costs, as well as incremental costs for actual component manufacturing. In addition, officials from several sites highlighted equipment that may be operated for only a limited portion of each year but that still must be maintained and operated when needed. Officials at Y-12 noted that the fixed costs to maintain certain of these capabilities currently exceed the value of their output; however, to ensure that Stockpile Support and ST&E missions are achieved, these capabilities must be maintained. 
While base capability costs for the nuclear security enterprise are unlikely to significantly decline as a result of stockpile reductions, a primary purpose of NNSA's effort to modernize the nuclear security enterprise is to reduce the overall level of fixed costs at and among sites by consolidating infrastructure and reducing capacity to base levels without compromising national security. According to an NNSA official, 10 years from now one-third of NNSA's total existing facilities and infrastructure will be in excess of programmatic need. Furthermore, NNSA's modernization plans call for consolidating experimental capabilities among sites within the complex and for reducing excess production capacity. We previously reported on efforts at several sites, including LANL and LLNL, to reduce or eliminate storage of significant quantities of weapons-grade special nuclear material in site facilities. We also recently reported on progress to replace KCP infrastructure with a new, modern facility that NNSA expects to result in significantly reduced operations and maintenance costs for that site. Other efforts include construction of the new Highly Enriched Uranium Materials Facility at Y-12, which will enable closure of several older storage facilities at the Y-12 site. In addition, facility disposition at multiple sites, including LANL, LLNL, NTS, Pantex, and Y-12, will reduce both ongoing maintenance costs and deferred maintenance backlogs. Consolidation of equipment at NTS will reduce maintenance costs. While base capability costs appear to be relatively insensitive to changes in the stockpile, complete and reliable information about the costs of these capabilities is necessary for sound program management and to help inform future planning.
This is particularly important in the current political and budgetary environment, in which stockpile reductions are anticipated, and the Administration has planned to increase budget requests for Weapons Activities by $4.25 billion over the fiscal year 2010 enacted level between fiscal years 2011 and 2015. This planned budget increase is envisioned in part to ensure adequate support to maintain and improve base capabilities, including infrastructure recapitalization and replacement. In such an environment, NNSA is likely to face increased scrutiny of its planning, programming, and budget execution to determine the effect of funding increases on the overall health of base capabilities. In the past, Congress, we, and NNSA have examined different ways of generating information on the costs of the nuclear weapons program that would be useful to NNSA management and congressional decision makers for planning purposes. In 2000 we recommended that NNSA develop a method to relate its program structure to DOE's cost accounting considerations so that fixed and variable costs of the program's activities could be determined and made available when the program makes its annual budget submission. In fiscal year 2005, NNSA reorganized its budget structure in response to congressional appropriations committees, which instructed NNSA to begin budgeting by warhead and bomb type—another way to understand program costs. The current budget structure does identify some type-specific information. However, NNSA and site officials have continued to caution against allocating RTBF and Stockpile Services costs to specific warhead or bomb types, stating that allocating fixed costs does not provide additional information beyond what is already available and could prove to be misleading; in the event of stockpile reductions, fixed costs would simply be reallocated across remaining warhead and bomb types and fail to produce the significant cost savings that might be anticipated.
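The officials' caution can be illustrated with a simple worked calculation. The figures below are invented for the example (they are not NNSA numbers): when most of a program's cost is fixed, eliminating warhead and bomb types mostly reallocates the fixed cost across the remaining types rather than saving it.

```python
# Hypothetical figures in $M, for illustration only (not NNSA data).
def total_cost(fixed, variable_per_type, n_types):
    """Fixed base capability cost plus variable cost per warhead/bomb type."""
    return fixed + variable_per_type * n_types

fixed = 900.0        # base capability cost, insensitive to stockpile size
var_per_type = 20.0  # incremental cost incurred per warhead/bomb type

before = total_cost(fixed, var_per_type, 8)  # 1060.0
after = total_cost(fixed, var_per_type, 5)   # 1000.0

# Types drop 37.5 percent, but total cost drops only about 5.7 percent;
# the fixed cost allocated to each remaining type rises from 112.5 to 180.0.
print(before, after, round(100 * (before - after) / before, 1))
```

Allocating the fixed $900 million across five types instead of eight changes each type's reported cost share substantially while leaving the total nearly unchanged, which is why such allocations can mislead.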
Statement of Federal Financial Accounting Standards (SFFAS) No. 4, Managerial Cost Accounting Standards, states a general standard for federal agencies to provide reliable and timely information on the full cost of federal programs. The principal purpose of SFFAS No. 4 is to determine the cost of delivering a program or output to allow an organization to assess the reasonableness of this cost or to establish a baseline for comparison. Congressional appropriations committees sought to define individual warhead and bomb types as NNSA’s programs; however, since 2005 NNSA has defined its programs as a mix of individual warhead and bomb types, production and R&D functions that support multiple warhead and bomb types, facilities and infrastructure support, and other supporting programs such as security. In part, NNSA has done so because DOE’s accounting guidance does not require NNSA to allocate basic R&D costs and certain infrastructure capacity costs. Also, by identifying RTBF and Stockpile Services as programs, NNSA has identified in its budget structure costs it has determined are fixed. Going forward, NNSA appears to be moving toward a budget structure aimed at ensuring sufficient funding to sustain base capabilities and to identify additional funding that may be necessary to modernize capabilities or to achieve a level of research or production capacity above the base level. NNSA currently lacks the total cost information about its existing programs to ensure it can accurately identify the costs of its base capabilities for future budget justifications. Through its ongoing effort to revise its Defense Programs work breakdown structure, which includes portions associated with RTBF Operations of Facilities and Stockpile Services, NNSA has the opportunity to capture this information. 
More specifically, NNSA’s preliminary revisions to its national work breakdown structure for RTBF Operations of Facilities reorients the work breakdown structure around capabilities and products; highlights Mission Critical facilities that support these capabilities; and identifies three types of costs to support these capabilities: (1) operations, which represents the current program; (2) risk reduction, which includes costs above base capability to support facility and equipment upgrades; and (3) transformation, which includes costs to replace facilities and infrastructure or otherwise significantly invest in their modernization. These revisions are positive developments that we believe will enable NNSA to improve its understanding of facilities and infrastructure costs paid for with congressionally directed RTBF Operations of Facilities funds and to improve the transparency of its RTBF Operations of Facilities budget justification. According to NNSA officials, once the revised work breakdown structure for all of Defense Programs has been fully implemented, it should allow NNSA to capture information on the total costs to operate and maintain weapons activities facilities and infrastructure, not just those costs paid for with congressionally directed funds for RTBF Operations of Facilities. In the absence of total cost information, according to a senior NNSA official, NNSA is challenged to balance operations and maintenance costs with recapitalization projects and with large facility replacement projects. According to NNSA officials, the portion of the revised Defense Programs work breakdown structure for Stockpile Services will include a reorientation around capabilities and products, where possible. 
While several NNSA officials said that improving cost estimating is not a primary impetus for revising the Stockpile Services work breakdown structure because all Stockpile Services costs are fixed, officials responsible for revising the Defense Programs work breakdown structure told us that doing so will help achieve transparent cost reporting from disparate contractor cost accounting systems, regardless of the fixed nature of these costs. Without identifying the total costs of Stockpile Services-supported products and capabilities, NNSA will be challenged to explain the effects of funding changes or justify the necessity for increased investment to support or enhance base capabilities. It is important to recognize that having a product- or capability-oriented work breakdown structure for Stockpile Services that includes associated support costs should not reduce NNSA’s or its M&O contractors’ flexibility to manage Stockpile Services activities by function. Within the global community, the Administration, and Congress, a bargain is being struck on nuclear weapons policy. Internationally, if the treaty is ratified, significant stockpile reductions have been negotiated between the United States and Russia. Domestically, a new Nuclear Posture Review has provided an updated policy framework for the nation’s nuclear deterrent. To enable this arms reduction agenda, the Administration is requesting from Congress billions of dollars in increased investment in the nuclear security enterprise to ensure that base scientific, technical, and engineering capabilities are sufficiently supported such that a smaller nuclear deterrent continues to be safe, secure, and reliable. For its part, NNSA must accurately identify these base capabilities and determine their costs in order to adequately justify future presidential budget requests and show the effects on its programs of potential budget increases. 
As it now stands, NNSA may not be accurately identifying the costs of base capabilities because (1) without guidance to M&O contractors for consistent reporting, NNSA cannot identify the total costs to operate and maintain essential weapons activities facilities and infrastructure, and (2) NNSA analyzes the reported costs of R&D and production functions without fully identifying these functions with the specific capabilities supported through Stockpile Services. Without taking action to identify these costs, NNSA risks being unable to demonstrate the return on planned budget increases in terms of the health of its base capabilities or to identify opportunities for cost savings. NNSA has the opportunity to mitigate these risks by addressing them through the ongoing revision of work breakdown structures and through identifying means of collecting the total costs of its base capabilities from M&O contractors, which will not necessitate any changes to the way that Weapons Activities programs are budgeted or how funds are expended. Without taking these actions, NNSA will not have the management information it needs to better justify future budget requests by making its justifications more transparent. Additionally, the availability of this information will assist Congress with its oversight function. We recommend that the Administrator of NNSA take the following five actions.
To allow Congress to better oversee management of the nuclear security enterprise and to improve NNSA’s management information with respect to the base capabilities necessary to ensure nuclear weapons are safe, secure, and reliable: (1) develop guidance for M&O contractors for the consistent collection of information on the total costs to operate and maintain weapons activities facilities and infrastructure; (2) require M&O contractors to report to NNSA annually on the total costs to operate and maintain weapons activities facilities and infrastructure at their sites; (3) evaluate the total costs of operating and maintaining existing weapons activities facilities and infrastructure as part of program planning processes and budget formulation, especially in relation to recapitalization and modernization of the nuclear security enterprise; and (4) once the Stockpile Services work breakdown structure reflects a product or capability basis, use this work breakdown structure to develop product/capability cost estimates that adequately justify the congressional budget request for Stockpile Services. In light of significant proposed increases to NNSA’s nuclear weapons program budget in fiscal year 2011 and beyond, we also recommend that the Administrator of NNSA: (5) include in future years’ congressional budget justifications (a) detailed justifications for how these proposed funding increases will affect program execution and (b) information about how the funding increases affected programs. We provided a draft of this report to NNSA for its review and comment. NNSA agreed with the report and its recommendations. NNSA’s comments on our draft report are presented in appendix IV. NNSA and several of its sites also provided technical comments, which we incorporated into the report as appropriate. In particular, we worked with NNSA officials to ensure the technical accuracy of the discussion of NNSA’s efforts to revise the Defense Programs national work breakdown structure. 
Because this effort is ongoing, we and NNSA wanted to ensure that information included in this report is as current and complete as possible. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of NNSA, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. The Chairman and Ranking Member, Subcommittee on Strategic Forces, Committee on Armed Services, House of Representatives, asked us to (1) determine the extent to which the National Nuclear Security Administration's (NNSA) Readiness in Technical Base and Facilities (RTBF) Operations of Facilities congressional budget justification that supplements the Budget of the United States Government (i.e., the President's Budget) for fiscal year 2009 is based on the total cost of operating and maintaining weapons facilities and infrastructure; (2) determine the extent to which NNSA's fiscal year 2009 congressional budget justification for Stockpile Services identifies the total costs of providing foundational research and production support capabilities; and (3) discuss the implications, if any, of a smaller stockpile on RTBF Operations of Facilities and Stockpile Services costs. In conducting our review and to accomplish all of these objectives, we reviewed and analyzed relevant documents concerning NNSA's weapons programs and activities, such as NNSA's congressional budget justifications for fiscal years 2009, 2010, and 2011 and the fiscal year 2009 national work breakdown structures for RTBF Operations of Facilities and Stockpile Services (see apps. II and III). We analyzed NNSA's work breakdown structures and compared them with GAO's best practices, as published in the GAO Cost Estimating and Assessment Guide: Best Practices for Developing and Managing Capital Program Costs.
To help assess the merits and requirements of indirect cost allocations to warhead and bomb types, we examined the Statement of Federal Financial Accounting Standards No. 4, promulgated by the Federal Accounting Standards Advisory Board, and Cost Accounting Standards, promulgated by the U.S. Cost Accounting Standards Board. In addition, we interviewed key officials from the Department of Energy's Office of the Chief Financial Officer and Office of Engineering and Construction Management, and NNSA's Office of Defense Programs, Office of Management and Administration, Office of Field Financial Management, and site offices. Furthermore, we collected and analyzed budget, cost, and program documents and interviewed key officials from all eight NNSA sites. We visited six of the eight sites, including Lawrence Livermore (LLNL), Los Alamos (LANL), and Sandia National Laboratories (SNL); Nevada Test Site (NTS); Pantex Plant (Pantex); and Y-12 National Security Complex (Y-12), where in total we toured more than 30 weapons activities facilities. All of these facilities were Mission Critical—directly employed to meet highest-level NNSA weapons program milestones. We selected these facilities based upon the following criteria: (1) their uniqueness within the nuclear security enterprise, (2) the importance of the capabilities provided by the facilities, and (3) the complexity of their operations. We went to these sites to understand their roles in weapons program activities and the nuclear weapons budget, and to see the facilities, equipment, and infrastructure within the nuclear security enterprise.
To determine the extent to which NNSA’s RTBF Operations of Facilities congressional budget justification for fiscal year 2009 is based on the total cost of operating and maintaining weapons facilities and infrastructure, we also collected data from NNSA’s eight sites on their facilities and the sources of funding they use to fully support the operations and maintenance of weapons activities facilities and infrastructure. These data were collected through the use of a data collection instrument we developed and transmitted electronically to officials identified at all eight sites in the form of a Word Electronic Questionnaire. The data collection instrument was used to obtain RTBF program information and fiscal year 2009 expenditure data. The practical difficulties of employing any data collection instrument may introduce unwanted discrepancies. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the individual characteristics of the people who respond can introduce unwanted variability into the results. We included steps in both the data collection and data analysis stages to minimize such discrepancies. For example, we took the following steps: In developing this data collection instrument, we consulted with stakeholders within GAO and with NNSA officials to properly phrase our questions and to format the instrument; we also pretested the instrument with NNSA officials in the Office of Defense Programs and the NNSA Service Center, and with NTS, Pantex, and SNL management and operating (M&O) contractors, on a line-by-line basis to ensure the questions were clear, complete, and accurate, and made appropriate modifications and clarifications to increase data validity and reliability. 
Upon receiving responses from the sites to the data collection instrument, we analyzed data on costs, budget, work scope, direct funding sources, and indirect cost pools on a consistent basis for all sites; we followed up with sites as needed to ensure their responses were accurate and complete; and finally, we performed a reliability assessment of these data and determined they were sufficiently reliable for the purposes of our report. In addition, we reviewed NNSA documents such as the RTBF Operations of Facilities national work breakdown structure and the RTBF Mission Dependency Guidance, and sites' documents such as their RTBF Site Execution Plans and RTBF Quarterly Reports, and interviewed NNSA and site officials. We also requested general information and general fiscal year 2009 funding information from sites on several specific weapons activities facilities to use as examples in this report. We worked with GAO methodologists to develop criteria for selecting the facility examples, such as facilities at sites we visited, facilities at both laboratories and plants, facilities with diverse funding expenditures, and facilities conducting both R&D and production missions. To determine the extent to which NNSA's fiscal year 2009 congressional budget justification for Stockpile Services identifies the total costs of providing foundational research and production support capabilities, we also examined and analyzed NNSA's Stockpile Services national work breakdown structure and NNSA's expenditure data for fiscal year 2009, observed neutron generator and plutonium pit manufacturing facilities supported with Stockpile Services funds, and interviewed NNSA and site officials. In addition, we requested information from NNSA on specific Stockpile Services activities to use as examples in our report. We selected activities based on their financial significance in the Stockpile Services work breakdown structure.
To discuss the implications, if any, of a smaller stockpile on RTBF Operations of Facilities and Stockpile Services costs, we also reviewed documents such as NNSA's Final Complex Transformation Supplemental Programmatic Environmental Impact Statement and NNSA's Infrastructure and Modernization Report to obtain estimates of the nuclear security enterprise's fixed costs. In addition, we interviewed NNSA and site officials. We conducted this work between April 2009 and June 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Congressional spending directives designate funds within NNSA's Weapons Activities appropriation for the RTBF Operations of Facilities subprogram at each of NNSA's eight sites. In addition, a small amount is also directed for Institutional Site Support, which is at NNSA's discretion to prioritize for expenditure (see table 2). Table 3 provides NNSA's RTBF work breakdown structure applicable to all sites for fiscal year 2009, showing three levels of detail. Sites may create further levels of detail for their own management, budgeting, or cost collection. Alternatively, sites may use their own work breakdown structures that they ultimately cross-walk to NNSA's work breakdown structure to report to NNSA program managers on how congressionally directed funds were expended. Each of the eight sites in the nuclear security enterprise has established its own cost accounting practices for how to account for the activities necessary to operate and maintain weapons activities facilities and infrastructure.
While individual M&O contractors may be Cost Accounting Standards (CAS) compliant, differences in their cost accounting practices preclude NNSA from being able to identify the total costs to operate and maintain the facilities and infrastructure essential to achieving Stockpile Support and science, technology, and engineering (ST&E) program missions. These differences include determining (1) which weapons activities facilities and infrastructure individual sites support with RTBF Operations of Facilities funds, (2) which activities included in the RTBF Operations of Facilities work breakdown structure each site supports directly or indirectly, and (3) the additional funding sources sites use to support certain activities included in the RTBF Operations of Facilities work breakdown structure. Consistent with congressional funding direction, each site has discretion to determine which of its facilities and infrastructure will be supported with RTBF Operations of Facilities funds. While NNSA has identified the mission essential facilities and infrastructure at each of its sites, NNSA does not require M&O contractors to pay for essential facilities and infrastructure with RTBF Operations of Facilities funds. For example, LLNL officials told us their top priority for RTBF Operations of Facilities funds is fully supporting safe and secure nuclear facilities operations. In fiscal year 2009, only KCP fully funded all of its essential weapons activities facilities with RTBF Operations of Facilities funds. Table 4 shows the extent to which weapons activities facilities and infrastructure were fully, partially, or not supported with RTBF Operations of Facilities funds in fiscal year 2009 across the nuclear security enterprise. 
While NNSA can identify the activities its contractors classify as direct to the RTBF Operations of Facilities program, NNSA cannot easily identify those activities its contractors classify as indirect but that also are included in the RTBF Operations of Facilities work breakdown structure. Six of the eight sites in the nuclear security enterprise reported to us that in fiscal year 2009 they allocated certain activities included in the RTBF Operations of Facilities work scope into indirect cost pools. These indirect cost pools are often funded through multiple funding sources. For example, NNSA includes utilities and general services, such as electric power and steam supplied to weapons activities facilities, as an activity in its RTBF Operations of Facilities work breakdown structure, but two sites—LLNL and SNL—did not consider utilities costs to be direct to the RTBF Operations of Facilities program in fiscal year 2009. The RTBF Operations of Facilities work breakdown structure includes real property maintenance—maintenance for facilities, facility equipment, and programmatic equipment—when that real property supports multiple, not individual, weapon programs. SNL officials told us that their direct costs to the RTBF Operations of Facilities program for real property maintenance include only the programmatic equipment that provides mission capabilities inside weapons activities facilities. Real property maintenance costs for facilities or facility equipment are indirect. In contrast, LLNL officials told us that real property maintenance costs for programmatic equipment, facility equipment, and facilities themselves may be direct costs to the RTBF Operations of Facilities program, depending on the facility and the nature of the equipment. 
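The accounting effect of these classification differences can be sketched in a few lines. The sites, activities, and dollar amounts below are hypothetical, chosen only to show how the same activity, classified as direct at one site and indirect at another, changes what is visible as a direct RTBF Operations of Facilities cost:

```python
# Hypothetical illustration (not NNSA data): the same $10 million of utility
# and maintenance activity, classified differently at two sites, yields
# different totals reported as direct RTBF Operations of Facilities costs.
site_a = {  # classifies utilities and maintenance as direct to RTBF
    "utilities": ("direct", 4_000_000),
    "real_property_maintenance": ("direct", 6_000_000),
}
site_b = {  # allocates the same activities to indirect cost pools
    "utilities": ("indirect", 4_000_000),
    "real_property_maintenance": ("indirect", 6_000_000),
}

def reported_rtbf_direct(site):
    """Sum only the costs the site classifies as direct to RTBF."""
    return sum(cost for kind, cost in site.values() if kind == "direct")

print(reported_rtbf_direct(site_a))  # 10000000
print(reported_rtbf_direct(site_b))  # 0
```

Both sites performed identical work on the same facilities, yet only one of them reports those dollars against the RTBF Operations of Facilities program, which is why a program-level total built from direct charges alone understates total cost.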
NNSA also includes SPEC costs, which cover purchasing equipment and training staff to operate this equipment, in its RTBF Operations of Facilities work breakdown structure, but there were significant differences across the nuclear security enterprise in how SPEC costs were actually funded during fiscal year 2009. Three sites—KCP, Pantex, and NTS—funded all SPEC costs directly with RTBF Operations of Facilities funds. Another three sites—LLNL, LANL, and SNL—classified SPEC costs as direct and partially paid for these costs with RTBF Operations of Facilities funds. Y-12 did not fund SPEC with RTBF Operations of Facilities funds at all, while SRS reported that it did not spend any money on SPEC activities in fiscal year 2009. Finally, all sites used funding in addition to RTBF Operations of Facilities funds to pay for activities included in the RTBF Operations of Facilities work scope in fiscal year 2009. Consistent with CAS, M&O contractors are allowed to use these additional funding sources as long as their cost accounting practices are disclosed; their costing practices for supporting facilities and infrastructure are consistently applied; the programs supporting facilities and infrastructure benefit from their use; and their practices otherwise comply with applicable cost principles, CAS, and the M&O contract. These additional sources of funding included (1) other Weapons Activities programs that in some instances are congressionally mandated, and (2) programs outside of Weapons Activities, including Defense Nuclear Nonproliferation, Department of Energy (DOE), and other federal agencies. NNSA officials cannot easily identify all of the costs associated with RTBF Operations of Facilities work scope paid for through these other funding sources.
In response to our data collection instrument, site officials identified 11 sources of funding congressionally directed for other Weapons Activities programs and subprograms that they expended, in part, on activities they considered to be included in NNSA’s RTBF Operations of Facilities work breakdown structure. For example, as congressionally directed, LLNL expended funds designated for the Inertial Confinement Fusion and High Yield and the Advanced Simulation and Computing Campaigns to support RTBF Operations of Facilities activities for facilities and infrastructure associated with these programs. LANL expended congressionally directed funds for the Directed Stockpile Work program to support activities included in the RTBF Operations of Facilities work breakdown structure—including some facilities management and support, real property maintenance, and SPEC costs. SRS expended funds congressionally directed for the Tritium Readiness Campaign to support all the activities included in the RTBF Operations of Facilities work breakdown structure at its Tritium Extraction Facility. Y-12 expended funds congressionally directed for the Facilities Infrastructure Recapitalization Program (FIRP) to support RTBF Operations of Facilities activities covering real property maintenance, excess facilities management and disposition, and construction projects.

[Sidebar: NNSA Defense Programs missions supported: high explosive research, development, and testing for the Science and Engineering Campaigns; Advanced Simulation and Computing (computer modeling); and Directed Stockpile Work (detonator surveillance).]

NNSA includes capital equipment, including facility equipment, in its RTBF Operations of Facilities work breakdown structure. However, most M&O sites only partially paid for capital equipment costs in their weapons activities facilities with RTBF Operations of Facilities funds in fiscal year 2009.
Officials from multiple sites, including Pantex and SNL, told us that some capital equipment costs that could be paid for with RTBF Operations of Facilities funds can also be paid for with funds directed for other Weapons Activities programs, such as Stockpile Services, that use the equipment. The exception is KCP, which fully funded its capital equipment costs in fiscal year 2009 with RTBF Operations of Facilities funds. In addition, LANL charged user fees to Weapons Activities programs—such as the pit manufacturing program and the Science and Engineering Campaigns—as well as to other work sponsors that used space inside the laboratory’s plutonium facility.

Congressional spending directives designate funds within NNSA’s Weapons Activities appropriation for the Stockpile Services subprogram. Within the subprogram, NNSA obligates funds to its eight sites for expenditure. In fiscal year 2009, NNSA obligated $866.4 million to its sites to execute Stockpile Services work scope (see fig. 2). According to our cost guide, a work breakdown structure is the cornerstone of every program because it defines in detail the work necessary to accomplish a program’s objectives and promotes accountability by identifying work products that are independent of one another. This provides a basis for identifying resources and tasks for developing a program cost estimate. The ability to generate reliable cost estimates is a critical function, and a program’s cost estimate is often used to establish budgets. NNSA’s sites may create further levels of detail within the work breakdown structure for their own management, budgeting, or cost collection. Our cost guide is a compilation of cost estimating best practices from across industry and government. Among other things, these best practices discuss establishing a product-oriented work breakdown structure, which allows a program to track cost and schedule by defined deliverables.
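To make the cost guide’s idea concrete, a product-oriented structure lets every dollar roll up to a deliverable or capability, with program management and overhead carried inside each product element. The sketch below uses hypothetical WBS elements and dollar figures (in millions); it illustrates the roll-up principle, not NNSA’s actual structure:

```python
# A minimal sketch (hypothetical element names and dollar figures) of
# rolling costs up a product-oriented work breakdown structure: each leaf
# is costed work under a deliverable/capability, and program management
# sits inside each product element so all work activities are captured.
wbs = {
    "Hydrodynamic test capability": {
        "Test execution": 12.0,
        "Facility/equipment support": 7.5,
        "Program management": 1.5,
    },
    "Limited life component design": {
        "Design work": 4.0,
        "Program management": 0.5,
    },
}

def rollup(node):
    """Recursively sum leaf costs (in $ millions) under a WBS element."""
    if isinstance(node, dict):
        return sum(rollup(child) for child in node.values())
    return node

for element, children in wbs.items():
    print(f"{element}: ${rollup(children):.1f}M")
print(f"Total: ${rollup(wbs):.1f}M")
```

Because each element is a capability rather than a function, a cost overrun surfaces against a specific deliverable; in a functionally organized structure the same overrun would be spread across categories like engineering or quality control.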
This allows a program manager to more precisely identify which components are causing cost or schedule overruns and to more effectively mitigate the root causes of overruns. For NNSA, a product may best be thought of more broadly as a capability, since a significant portion of NNSA’s mission is research and development (R&D). Thus, a product-oriented work breakdown structure for NNSA could be focused on the capability to execute a class of experiments, to produce a weapon component, or to conduct specified R&D. Our cost guide emphasizes that a product-oriented work breakdown structure should contain program management and other overhead activities to make sure all work activities are included. In contrast, a functionally based work breakdown structure—for example, one based on manufacturing, engineering, or quality control—would not have the detailed information to reflect cost, schedule, and technical performance on specific deliverables. Table 5 provides NNSA’s work breakdown structure, which applies to all sites for fiscal year 2009 and shows four levels of detail.

[Sidebar: In fiscal year 2009, $59.6 million in Production Support funds was spent to provide tooling and tooling services at sites where production work occurs. Tooling provides production facilities with the tools, parts and accessories, machinery, equipment, and labor needed for production and to maintain production equipment. This work also involves preparation of specifications and designs for tooling and test equipment. The illustration below shows a vacuum calibration system, a piece of specialized equipment used to calibrate/certify vacuum gauges, for which tooling funds support corrective and preventive maintenance.]

Production Support. Production Support is the largest activity group within Stockpile Services, and it pays for activities that enable each site’s production mission, whatever that mission may be.
More specifically, these support activities—such as engineering and manufacturing operations; quality supervision and control; tool, gage, and equipment services (tooling); purchasing, shipping, and materials management; and electronic information systems—enable the production of weapons components and weapon assembly/disassembly, and help support surveillance testing. To this end, NNSA officials characterized Production Support as paying directly for the indirect activities at individual sites that (1) are associated with providing manufacturing support for production processes and (2) support more than one warhead or bomb type. Management, Technology, and Production (MTP). MTP is the second largest activity group within Stockpile Services. According to NNSA, MTP includes activities that (1) sustain and improve stockpile management, (2) develop and deliver weapon use control technologies, and (3) result in production of weapons components for use in multiple warhead and bomb types. In contrast to Production Support activities that are focused on individual sites’ production missions, MTP includes those activities that benefit the nuclear security enterprise as a whole. NNSA officials characterized MTP as supporting a mix of direct and indirect activities. More specifically, among other things, MTP management funds support weapons test data archiving and other shared data systems; MTP technology funds support studies and assessments relating to the safety and security of nuclear weapons; and MTP production funds support the interpretation of the results from surveillance tests, which are used to monitor and evaluate the condition, safety, and reliability of weapons in the stockpile. In addition, certain activities are captured within MTP that NNSA would classify under a specific warhead or bomb type if they were associated with one.
According to NNSA officials, costs for these activities represent a relatively small amount of MTP, which one official estimated at approximately 10 percent of surveillance costs, or about $4.6 million in fiscal year 2009.

[Sidebar: In fiscal year 2009, NNSA and its M&O contractors spent $9. million in R&D Certification and Safety funds supporting the base capability to conduct hydrodynamic and subcritical tests. These experiments improve understanding of weapons materials. Hydrodynamic tests assess the performance and reliability of weapons by using high explosives to detonate mock weapons that contain surrogate rather than fissile materials, to analyze the response of the adjacent materials in the weapon. Subcritical tests use explosives to assess the properties of plutonium under high pressures that stop short of a nuclear detonation. The illustration below shows the Cygnus dual-beam radiographic facility, which provides X-ray imaging of subcritical tests. Cygnus is located in the NTS’s U1a Tunnel Complex, approximately 1,000 feet underground.]

R&D Certification and Safety. R&D Certification and Safety provides the underlying capabilities to mature basic research conducted in ST&E programs. In this sense, R&D Certification and Safety serves as a technology development bridge between research and weaponized technologies. Among other things, R&D Certification and Safety funds support three major activities. First, funds are used to support design work to develop certain limited life weapon components that are used in multiple warhead and bomb types and that must be exchanged on a regular basis because they expire. Second, funds support the specialized facilities, equipment, and personnel to maintain a base capability to perform hydrodynamic tests, which examine the performance of nuclear weapons pits using surrogate materials to replace fissionable materials, and subcritical experiments, which examine the material properties of plutonium.
Finally, funds support the preparation of various types of studies, including those produced annually to report to the President of the United States on the safety, security, and reliability of the stockpile. R&D Support. R&D Support is the smallest of the functional work activity groups in Stockpile Services. R&D Support consists largely of indirect activities that provide administrative and infrastructure support for sites’ R&D missions. These activities include program management for and coordination of Stockpile Services’ many different outputs, R&D quality control, computing hardware for personnel, and financial database maintenance. Plutonium Sustainment. Plutonium Sustainment is the only fully product-oriented activity group in Stockpile Services. While incorporated as an activity group within Stockpile Services, Plutonium Sustainment has its own work breakdown structure that is independent from the other four Stockpile Services activity groups. The Plutonium Sustainment work breakdown structure includes production support, R&D support, and program management activities. According to an NNSA official, this work breakdown structure, which captures work activities associated with pit manufacturing and related R&D—as well as associated indirect and overhead costs—is largely a legacy from when Plutonium Sustainment was an ST&E program instead of part of Stockpile Services. This is markedly different from the other four groups, where production or R&D activities are organized separately from their supporting overhead activities. The same NNSA official said that nearly all Plutonium Sustainment funds are spent at LANL, which is home to the nation’s pit manufacturing capability.
These funds not only support the base capabilities for plutonium R&D and pit manufacturing, but also contribute to the operation and maintenance of the facilities and infrastructure necessary to conduct these activities as well as the actual manufacturing of a limited number of pits each year. In addition to the contact named above, the following staff members made key contributions to this report: Jonathan Gill, Assistant Director; John Bauckman; Allison Bawden; Muriel Brown; Abe Dymond; Eugene Gray; Carol Henn; Alison O’Neill; Timothy Persons; Cheryl Peterson; Rebecca Shea; Vasiliki Theodoropoulos; Jack Warner; and Franklyn Yao.
The National Nuclear Security Administration (NNSA) manages and secures the nation's nuclear weapons stockpile, with annual appropriations of about $6.4 billion. NNSA oversees eight contractor-operated sites that execute its programs. Two programs make up almost one-third of this budget: Readiness in Technical Base and Facilities (RTBF) Operations of Facilities, which operates and maintains weapons facilities and infrastructure, and Stockpile Services, which provides research and development (R&D) and production capabilities. Consistent with cost accounting standards, each site has established practices to account for these activities. The Administration has recently committed to stockpile reductions. GAO was asked to determine the extent to which NNSA's budget justifications for (1) RTBF Operations of Facilities and (2) Stockpile Services are based on the total costs of providing these capabilities. GAO was also asked to discuss the implications, if any, of a smaller stockpile on these costs. To carry out its work, GAO analyzed NNSA's and its contractors' data using a data collection instrument; reviewed policies, plans, and budgets; and interviewed officials. NNSA cannot accurately identify the total costs to operate and maintain weapons facilities and infrastructure because of differences in sites' cost accounting practices. These differences are allowable under current NNSA guidance as long as sites comply with cost accounting standards and disclose their practices to NNSA. The differences among cost accounting practices include the facilities and activities sites support with RTBF Operations of Facilities funds and how sites use other funding sources to support weapons facilities and infrastructure. 
GAO's analysis of sites' responses to a data collection instrument showed that the total cost to operate and maintain weapons facilities and infrastructure likely significantly exceeds the budget request for the RTBF Operations of Facilities program submitted to Congress for fiscal year 2009. NNSA has an effort under way that, if fully implemented, would provide more detail on the total costs to operate and maintain weapons facilities and infrastructure. NNSA does not fully identify or estimate the total costs of the products and capabilities supported through Stockpile Services R&D and production activities. Instead, NNSA primarily identifies the functional activities--such as engineering operations, quality control, and program management--and their costs supported through Stockpile Services and bases its future-year budget requests on the extent to which prior-year budgets were sufficient to execute these functions. In 2009, GAO issued a cost guide that identified using a product-oriented management tool, rather than a functionally oriented one, as a best practice for cost estimating. Using cost guide criteria, GAO's analysis found tracking costs by functions provides little information on the costs of the individual capabilities supported through Stockpile Services. NNSA has an effort under way that, if fully implemented, would provide more detail on the total costs of the products and capabilities supported through Stockpile Services. Reducing stockpile size is unlikely to significantly affect NNSA's RTBF Operations of Facilities and Stockpile Services costs because a sizable portion of these costs is fixed to maintain base nuclear weapons capabilities. The Administration has planned to increase budget requests for NNSA's nuclear weapons program by $4.25 billion between fiscal years 2011 and 2015. 
This planned increase is intended, in part, to invest in and modernize facilities and infrastructure and to ensure that base capabilities are supported such that a smaller nuclear deterrent continues to be safe, secure, and reliable. While base capability costs appear to be relatively insensitive to reductions in the stockpile, without complete and reliable information about these costs, NNSA lacks information that could help justify planned budget increases or target cost savings opportunities.
As we noted in a past report, the NMTC was created in an effort to increase the amount of capital available to low-income communities, facilitate economic development in these communities, and encourage investment in high-risk areas. To achieve these goals, the program allows investors that provide eligible capital to low-income communities and businesses to reduce their tax liability by 39 percent of the amount of the investment over a 7-year period. The process of making an NMTC investment involves several steps and a number of stakeholders. Before applying for an NMTC allocation, the applicant must apply for and be certified as a CDE, which is an entity that manages investments for community development. Once an organization has been certified as a CDE by the CDFI Fund, it is then eligible to apply for an NMTC allocation. Both for-profit and nonprofit CDEs may apply for and receive NMTC allocations (once a CDE is awarded an allocation, it is often referred to as an allocatee). However, only a for-profit CDE can offer NMTCs to investors. Therefore, when a nonprofit CDE receives an NMTC allocation, it must transfer the allocation to one or more for-profit subsidiary CDEs (referred to as suballocatees). NMTC applicants submit standardized application packages in which they respond to a series of questions about their track records, the amounts of NMTC allocation authority being requested, and their plans for using the tax credit authority. The CDFI Fund staff and a group of external reviewers who have experience in business, real estate, and community development finance then review the applications and score them based on the following four areas: (1) community impact, (2) business strategy, (3) capitalization strategy, and (4) management capacity.
The applicants can receive a score of up to 25 points in each of the areas, and CDEs can obtain up to 10 additional “priority points” for demonstrating that they have track records of successfully investing in low-income communities and/or that they intend to invest in unrelated entities. After being reviewed and scored by three different reviewers (and, in some cases, a fourth reviewer if a scoring anomaly exists), the applicants are ranked and NMTC allocation awards are made in descending order of the highest aggregate scores to applicants that met minimum thresholds in each of the four areas. The CDFI Fund makes award determinations in this order until the allocation authority is exhausted. The CDFI Fund also provides a written debriefing to each CDE that does not receive an allocation in order to provide them with reasons their application did not receive an NMTC award and to provide the CDE with suggestions on how to be more competitive for NMTC awards when applying in future rounds. As figure 1 shows, after the allocations are made to the CDEs, investors make equity investments, by acquiring stock or a capital interest, in the CDEs to receive the right to claim tax credits on a portion of their investment. In turn, the CDE must invest “substantially all” of the proceeds into qualified low-income community investments (QLICI). Eligible investments include, but are not limited to, loans to or investments in businesses to be used for developing residential, commercial, industrial, and retail real estate projects; and purchasing loans from other CDEs. Once a qualifying investment has been made in a CDE and the CDE has invested the funds in an eligible low-income community, the investor can claim the tax credit over the course of 7 years. In addition, equity investors may receive returns on their investments in the form of dividends or other income that they receive from the CDE during the period in which they are eligible to claim the credit. 
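The scoring and award ordering described above can be sketched as follows. The applicant names, scores, allocation requests, minimum per-area threshold, and available authority are all hypothetical, and the sketch simplifies the real process (for instance, the CDFI Fund may award less than the amount a CDE requests):

```python
# Hypothetical sketch of the CDFI Fund's award ordering: score four areas
# (up to 25 points each), add up to 10 priority points, drop applicants
# below a minimum per-area threshold, then award in descending order of
# aggregate score until the allocation authority is exhausted.
MIN_AREA_SCORE = 15        # assumed threshold for illustration
AUTHORITY = 150_000_000    # hypothetical allocation authority

applicants = [
    {"name": "CDE-A", "areas": [22, 20, 23, 21], "priority": 8,  "request": 60_000_000},
    {"name": "CDE-B", "areas": [25, 24, 22, 23], "priority": 10, "request": 80_000_000},
    {"name": "CDE-C", "areas": [18, 14, 20, 19], "priority": 5,  "request": 40_000_000},  # fails threshold
    {"name": "CDE-D", "areas": [19, 18, 20, 17], "priority": 0,  "request": 50_000_000},
]

# Apply the minimum-threshold screen, then rank by aggregate score.
eligible = [a for a in applicants if min(a["areas"]) >= MIN_AREA_SCORE]
ranked = sorted(eligible, key=lambda a: sum(a["areas"]) + a["priority"], reverse=True)

# Award in descending score order until the authority runs out.
remaining, awards = AUTHORITY, []
for a in ranked:
    if a["request"] <= remaining:
        awards.append(a["name"])
        remaining -= a["request"]

print(awards)  # ['CDE-B', 'CDE-A']
```

In this sketch CDE-C is screened out despite a respectable aggregate score because one area falls below the threshold, and CDE-D ranks high enough to be considered but finds too little authority left, which is the situation in which a written debriefing would explain how to be more competitive in future rounds.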
The NMTC investor is usually still allowed to claim the NMTC for the full 7-year period even if the business in which the CDE invests defaults on its loans or files for bankruptcy. However, if a business that receives NMTC funds goes bankrupt, the ability of the investor to recover its initial equity investment in a CDE would depend on the assets and financial condition of the CDE as well as the original agreement that the CDE entered into with the investor. The NMTC is a nonrefundable tax credit, meaning that taxpayers do not receive payments for tax credits that exceed their total tax liability. In addition, taxpayers that are eligible to claim the tax credit may sell their investment, along with the right to claim any remaining tax credits, to another investor after the initial NMTC investment. For example, an investor may make an equity investment in a CDE that would allow it to claim the credit and then sell its equity share in the CDE to another investor, thereby transferring the right to claim the remaining credits to this investor. The original investor may choose to sell its equity share in a CDE, and consequently its right to claim the credit, because it does not have a tax liability for that year or for other reasons, such as the timing of the original investment. Once investors begin claiming the credit on their tax returns, three things can trigger a recapture event (meaning that the investor will no longer be able to claim the credit because the investment no longer qualifies for NMTCs). The NMTCs can be subject to a recapture if the CDE (1) ceases to be certified as a CDE, (2) does not satisfy the “substantially all” requirement, or (3) redeems the investment.
In general, a recapture event means that the investors that originally purchased the equity investment and subsequent holders of the investment are required to increase their income tax liability by the credits previously claimed plus interest for each resulting underpayment of tax. Two recent legislative changes have increased the number of areas where NMTC investments can be made. First, the American Jobs Creation Act of 2004 added “targeted populations” to the eligibility criteria for NMTC investments. Second, Congress expanded the NMTC program in 2005, providing an additional $1 billion of allocation authority to be made available to CDEs with a significant mission of recovery and redevelopment of low-income communities in the Gulf Opportunity Zone (GO Zone), which are specified areas in Louisiana, Mississippi, and Alabama that were affected by Hurricane Katrina during 2005. In general, targeted populations were introduced to give CDEs flexibility in making investments serving individuals and groups that reside or work in communities that might not otherwise fall under the NMTC program’s geographically based definition of a low-income community. Currently, regulations defining targeted populations have not been finalized. However, the CDFI Fund and IRS have provided guidance for what qualifies as a targeted population. These guidelines specify that the targeted populations, which are individuals or an identifiable group of individuals, must meet tests to qualify as low-income communities and the businesses or entities receiving the investments must also meet certain criteria. In IRS’s recently provided guidance, the definition of GO Zone targeted populations is similar to the definition for low-income targeted populations, with some differences. A business located within the GO Zone does not automatically qualify for NMTC investment dollars.
First, the GO Zone targeted population need not qualify as low-income individuals as defined above, but rather the population must consist of individuals who lack access to loans or equity investments because they were displaced from their principal residence or lost their principal source of employment because of Hurricane Katrina. Second, the NMTC investment must serve targeted populations in census tracts within the GO Zone that meet certain requirements, including that they contain one or more areas designated by the Federal Emergency Management Agency (FEMA) as flooded or having sustained extensive or catastrophic damage as a result of Hurricane Katrina. Figure 2 illustrates the effect that recent legislative changes have had on the census tracts that are eligible to receive NMTC investments. As the figure shows, geographically, a large portion of the country qualifies for NMTC investment, and there are eligible areas in every state. The figure also shows the area of the GO Zone where NMTC investments can be made in both eligible low-income communities and specified targeted populations as a result of additional allocation authority made available for areas affected by Hurricane Katrina. Congress initially provided a schedule for allocating annual NMTC authority to CDEs for calendar years 2001 through 2007. However, as we also reported in 2004, the CDFI Fund did not make any NMTC allocations to CDEs until 2003 because it needed to complete various start-up tasks for the new program, such as establishing the rules for using allocations. Because the initial allocations were not made until 2003, the CDFI Fund combined the allocation amounts available for 2001 and 2002 and awarded those NMTC allocations in 2003. The allocation amounts designated for 2003 and 2004 were then combined and awarded in 2004. Table 1 shows the current schedule for allocation rounds. Since 2004, allocation awards have been made to CDEs annually. 
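The two-part GO Zone test described above can be expressed as a simple check. The function and field names are assumptions for illustration, not CDFI Fund or IRS terminology, and the sketch condenses the full guidance (for example, the required showing that the individuals lack access to loans or equity investments) into its two headline conditions:

```python
# Hypothetical sketch of the GO Zone targeted-population test; field
# names are illustrative, not CDFI Fund/IRS terms.
def go_zone_eligible(individual, census_tract):
    """Condition 1: the individual was displaced from a principal residence
    or lost a principal source of employment because of Hurricane Katrina.
    Condition 2: the investment serves a GO Zone census tract containing
    areas FEMA designated as flooded or extensively/catastrophically
    damaged. Both conditions must hold."""
    condition_1 = (individual["displaced_from_residence"]
                   or individual["lost_principal_employment"])
    condition_2 = (census_tract["in_go_zone"]
                   and census_tract["fema_flood_or_extensive_damage"])
    return condition_1 and condition_2

print(go_zone_eligible(
    {"displaced_from_residence": True, "lost_principal_employment": False},
    {"in_go_zone": True, "fema_flood_or_extensive_damage": True},
))  # True
```

The second condition is what makes mere location inside the GO Zone insufficient: a tract must also carry the FEMA damage designation for the targeted-population investment to qualify.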
As of January 2007, there have been four completed rounds of NMTC allocations, and the CDFI Fund is receiving applications for the 2007 round of NMTC allocation awards, which will be announced in September 2007. The 2007 allocation awards were originally scheduled to be the last authorized round of NMTC allocation awards. However, in December 2006, Congress passed and the President signed the Tax Relief and Health Care Act of 2006, which extends the NMTC for an additional year (through the end of 2008) with an additional $3.5 billion of NMTC allocation authority. Regulations are also required to be drafted to ensure that nonmetropolitan areas receive a proportional allocation of qualified equity investments. The CDFI Fund has completed four rounds of NMTC allocations, which CDEs are using to attract investment. The investment structures used to complete these deals have taken a variety of forms, including combining debt and equity in limited liability partnerships in order to invest in a CDE—called leveraging. In addition, the CDFI Fund has developed four main data collection systems to track efforts to implement and monitor the expanding NMTC program. Beginning in 2003, the CDFI Fund awarded NMTC allocations of varying amounts to a number of CDEs. The CDFI Fund has awarded 233 NMTC allocations to 179 different CDEs totaling $12.1 billion over the course of the four completed NMTC allocation rounds. As figure 3 shows, the CDFI Fund made awards to the largest number of CDEs in 2003, when the fund awarded NMTC allocations to 66 CDEs, and it made awards to the smallest number of CDEs in 2005 when 41 CDEs received allocations. In its most recent allocation round in 2006, the CDFI Fund made allocations to 63 CDEs for a total of $4.1 billion of tax credit authority. The largest award to a single CDE in this allocation round was $143 million, while the median award was $60 million. The CDEs receiving allocations were able to attract an increasing number of QEIs. 
As of December 2006, investors had made nearly 1,400 QEIs in CDEs, and as more allocation rounds have taken place, the number of QEIs has grown. Relatively few QEIs were made in 2003 when the program was in its early stages, but the number of QEIs increased significantly in both 2004 and 2005. This pattern of growth reflects increases in NMTC allocation authority and increased time for CDEs to establish business relationships with potential investors. In addition, more QEIs were made in CDEs that received allocations in 2003 and 2004 than in CDEs that received NMTC allocations in 2005. As of December 2006, 749 QEIs had been made in first round NMTC allocatees, 478 QEIs had been made in second round NMTC allocatees, and 154 QEIs had been made in third round allocatees. As figure 4 shows, the CDEs were generally able to attract increasing dollar amounts of qualified equity investment. QEI grew from about $140 million of investment in 2003 to over $2.2 billion of investment in 2005, and as of mid-December 2006, CDEs had recorded nearly $1.5 billion in NMTC investment for the year—totaling $5.3 billion over the period. CDEs are required to invest the remaining $6.8 billion of allocation authority awarded to this point during the coming years. At the same time, the size of the QEIs varied considerably across CDEs. According to CDFI Fund data, the largest QEI made through December 2006 was $113 million, while the median QEI during this period was about $1.8 million. The CDEs used this QEI to make investments in 583 qualified NMTC projects totaling $3.1 billion through fiscal year 2005. Nearly all of these investments have been to qualified active low-income community businesses (QALICBs) in qualifying areas. However, according to CDFI Fund data, a small number (about 1 percent) of the investments were made to other CDEs, as permitted under NMTC regulations. 
As more NMTC allocation awards are made and more NMTC investment transactions are completed, additional information will be available about the size and type of NMTC investments. Certain NMTC investment structures may have been a factor in the growth of the program by making NMTC investments more attractive. NMTC investors have used two primary investment structures when making QEIs in CDEs: (1) direct NMTC investment and (2) tiered NMTC investments. As of December 2006, about 54 percent of the $5.3 billion in NMTC investments were made using tiered investment structures. In a direct NMTC investment, an investor makes a QEI in a CDE that reinvests the money in a low-income community. (See fig. 5 for a description of these NMTC investment structures). In tiered investment structures, which include both equity investments and leveraged NMTC investments, investors provide equity or loans to a pass-through entity that combines funds from several sources, and the pass-through entity makes the QEI in a CDE. In both direct and tiered investment structures, equity investors in a CDE are able to claim the NMTC on their tax returns and, after leaving the equity investment in the CDE for the 7 years during which they are eligible to claim the credit, they can redeem their original equity stake in the CDE. In a tiered equity investment structure, the dollars invested in the investment fund consist entirely of equity investments from multiple investors. These investment structures accounted for about 13 percent of NMTC investment as of December 2006. In a tiered leveraged investment structure, a portion of the money being invested in the investment fund comes from equity investors and a portion of the money originates from a debt investment (loan). As of December 2006, about 41 percent of all NMTC investment was made using the leveraged approach. 
The leveraged investment structure may make NMTC investment more attractive to some investors because it allows investors who may not be able to claim tax credits to invest in the CDE and still benefit from the economic returns. The investment structure can be used to separate the tax benefits of the investment from its economic benefits. For example, an investment fund partnership makes a $1 million leveraged qualified equity investment in a CDE, where $400,000 of the money comes from the equity investors in the partnership and the other $600,000 comes from a bank as an interest-only loan to the investment partnership with a balloon payment after 7 years. The CDE that receives the QEI reinvests the money by loaning "substantially all" of the $1 million to a QALICB. In this structure, the economic and tax benefits are separated: the bank receives interest payments on its loan to the investment partnership and, after 7 years, will also be entitled to collect the principal, while the equity investors are entitled to claim the NMTC for 7 years, totaling 39 percent of the total $1 million QEI—not just the $400,000 that was originally invested as equity. NMTC equity investors may also receive a return on their investment, in the form of dividends or partnership income, for example, during the 7-year period while they can claim the credit. However, neither the investment fund partnership nor the underlying investors can redeem any portion of the QEI during this period and still remain eligible to claim the credit. The leveraged investment structure may also offer a more attractive combination of risk and return than direct investment. From the bank's perspective in the example above, this investment structure may be attractive because the loan-to-value ratio is more favorable than it would have been if the debt was not being combined with the investors' equity.
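The credit arithmetic in the example above can be sketched in a few lines. The dollar figures are the hypothetical ones from the text; the 5 percent/6 percent annual schedule is the statutory 7-year credit allowance (the text itself states only the 39 percent total):

```python
# Sketch of the leveraged NMTC example above. Dollar figures are the
# hypothetical ones from the text; the 5%/6% annual schedule is the
# statutory 7-year credit allowance (the text states only the 39% total).

QEI = 1_000_000        # qualified equity investment made by the investment fund
EQUITY = 400_000       # equity investors' share of the fund
LOAN = QEI - EQUITY    # bank's interest-only loan ($600,000)

CREDIT_SCHEDULE_PCT = [5, 5, 5, 6, 6, 6, 6]  # percent of QEI claimable each year

credits_by_year = [QEI * pct // 100 for pct in CREDIT_SCHEDULE_PCT]
total_credit = sum(credits_by_year)

print(f"total credit over 7 years: ${total_credit:,}")            # $390,000
print(f"credit as share of QEI:    {total_credit / QEI:.0%}")     # 39%
# The credit is computed on the full $1 million QEI, not just the equity,
# so it nearly equals the equity investors' $400,000 outlay.
print(f"credit as share of equity: {total_credit / EQUITY:.1%}")  # 97.5%
```

Computing the credit on the full QEI rather than on the equity alone is what makes the leveraged structure attractive to the equity investors: their 7 years of credits recover nearly all of their original outlay.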
The more favorable ratio may compensate the bank for assuming a greater degree of risk, most notably if the business that receives the loan from the CDE defaults on its loan agreement. In that case, the bank's investment is secured only by the equity in the original investment partnership ($400,000 in the example above). From the equity investors' perspective, if the business defaults on its loan, they are still allowed to claim the full amount of the credit—as long as the business that receives the funds is a qualifying business in the year the loan is made. As the NMTC program has grown, investors have used more complicated investment structures, such as tiered investments. According to CDFI Fund data, 81 percent of investors making NMTC investments through December 2006 used tiered (including both equity and leveraged) NMTC investment structures, with investors in more recent years being more likely to use tiered structures. For example, 69.1 percent of investors making QEIs in 2003 and 2004 used tiered structures, while 87.5 percent of investors making QEIs in 2005 and 2006 used tiered structures. The CDFI Fund uses four data collection systems to administer and monitor the NMTC program. All of these data collection systems were operational before they were needed to collect data and to help the CDFI Fund monitor NMTC compliance. These data collection systems include (1) the Allocation Agreement System (AAS), (2) the Allocation Tracking System (ATS), (3) the Community Investment Impact System (CIIS), and (4) the New Markets Compliance Monitoring System (NCMS). Figure 6 illustrates how the AAS, ATS, and CIIS combine to populate the NCMS, which the CDFI Fund uses to monitor CDEs' compliance with their allocation agreements. A brief description of these data collection systems follows. The AAS contains information on the allocation agreements that CDEs enter into with the CDFI Fund.
The AAS was operational as of August 2003 and is primarily used by the CDFI Fund's legal staff to ensure that NMTC contracts are properly executed. The ATS is the primary system that the CDFI Fund uses to monitor QEIs that have been made and track CDEs (allocatees), suballocatees, and investors in the CDEs. The ATS contains information reported by the CDEs on the type of QEI that is made in the CDE, the amount of the investment, the CDE that received the investment, whether the CDE that initially received the allocation transferred the allocation to a suballocatee, and how much of the allocation was transferred. In addition, the ATS contains data reported by CDEs on the equity investors in the NMTC program. The ATS was operational as of November 2003. The CIIS collects information about CDEs and the investments that they make in low-income communities. CIIS data are collected through two reports: the Institution Level Report (ILR) and the Transaction Level Report (TLR). The ILR provides information on the CDEs, as well as their loan purchases and Financial Counseling and Other Services (FCOS) activities, and the TLR provides information on the CDEs' loans and investments in QALICBs and in other CDEs. The CIIS began receiving data in May 2004. The NCMS combines data from the CIIS, ATS, AAS, and other CDFI Fund data collection systems and is used to monitor whether CDEs remain compliant with their allocation agreements. CDFI Fund officials said that the NCMS has been operational since April 2005 and that the system was in place in time to allow the CDFI Fund to monitor first round allocatees' compliance with their respective allocation agreements. Banks and individuals constitute the majority of NMTC claimants when qualified equity investments are originally made. Taken together, banks and individuals accounted for 70 percent of NMTC claimants through 2006. Banks and other corporations that invested in the credit had relatively large net assets.
Individuals who invested in the NMTC had, on average, higher incomes than other taxpayers. The CDEs applied for far more allocation dollars than were available, receiving only about 11 percent of the $107 billion in allocation authority for which they applied. The CDEs made investments in low-income communities, most often in the form of term loans to businesses. The businesses that received these loans used them for a variety of purposes but chiefly to finance new commercial real estate construction and rehabilitation. The communities where the investment projects were located were dispersed across states, and about 90 percent of projects were located in areas designated as "areas of high distress" because of factors such as low median incomes and high unemployment rates; these included businesses in highly distressed areas such as federally designated Empowerment Zones and Enterprise Communities. Although the NMTC program has attracted a variety of types of investors, as table 2 indicates, banks and individuals make up the majority of investors, accounting for 70 percent of NMTC investors. Other corporate investors, such as real estate development firms and insurance companies, and still other types of investors, including estates and trusts, make up the remainder of investors in the CDEs. Banks and other regulated financial institutions also account for the majority of NMTC investment funds. Corporations and individuals that claim the tax credit differ from other taxpayers in several key ways. Corporations investing in the NMTC tend, on average, to have larger total assets. For example, the average total assets for corporations that made NMTC investments was $98.3 billion in tax year 2003, while the average total assets for all corporations was $9.9 million (the average total assets for banks, the most common type of corporate NMTC claimant, was close to $990 million in 2003).
Similarly, individual NMTC investors had larger adjusted gross incomes than other individuals who filed tax returns in tax year 2003. The average adjusted gross income for individual NMTC investors was about $1.2 million, while the average income for all individual taxpayers was about $47,600. In response to our survey, NMTC investors indicated that they decided to participate in the NMTC program for a variety of reasons. As table 3 shows, our investor survey revealed that the majority of NMTC investors indicated that the ability to claim the tax credit (over 75 percent of investors) and obtain a return on their investment (82 percent of investors) played at least a moderately important role in their decision to make an NMTC investment. Investors also indicated that improving conditions in low-income communities (90 percent) and creating and retaining jobs (78 percent) were at least moderately important motivations. About 40 percent of investors also noted that the credit played an important role in helping them remain compliant with other government regulations. Over time, the number of new investors in the CDEs that receive NMTC allocations has increased. For example, 19 percent of investors that made their first QEIs in 2003 were new investors. The CDFI Fund defines new investors as investors making their first investment in a particular CDE. The percentage of new investors increased for investments made in 2004 through 2006, to a high of 69 percent in 2006 (through mid-December). Most investors that have participated in the NMTC program have made only one qualified equity investment. However, CDFI Fund data indicate that it is not uncommon for NMTC investors to participate in more than one QEI. For example, as of December 2006, about 55 percent of NMTC investors had participated in only one QEI, while 33 percent participated in two to five QEIs and 12 percent participated in five or more QEIs.
As NMTC investment structures have become increasingly complex in recent years, the expected rate of return for NMTC investments has decreased. NMTC investments made in 2003 had an average expected rate of return, which includes any return on the equity investment and the tax credit, of 8.2 percent, while investments in later years had an average expected rate of return of only 6.8 percent. This decline could be a result of the greater perceived risk for investments made at the beginning of the program. According to CDFI Fund officials, as the program has developed and investors have gained a better understanding of the manner in which the credit can be used, investors' perceived risk in making NMTC investments has likely declined. A factor contributing to the decline may be that, as table 4 shows, NMTC investors reported that they have become more familiar with the operations and investment portfolios of the CDEs they invested in after making NMTC investments. However, even though the reported expected rate of return on NMTC investments has fallen, investors indicate that they remain concerned about the market risk of NMTC investments and the possibility that businesses that receive NMTC investments could default on their loans. For example, our investor survey indicates that an estimated 86 percent (78.2, 92.0) of investors said that they were at least moderately concerned that their investment would not achieve its expected rate of return, and 81 percent (71.8, 87.9) of investors said that they were at least moderately concerned that the business that received their NMTC investment would default on its loan. For all allocation rounds combined, CDEs have applied for over $107 billion in NMTC allocation and received only about 11 percent of requested allocation dollars.
As table 5 shows, the percentage of dollars awarded in relation to the dollars requested has remained fairly constant during the four allocation rounds, but in each round CDEs have applied for far more in NMTC allocations than the CDFI Fund has had the authority to award based on the NMTC's authorizing legislation. The amount awarded as a percentage of the amount requested varied by at most 6 percentage points over the rounds. In general, CDEs applied for more in allocation authority in rounds where larger amounts were available for allocation. For all allocation rounds combined, the CDFI Fund received 1,078 NMTC applications from CDEs and only 223, or about 22 percent, received allocations. As table 6 shows, between 19 percent and 25 percent of CDEs that applied for allocations received them in each round. CDFI Fund officials indicated that NMTC applications will score particularly well to the extent that, among other things, the applicants commit to: (1) providing products with particularly flexible or nontraditional rates and terms; (2) serving severely economically distressed communities, including communities that have been targeted for redevelopment by other governmental programs; and (3) investing more than the minimally required 85 percent of NMTC proceeds into low-income communities. We observed the application reviewer training session in 2005 and noted that the CDFI Fund encouraged application reviewers to pay particular attention to types of projects and financing terms being proposed in the applications. One example we noted was that CDFI Fund officials instructed NMTC application reviewers to base a portion of each application's overall score on the commitment of the applicant to serve highly economically distressed areas. CDEs that received NMTC allocations have used their allocations to make investments totaling $3.1 billion through fiscal year 2005, primarily in the form of loans to businesses in low-income communities.
According to CDFI Fund data, these loans are used chiefly for constructing and rehabilitating commercial real estate and are also used to purchase fixed assets and to provide working capital for businesses. For example, these loans have been used to finance a range of activities, such as the rehabilitation of historic buildings and the operation of mixed-use real estate development. Other uses include the construction or operation of cultural arts centers, frozen pizza manufacturing, and the construction of charter schools. As figure 7 shows, about 75 percent of the dollar value of these loans and investments was used for investment in commercial real estate. According to data reported by CDEs to the CDFI Fund, most investment (88 percent) made by the CDEs in businesses comes in the form of term loans. According to CDFI Fund data, the most common favorable terms on loans to qualifying businesses are below-market interest rates (80 percent of reported NMTC dollars) and lower-than-standard loan origination fees (56 percent of reported NMTC dollars). As figure 8 illustrates, other types of favorable financial packages that qualifying businesses take advantage of include interest-only loans, loans with longer-than-standard amortization periods, and higher loan-to-value ratios than are traditionally required. Through their allocation agreements with the CDFI Fund, all allocatees are required to use at least some portion of their allocation to serve designated "areas of higher distress," which may have a greater need for economic development funds than areas that meet the NMTC program's minimal requirements. For example, 51 percent of projects serve areas with a median income of less than 60 percent of area median income, and 47 percent of projects serve areas with unemployment rates at least 1.5 times the national average.
In addition, over one-fourth of NMTC projects are located in federally designated Empowerment Zones and 51 percent of all NMTC projects are in Small Business Administration-designated Historically Underutilized Business Zones. NMTC projects are distributed across states. Activities reported through fiscal year 2005 included 583 projects, located in 45 states, the District of Columbia, and Puerto Rico. Table 7 shows the top 10 states organized by the total dollar amount of NMTC investment and the total number of projects. Appendix III contains the full list of the number of NMTC projects by state. The results of our investor survey and statistical analysis indicate that the NMTC may be increasing investment in eligible low-income communities by participating investors, which is consistent with the program's purpose. Increased investment in low-income communities can occur when NMTC investors increase their total funds available for investment or when they shift funds from other uses. One limitation of our survey is that NMTC investors, because they benefit from claiming the credit, have an interest in ensuring that the NMTC program continues to operate. Our survey indicated that most NMTC investors increased the share of their investment budget for low-income communities because of the credit. However, in many cases the survey also indicated that the credit alone may not have been sufficient to justify the investment and that meeting other government regulations may be an important incentive for making NMTC investments. In addition, about two-thirds of investors also indicated that they have a track record of investing in low-income communities, which may mean that some investment was shifted from other low-income community investments. Our statistical analysis suggests that corporations investing in the NMTC are shifting investment funds, while individuals who make NMTC investments may be increasing their overall level of investment.
Neither our statistical analysis nor the results of our survey allow us to determine definitively whether shifted investment funds came from higher-income communities or from other low-income community investments. A complete evaluation of the NMTC program's effectiveness requires determining whether the program's economic and social benefits to low-income communities offset its costs, such as forgone tax revenue and economic distortions evidenced by shifted investment funds. We did not conduct this complete evaluation for this report because sufficient data were not available. The CDFI Fund is currently working with a contractor to develop plans for a comprehensive program evaluation, which may include some aspects of program effectiveness. In response to our survey, most NMTC investors said that they would probably or definitely not have made the same investment with the same terms if they had not been eligible to claim the credit. An estimated 88 percent of investors said that they would not have made the same investment without the NMTC. Of these investors, 75 percent (66.6, 82.7) also indicated that in the absence of the NMTC they would not have made a similar investment in the same community. Moreover, 64 percent (54.9, 72.5) of investors said that they increased the share of their investment budget that is designated for low-income communities because of the NMTC. Most NMTC investors have experience in low-income community investment. Nearly two-thirds of investors have additional investment in low-income communities that does not qualify for the NMTC. Sixty-one percent (53.2, 69.4) of respondents currently had additional investments in low-income communities that were not eligible QEIs, and 29 percent of investors had made one or more investments in other CDEs or similar organizations that mainly serve low-income communities but cannot be used to claim the NMTC.
This interest in low-income community investment is also reflected in survey responses where 90 percent of investors said the goal of improving conditions in low-income communities influenced their decision to invest in the NMTC from a moderate to very great extent. Most investors also indicated that they plan to make additional NMTC investments. The survey responses indicate that in many cases, the credit alone may not have been sufficient to justify the investment. The NMTC can also be packaged with a number of other government incentives to make the investment more attractive. About half of respondents combine the NMTC with at least one other government incentive that can provide additional tax benefits to the investor. As figure 9 shows, state and local tax abatements are the most popular type of government incentive used. Some respondents that packaged the NMTC with other government incentives indicated that their ability to package the credit played an important role in their decision to make the investments, which may indicate that in some cases, the NMTC, in and of itself, is not a strong enough incentive to encourage investment in low-income communities. Meeting other government regulations may also be an important incentive for making NMTC investments. Over 40 percent of the investors reported that they use the NMTC to remain compliant with the Community Reinvestment Act (CRA), which rates depository institutions on their record of helping to meet the credit needs of their entire community. Seventy-one percent (58.3, 80.8) of investors that are required to comply with the CRA use their NMTC investment to help meet their CRA obligations. For investors using the NMTC to meet CRA requirements, 94 percent (83.4, 98.8) view it as very or somewhat important in their decision to make the investment. 
Nearly half of NMTC investors also reported that they make investments eligible for the Low-Income Housing Tax Credit, a tax credit for investment in rental housing targeted to lower-income households. However, less than one-half of the investors that also invest in the Low-Income Housing Tax Credit view it as an alternative to the NMTC. One explanation for this is that these investors may be making other low-income community investments as a means of complying with government requirements such as the CRA. For example, of the survey respondents that participated in both the NMTC and the Low-Income Housing Tax Credit, nearly three-quarters are also required to comply with the CRA. Our statistical analysis of corporations and individuals that claimed the NMTC indicates that some NMTC investment may be shifted from other uses and some investment could be new investment. Statistical analysis of corporations that claimed the NMTC indicates that, in general, NMTC investment funds are not new investment made from an increase in total funds available. When combined with information from the survey, this statistical result may indicate that corporations are shifting NMTC investment funds from other uses. Statistical analysis of individuals who invested in the NMTC indicates that, in the aggregate, NMTC investment funds represent, at least in part, an overall increase in investment levels. Because corporate NMTC investment accounts for the majority of QEIs, the increased investment associated with participation in the program is likely to come primarily from funds shifted from other uses. Statistical analysis of corporations that claimed the NMTC indicates that NMTC investment funds are not likely to represent new overall investment.
To assess whether NMTC investments represent new funds, we compared the growth rate in net assets of corporations that made NMTC investments to the growth in net assets of a similar group of corporations that did not make NMTC investments over time. We selected our comparison group using a stratified random sample of taxpayers based on total assets at the end of the tax period. We drew the comparison groups based on 2000 tax year data because this was the year before the credit could be claimed, when we would not expect the credit to have changed behavior. If NMTC investments represent new investment funds, then we would expect the net assets of NMTC participants to grow faster over time than the net assets of corporations that did not make NMTC investments. Using multiple specifications, our results suggest that corporate claimants' net assets are not growing faster than those of similar corporations that did not make NMTC investments. Rather than new investment, NMTC investment could represent a shift of investment by participating corporations from high- or moderate-income communities to low-income communities. This conclusion follows from combining evidence from the survey of investors with evidence from the statistical analysis. Because our analysis does not show a faster growth rate for NMTC investors, it is possible that the credit has no effect on investor behavior, but instead rewards investors for investment in low-income communities that would have been made in the absence of the credit. However, the effect of the credit may also be to shift investment from other low-income communities or from high- or moderate-income communities.
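The growth-rate comparison described above can be illustrated with a minimal sketch on synthetic data. The actual analysis used confidential tax-return panels and multiple regression specifications, so the group sizes, growth rates, and simple two-sample t-statistic here are illustrative assumptions only:

```python
# Minimal sketch of the growth-rate comparison described above, on synthetic
# data. Group sizes, growth rates, and the simple two-sample t-statistic are
# illustrative assumptions, not the study's actual specifications.
import math
import random
import statistics

random.seed(0)

def simulate_growth(n, mean_growth, sd=0.05):
    """Simulated annual log growth in net assets for n firms."""
    return [random.gauss(mean_growth, sd) for _ in range(n)]

# Under the "no detectable effect" finding, claimants' net assets grow no
# faster than the comparison group's, so both groups share one mean here.
claimants = simulate_growth(200, mean_growth=0.03)
comparison = simulate_growth(800, mean_growth=0.03)

diff = statistics.mean(claimants) - statistics.mean(comparison)
se = math.sqrt(statistics.variance(claimants) / len(claimants)
               + statistics.variance(comparison) / len(comparison))
t_stat = diff / se
verdict = "no detectable difference" if abs(t_stat) < 2 else "claimants differ"
print(f"difference in mean growth: {diff:+.4f} (t = {t_stat:+.2f}) -> {verdict}")
```

A difference in mean growth that is small relative to its standard error is what the "not growing faster" result above amounts to; the real analysis controls for firm characteristics rather than relying on a raw two-group comparison.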
Although it contains some contrary indicators about the effect of the credit, the survey of investors, who benefit from claiming the credit, indicated that most investments would not have occurred in the absence of the credit and that NMTC investors had increased their investments in low-income communities because of the credit. Therefore, we infer that the most likely effect of the credit is that it shifts investment by participating investors into low-income communities from higher-income communities. Further analysis of the components of net assets, total assets, and total liabilities, which are discussed in appendix II, produced inconclusive results regarding the source of the shifted funds. Shifted investment funds, in contrast to new investment funds, indicate that investors are decreasing investment in another asset or assets by some or all of the amount that they invest in the NMTC program. Investors might choose to shift funds for a variety of reasons, including a higher rate of return expected from the NMTC investment, a need to make an investment eligible for meeting CRA requirements, or the ability to establish new business relationships. Regardless of the reason, if funds are shifted as the result of a tax benefit, the shifting potentially creates benefits as well as other economic costs, including the opportunity cost of alternative uses of the funds. These costs and any benefits that accrue to low-income communities should also be considered, in addition to the revenue costs of the program, when evaluating the overall effectiveness of the tax credit. When analyzing the effect of NMTC participation on the net assets of corporations, our results consistently showed no effect. Further, when we tested our results using different data specifications, we were still not able to detect an effect. However, our analysis of the NMTC's effect on net assets for corporations had several limitations.
For example, the amount of NMTC investment might be small enough relative to a corporation's total size that our statistical models could fail to detect a positive effect of the NMTC investment on corporations' asset levels. We attempted to mitigate this problem by basing our analysis on firm-level data, the smallest unit of analysis available, and on growth in assets over time. In addition, we did not have data for total liabilities. We calculated a corporation's total liabilities by subtracting stockholders' equity and retained earnings from the "total liabilities and shareholders' equity" line item on the tax return. Additionally, our data made it difficult to identify which industry NMTC corporate investors participated in, another variable that would have helped strengthen our analysis. Similar analysis of individuals who invested in the NMTC indicates that at least some portion of their investment may represent an overall increase in investment (or "new" investment) rather than investment shifted from other uses. To assess whether NMTC investments represent new funds, we compared the wealth of individuals who made NMTC investments to the wealth of a similar group of individuals who did not make NMTC investments over time. If NMTC investments represent new investment funds, then we would expect the wealth of NMTC claimants to grow faster over time than the wealth of nonclaimants. As table 8 shows, the NMTC is associated with a positive effect on the growth in NMTC investors' wealth. This means that NMTC investors' wealth is growing at a faster rate than that of similar investors who did not make NMTC investments. Thus, according to our analysis of individual NMTC claimants, these investors appear to be increasing their investment in low-income communities because their QEIs represent investments that they would not have made otherwise and these investments are placed into low-income communities according to program rules.
The increase in wealth for individuals can be broken down into its components, such as interest-bearing assets and business assets. The NMTC can have indirect effects on these components of wealth through its effect on after-tax income. In addition to potentially producing ordinary returns on investment (such as dividend payments), part of the return on NMTC investments comes in the form of reduced tax liabilities. Because they are paying less in taxes, NMTC investors have more income available for investing in other types of assets and for consumption. As table 8 shows, our results are consistent with individuals placing at least a portion of this income into interest-bearing assets, such as savings accounts or certificates of deposit. Table 8 also shows that these new NMTC assets appear to take the form of business assets, including partnerships. Increases in business assets may be consistent with typical NMTC investment structures in which many individuals invest through pass-through entities. In our analysis, NMTC participation by individuals was associated with greater growth in wealth, and most variables measuring this association were highly statistically significant. In addition, various checks that we performed were consistent with the results we present above. However, as was also the case with our analysis of corporate investors, several data limitations exist for our analysis of individual investors. For instance, we did not have direct data on asset holdings. Consequently, we estimated wealth based on income streams reported on tax returns. In addition, some assets are particularly difficult to measure. Business assets are especially susceptible to measurement errors because income streams from these assets may vary widely from year to year. Assets not generating reportable returns, such as stock holdings that do not pay dividends in a particular year, do not appear in our estimates for that year.
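One way such wealth imputation might work is the income-capitalization approach sketched below, where each income stream is divided by an assumed rate of return. The rates and field names are illustrative assumptions, not those used in the actual analysis:

```python
# Hedged sketch of imputing wealth from income streams on tax returns, as
# described above. The capitalization rates and field names below are
# illustrative assumptions, not those used in the actual study.

ASSUMED_RATES = {
    "taxable_interest": 0.04,    # maps to interest-bearing assets
    "dividends": 0.02,           # maps to corporate stock holdings
    "schedule_e_income": 0.10,   # maps to business/partnership assets
}

def impute_wealth(income_streams):
    """Estimate asset holdings by capitalizing each income stream:
    income = rate * asset, so asset = income / rate."""
    return {kind: round(income_streams.get(kind, 0.0) / rate)
            for kind, rate in ASSUMED_RATES.items()}

# A year with no dividend income implies zero estimated stock holdings,
# which is the measurement problem noted above for assets that generate
# no reportable return in a given year.
returns_2003 = {"taxable_interest": 8_000, "dividends": 0, "schedule_e_income": 50_000}
print(impute_wealth(returns_2003))
# {'taxable_interest': 200000, 'dividends': 0, 'schedule_e_income': 500000}
```

The zero-dividend case makes the measurement problem concrete: an investor with substantial non-dividend-paying stock would show zero estimated stock wealth in that year, which is why averaging over multiple years helps.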
We have attempted to mitigate this problem by conducting a series of tests, such as using a 3-year average of wealth and asset variables, to confirm the consistency of our results. These tests and data limitations are discussed in more detail in appendix II. A complete evaluation of the program’s effectiveness goes beyond identifying whether the credit increases investment in low-income communities by participating investors. It also requires determining whether non-NMTC investors would have made the same investments had the NMTC investors not made them, and whether the program’s benefits to low-income communities offset its costs, which include forgone tax revenue and potential economic inefficiencies created by shifting investment funds. Fully examining the effectiveness of the NMTC requires addressing at least two main issues: where do NMTC investment funds come from, and do NMTC investments generate economic benefits in low-income communities? Because of data limitations, the relative youth of the NMTC program, and the inherent difficulties of measuring program costs and benefits, a full evaluation is beyond the scope of this report. However, our finding that the NMTC program causes claimants to shift their investment portfolios suggests that the program might generate some additional economic costs, such as the opportunity cost of redirecting investment resources from other, potentially valuable uses. Whether these economic costs are justified depends on the economic benefits generated in the low-income communities and the extent to which those benefits accrue to the targeted population. This highlights the importance of assessing the program’s benefits in eligible communities so that they can be weighed against its costs. The CDFI Fund has hired a contractor to design a comprehensive study to evaluate the NMTC program. 
The study design will be completed by mid-2007, and the study will begin after the design is complete. During the design phase, the contractor will complete five case studies of NMTC investments. The study could potentially evaluate the effect that the NMTC is having on factors such as job creation and economic growth in areas that receive the credit. These issues fell outside the scope of this report. IRS monitors CDEs’ compliance with NMTC laws and regulations and is conducting a compliance study, but it is not yet selecting CDEs to audit in a manner that represents all types of CDEs. The CDFI Fund monitors CDEs’ compliance with their allocation agreements through its data collection systems and, on a more limited basis, by making site visits. The CDFI Fund has tested its data systems and developed policies and procedures for site visits. IRS and the CDFI Fund developed a memorandum of understanding (MOU) in an attempt to clarify the roles and responsibilities of both agencies in ensuring NMTC compliance, and IRS has access to CDFI Fund data. However, additional efforts could help IRS receive information in a more useful format. In addition to IRS and the CDFI Fund, investors and CDEs play a role in ensuring that CDEs remain compliant and the credit is not recaptured. IRS is responsible for ensuring that CDEs and NMTC investors adhere to NMTC laws and regulations. As part of this effort, IRS is conducting a study of CDEs’ compliance with NMTC legislative requirements, focusing on the “substantially all” requirement to invest at least 85 percent of their QEIs within 1 year of receiving the investment. IRS officials said that they chose to focus on the “substantially all” requirement because they believed that this was the area where noncompliance with NMTC provisions was most likely to occur. 
The current compliance study will provide IRS with some information about audited CDEs’ compliance with the “substantially all” requirement, including information about whether funds were invested in a timely manner and whether the investments were made to qualifying businesses. However, IRS did not select first round NMTC allocatees to audit in a manner that likely represents the full range of CDEs. IRS envisioned that its compliance study would focus on verifying that CDEs were in compliance with statutory requirements by examining CDEs’ tax returns and auditing CDEs. IRS has taken steps to develop and implement the compliance study, such as training auditors to conduct NMTC examinations and developing a training manual that provides examiners with background on the NMTC program, key issues to consider when reviewing whether CDEs meet the “substantially all” requirement, and information to familiarize auditors with the investment structures that NMTC investors use. IRS is currently auditing 20 of the 66 first round allocatees. IRS officials said that they initially planned to conduct examinations of early round CDEs using a sample of CDE tax returns that would yield a valid 95 percent level of confidence for the study’s results. IRS expected that all CDEs that received early round allocations would file income tax returns within a year or two of the award date and that, shortly after all the CDEs’ tax returns were filed, IRS would have enough returns to select a valid sample that would yield the desired confidence level. However, IRS changed its selection process because it took more time than expected for CDEs to file tax returns, and the volume of returns filed was not sufficient for IRS to draw a valid sample in a timely manner. 
IRS officials said that the delay for most CDEs occurred because of the lapse of time between the date that the CDE executed agreements with the CDFI Fund and the date the CDE actually collected equity investments and began operations. As a result of the delay in acquiring tax returns for its study, IRS modified its overall compliance strategy in two ways. First, it decided to verify that each allocatee filed a tax return as a way to monitor CDEs’ filing compliance. IRS intends to continue to monitor CDEs’ filing compliance until it is confident that the entities will file as required. Second, IRS discontinued the sample approach and decided to manually review every return that it could identify. IRS initially requested over 80 tax returns from tax years 2003 and 2004. Of the returns that IRS had received by June 2006, it chose to audit CDEs that filed 2004 tax year returns showing some indication of NMTC activity. According to IRS, because of the delays in when CDEs were awarded NMTC allocations and when they began filing tax returns, IRS did not develop specific criteria for deciding which CDEs to audit. An IRS official said that the agency wanted to start its compliance study as soon as possible and the filing time lags created delays. IRS indicated that it will continue with this selection process until there are sufficient returns in the examination stream to produce meaningful results. IRS plans to use the results of the compliance study, which will take several years to complete, to guide its future enforcement efforts. While IRS’s current compliance study will provide the agency with information about CDEs’ compliance with NMTC laws and regulations, the study will have limited value if the audit selection process does not represent the full range of transactions. 
We have previously reported that taxpayer compliance studies should be representative of the population for which compliance is being measured and reasonably designed for developing compliance measures for the taxpayer population as a whole and for subgroups of taxpayers (such as suballocatees in the case of the NMTC program). IRS’s current plan for its compliance study could be improved to adhere to these standards more closely. Given IRS’s intent to rely on the study to guide enforcement efforts, the consequences of a study that is not representative of the population could include lost tax revenue and increased costs through inefficient use of resources. IRS could change its strategy to make its results more useful as its compliance work progresses. IRS plans to audit 15 to 25 CDEs from each allocation round until it feels that compliance levels warrant a reduced number of audits. While it may be too resource intensive to conduct a statistically valid study with fully generalizable results, IRS could work with the CDFI Fund to develop criteria for determining which CDEs to audit. For example, IRS could use CDFI Fund data to categorize CDEs that invest in different types of projects or CDEs that use different types of investment structures for NMTC purposes. As the program expands and more tax return data become available for future rounds, IRS could use the audit results from its initial CDE audits, along with these criteria, to produce compliance study results that are more representative of the entire population of NMTC allocatees. The CDFI Fund is monitoring CDEs to ensure that they remain compliant with their allocation agreements through the New Markets Compliance Monitoring System (NCMS) and, on a more limited basis, site visits. 
The CDFI Fund took steps to ensure that its data collection and reporting systems are reliable and valid, such as testing its data collection systems and the interaction between these systems multiple times before using them to identify CDE noncompliance. These steps help to reasonably ensure that the CDFI Fund data are adequately maintained and properly disclosed in reports. CDFI Fund databases rely on data that CDEs self-report to the CDFI Fund. However, the CDFI Fund has several mechanisms in place that help ensure that the data it collects are accurate and reliable, such as providing written instructions to CDEs on how to report data and providing a help desk for CDEs to call when they have questions about reporting information to the CDFI Fund. In addition, data used to populate the NCMS are subject to several validity checks to ensure accuracy. CDFI Fund officials have also conducted a limited number of site visits to CDEs, one goal of which is to ensure that data are being accurately reported. Our review of the CDFI Fund’s NCMS and site visits indicates that the CDFI Fund has instituted policies and procedures that should allow it to collect the information it believes it needs to meet its compliance program’s objective of identifying CDEs that are no longer compliant with their allocation agreements. According to our Government Auditing Standards, agencies should develop internal controls, including controls that will ensure that programs operate effectively and efficiently and that data collected are reliable and valid. The CDFI Fund uses the NCMS to detect allocatees’ noncompliance with their allocation agreements relating to authorized uses of NMTC allocations, restrictions on the use of NMTC allocations, and other special provisions that are included in an allocation agreement. 
If the NCMS identifies a CDE as being out of compliance with its allocation agreement, the CDFI Fund contacts the allocatee to let it know that the NCMS has identified it as noncompliant. CDFI Fund officials then attempt to determine why the CDE is noncompliant and take the steps necessary to bring the CDE back into compliance with the terms of its allocation agreement. As of January 2007, the CDFI Fund had identified nine CDEs that were not compliant with their allocation agreements and one CDE that was not in compliance with the NMTC program’s “substantially all” requirement. For example, in one case the CDFI Fund determined through data reported in the NCMS that the CDE was serving communities that were outside its approved service area. In this case, the areas that the CDE was investing in still qualified for NMTC investment. In response, the CDFI Fund amended the CDE’s allocation agreement by expanding the CDE’s service area. Six of the noncompliant CDEs were first round allocatees that had not, as required in their allocation agreements, issued 60 percent of their QEIs by the end of September 2006. The CDFI Fund is working with most of these allocatees to correct the problem; however, one first round allocatee has had its NMTC allocation revoked and another CDE returned its allocation as a result of not meeting this requirement. In the case where the CDFI Fund used the NCMS to identify a CDE that was failing the “substantially all” test, the CDFI Fund referred the problem to IRS. In this case, the CDE was able to correct the problem within 6 months, the amount of time CDEs are given to correct failing the “substantially all” test, and further action was not required. The CDFI Fund developed policies and procedures for conducting site visits to CDEs, during which CDFI Fund officials check the validity of data reported by CDEs to the CDFI Fund and obtain additional information about CDEs’ efforts to remain compliant. 
These policies and procedures include criteria for prioritizing which allocatees warrant a site visit, the key information items to collect on a site visit, and a plan for using the information after the site visit is complete. As of November 2006, the CDFI Fund had conducted four site visits, two in 2005 and two in 2006, and indicated that it intends to conduct more visits in the future. A CDFI Fund official indicated that the CDFI Fund has plans to conduct three site visits in fiscal year 2007. So far, the CDFI Fund has visited one multiyear allocatee, one CDE that the NCMS had identified as noncompliant, a CDE that participates in other CDFI Fund programs, and a bank that received an allocation award. The process of conducting a site visit goes through several steps. A site visit can be triggered when a CDE meets one or more of the seven criteria established by the CDFI Fund, which include whether the NCMS identified the CDE as noncompliant and whether the allocatee received awards in multiple allocation rounds. Once the CDFI Fund contacts the allocatee it intends to visit, CDFI Fund officials review the data that the CDE reported to the CDFI Fund and identify any areas of concern that the CDFI Fund will investigate during the site visit. During the visit, CDFI Fund officials review other documents, such as board meeting minutes and financial documents, and conduct interviews with key staff members. CDFI Fund officials also review documentation that the CDE maintains in order to ensure that the data the CDE reported to the CDFI Fund are accurate and reliable. After the site visit is complete, CDFI Fund officials prepare a site visit report using information gathered before and during the site visit. If the CDFI Fund does not find the CDE to be in default with its allocation agreement, no further enforcement action is taken. 
However, if the initial CDFI Fund report finds that the CDE is not compliant with its allocation agreement, the report is passed on to CDFI Fund senior management who then either approve or disapprove the report’s finding. While these site visits do not yield generalizable results, they do supplement the information that the CDFI Fund receives through the NCMS. Unlike IRS, which must audit CDEs to determine if they are compliant with the NMTC’s laws and regulations, the CDFI Fund is able to use data reported by CDEs as its primary mechanism for reviewing CDEs’ compliance with their allocation agreements. As a result, the CDFI Fund is able to use data in the NCMS in conjunction with site visits that do not yield generalizable results in order to detect when a CDE is no longer compliant with its allocation agreement. If a CDE is determined to be noncompliant, the CDFI Fund can restrict the CDE’s access to the NMTC program. According to CDFI Fund officials, if they find a “serious occurrence of noncompliance,” such as a CDE failing to perform any of the transactions that it agreed to perform, the CDE would be found in default. To the extent possible, the CDFI Fund would assist the CDE in correcting the areas in which it was determined to be noncompliant—this could include amending or modifying the CDE’s allocation agreement. If the CDE is not able to come back into compliance, the CDFI Fund could potentially bar that CDE from future allocation rounds, or if the CDE has not yet issued all its QEIs, the CDFI Fund could revoke its ability to make additional investments using its current allocation. Thus far, the CDFI Fund has not had to take these actions against any CDE as a result of the outcome of site visits. IRS and the CDFI Fund have cooperated in their compliance efforts. 
As part of their response to our initial NMTC report, the CDFI Fund and IRS developed an MOU in an effort to clarify the roles and responsibilities of both agencies with respect to monitoring NMTC compliance. IRS and the CDFI Fund have had additional discussions to identify ways for the CDFI Fund to streamline the data that it provides to IRS. While IRS and the CDFI Fund have worked together to monitor NMTC compliance, the two agencies could collect additional information that would help IRS monitor compliance by NMTC investors, an area where neither the CDFI Fund nor IRS has chosen to dedicate resources. According to the MOU completed in 2004, the CDFI Fund is responsible for carrying out the NMTC program’s application and allocation procedures. In addition, the MOU states that the CDFI Fund will permit designated IRS staff to have access to CDFI Fund databases, provide IRS with the relevant findings and assessments of any site visits to NMTC allocatees conducted by CDFI Fund staff, and notify IRS of any potential credit recaptures. Also, on behalf of IRS, the CDFI Fund includes in its database compliance questions, which CDEs answer, regarding recapture and the investments that CDEs have made in low-income communities. If the CDFI Fund determines from the answers to these questions that the CDE may be in danger of having the NMTC recaptured, it is to forward the information to IRS. According to the terms of the MOU, IRS is responsible for the collection and determination of any tax as deemed appropriate. In addition, the MOU notes that IRS is responsible for establishing processes and procedures to ensure that taxpayers are in compliance with the NMTC’s tax provisions, and IRS will provide the CDFI Fund with quarterly information, to the extent permitted by law, regarding any CDEs that fail to meet the NMTC’s legal requirements. IRS and the CDFI Fund have identified data sharing as an area where their cooperation could be improved. 
While IRS has access to CDFI Fund data, according to IRS officials, they have had difficulty selectively obtaining the information that they are most interested in from the CDFI Fund’s data systems. According to IRS officials, a more streamlined format for sharing data between IRS and the CDFI Fund would allow IRS to better target noncompliance. CDFI Fund officials said that they are working with IRS to develop a streamlined compliance data report, and they indicated that IRS has been cooperative in working with them. An IRS official agreed that the two agencies are working together to develop a more user-friendly data report specifically for IRS. IRS is also taking steps to increase the amount of information available about NMTC investors. IRS is in the process of finalizing a new form that will require CDEs to report to IRS the amount of QEI that NMTC investors made at the investment’s original issue. IRS currently does not have these data for all claimants because the CDFI Fund data that IRS currently uses to identify credit claimants does not track claimants in cases when the underlying QEI is sold to another investor. In addition, IRS is finalizing a second form that will require CDEs to notify the original equity investor in an NMTC investment if the credit is being recaptured. With these forms and the CDFI Fund data, IRS will have a complete record of the initial NMTC investors in a CDE and how much they invested. However, further steps could be taken to identify NMTC investors and ensure that only eligible taxpayers claim the credit and that they claim the correct amounts. NMTC investors are allowed to sell their equity share in a CDE, which determines their NMTC eligibility, to other investors after the initial investment has taken place, and neither the IRS nor the CDFI Fund tracks NMTC investors after the original investment. 
IRS officials indicated that the forms they are finalizing cannot be used to track the selling of an investor’s equity share in a CDE because they will not be refiled if the investment is sold to another investor after the original investment. As a result, IRS and the CDFI Fund will not be able to identify all NMTC investors and the amount of QEI that they made if an investor’s equity share in a CDE is sold after the original investment. When evaluating other tax credits, we have noted that IRS is responsible for ensuring that taxpayers claim those tax credits to which they are entitled. If IRS and the CDFI Fund developed ways to identify investors and the amounts they invested, even when NMTC investors sell their equity shares in a CDE, they would be better able to ensure that credits are claimed correctly. Our analysis of IRS and CDFI Fund data indicates that many NMTC investments may be sold after the original QEI is made in the CDE, making it difficult for IRS to identify all eligible NMTC claimants and the amounts that they are eligible to claim. When we compared potential tax credit claimants in IRS’s databases to claimants in the CDFI Fund’s database, we noted that more investors were identified as being eligible to claim the credit in IRS’s taxpayer data than in the CDFI Fund’s data on claimants when a QEI is originally issued. According to IRS, requiring individual investors to report sales of NMTC investments could place an undue burden on taxpayers. However, IRS told us that this would be useful information for its compliance monitoring efforts, both for identifying investors eligible to claim the NMTC on their tax returns and for identifying tax credit investors if IRS is forced to recapture the credits when a CDE is no longer compliant with the “substantially all” requirement. 
The CDFI Fund already collects information from CDEs in its database identifying the initial investors and how much NMTC eligible investment has been made by investors that did not participate in tiered equity or leveraged NMTC transactions. Further, an NMTC investor with prior experience investing in CDEs and a representative of a CDE said that, in their experience, CDEs are already able to identify subsequent holders of NMTC qualified equity investments when one NMTC investor sells its equity share in a CDE to another investor, and CDEs could potentially report that information to the CDFI Fund or IRS. In cases where investors in a partnership that has NMTC investments sell their share in the partnership, it may be more difficult for CDEs to identify who the correct tax credit claimants would be, although the CDE would still know which partnerships own QEI in the CDE. Currently, neither IRS nor CDFI Fund data make it possible to completely identify who is eligible to claim the tax credit and how much they are entitled to claim. As more NMTC investments are resold and complicated investment structures become more common, limits on IRS’s ability to monitor investor compliance could make IRS vulnerable to a loss of tax revenues caused by taxpayer noncompliance, fraud, and abuse, and it could become increasingly difficult for IRS to identify tax credit claimants if it is forced to recapture the credit. If CDEs reported to IRS or the CDFI Fund more complete information about initial NMTC investors and subsequent sales of the equity shares in the CDE that are linked to NMTC eligibility, IRS would have better information with which to track investor compliance. Investors that responded to GAO’s NMTC survey indicated that they are concerned about the possibility of the credit being recaptured and that they play an active role in ensuring that CDEs remain compliant with the laws and regulations that apply to the NMTC program. 
An estimated 82 percent of our survey respondents (confidence interval: 74.0 to 89.0 percent) indicated that they are “moderately” to “very highly” concerned about the possibility that the credit could be recaptured. Nearly all investors, 97 percent, reported that they make some effort to ensure that CDEs remain compliant so that the investors avoid recapture. About 72 percent of the survey respondents said that they have regular discussions with CDEs, and 84 percent said they receive regular reports from CDEs. Nearly one-quarter of NMTC investors said that they audit the CDEs in which they made NMTC investments. Figure 10 shows the activities that NMTC investor survey respondents undertake to monitor CDE compliance. The purpose of the NMTC program is to encourage investment and development in low-income communities. Our analysis indicates that the program may be accomplishing part of that objective. In our investor survey, most participating investors said that they increased investment in low-income communities because of the credit. The statistical analysis also showed an increase in investment, with individuals adding new investment and corporations shifting funds from other uses. However, some of the survey evidence may be less consistent with the credit increasing investment (e.g., the prior experience of most NMTC investors with low-income community investment) and, because of data limitations, our statistical evidence may only establish an association between the credit and increased investment, not that the program causes the increase. In any case, the indication that the program increases investment is not sufficient to support conclusions about the program’s effectiveness, nor is the fact that the credit shifts investment an indicator of a lack of effectiveness. For example, more information is needed about the economic and social benefits that the low-income communities receive from the investment. 
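Intervals like the one reported for the 82 percent estimate above can be approximated with a standard normal-approximation interval for a survey proportion. This is a simplified sketch with an assumed respondent count of 100; the respondent count is hypothetical, and the published bounds reflect the actual survey design rather than this textbook formula:

```python
import math

def proportion_interval(p_hat, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a
    survey proportion at roughly the 95 percent level.
    p_hat: estimated proportion; n: number of respondents."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

# Hypothetical: 82 percent of an assumed 100 respondents
low, high = proportion_interval(0.82, 100)
```

With these assumed inputs the sketch yields bounds of roughly 74 to 90 percent, in the same range as the interval reported above.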
Such information is only now likely to become available, given that the program’s implementation was delayed. IRS and the CDFI Fund are implementing a compliance monitoring system in the context of a program that is growing and that is attracting investors that use increasingly complex and sophisticated investment structures. As IRS moves forward with its NMTC compliance study, more rigorous development of criteria for selecting which CDEs to audit could help it better identify the most common compliance issues facing CDEs. Additionally, more complete information on who is eligible to claim the tax credit and the amounts that they are eligible to claim would be useful to IRS in helping ensure that only eligible taxpayers claim the NMTC, and a complete list of eligible NMTC claimants would assist IRS should it need to recapture NMTCs. To ensure that IRS is reviewing the full range of NMTC transactions and that the conclusions of its compliance study are more representative of all CDEs with NMTC allocations, we recommend that IRS use CDFI Fund data and the results of its current NMTC compliance study to develop criteria for selecting which CDEs to audit as part of its future compliance monitoring efforts. Additionally, to ensure that eligible taxpayers claim the correct amount of NMTC on their tax returns and that IRS is able to identify all tax credit claimants in the event the credit is recaptured, we recommend that IRS work with the CDFI Fund to further explore options for cost-effectively monitoring investor compliance and to develop a way to identify NMTC claimants, and the amount of QEI that each investor made, even in instances where the original investor sells its equity share in a CDE. We received written comments on a draft of this report from the Acting Director of the CDFI Fund and the Commissioner of Internal Revenue; their comments are reprinted in appendices IV and V. Both IRS and the CDFI Fund agreed with our recommendations. 
We also incorporated technical corrections to the draft report that we received from both IRS and the CDFI Fund where appropriate. In its response to the draft report, the CDFI Fund characterized GAO’s study as indicating that the NMTC has been a highly successful tool for increasing the flow of investments into low-income communities. While our findings do suggest that the NMTC appears to increase investment by participating investors in low-income communities, we also note that further information is needed to fully assess the effectiveness of the NMTC program. We are sending copies of this report to the interested congressional committees, the Commissioner of Internal Revenue, the Director of the Community Development Financial Institutions Fund, and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report or would like additional information, please contact me at (202) 512-9110 or at [email protected]. Major contributors to this report are acknowledged in appendix VI. Based on consultations with staff at cognizant congressional committees, the objectives of this report are to (1) describe the status of the New Markets Tax Credit (NMTC) program; (2) profile the characteristics of NMTC investors, the Community Development Entities (CDE) that receive NMTC allocations, and the businesses and communities that receive NMTC investments; (3) assess how effective the NMTC has been in bringing new investment to low-income communities by the investors that have participated in the program; and (4) assess the steps that the Internal Revenue Service (IRS) and Community Development Financial Institutions (CDFI) Fund are taking to ensure CDEs and investors are complying with the NMTC and evaluate how effective these steps have been. 
In order to accomplish these objectives, we used a number of methods of analysis. We met with officials from the CDFI Fund and IRS. We collected documents on the program status and efforts to monitor NMTC compliance. We also analyzed data from the CDFI Fund on the CDEs and their investment in low-income communities and tax return data from tax years 1997 through 2004 for investors in the NMTC program. We used these data to report summary statistics that profile the participants in the program and to conduct statistical analysis that measures the effect of the NMTC on investment. We also surveyed investors in the NMTC program in order to provide additional information on the effect of the credit and characteristics of the investors. To evaluate investment in the CDEs by NMTC investors, we used data from the CDFI Fund’s Allocation Tracking System (ATS) on investments reported through mid-December 2006. We used the ATS data to report on the type and size of qualified equity investment (QEI) made in the CDE and the CDE that received the investment. We also used the ATS to analyze the equity investors in the NMTC program. To report on qualified low-income community investments (QLICI) from the CDE to the corresponding qualified active low-income community business (QALICB) we analyzed data from the Community Investment Impact System (CIIS). Specifically, we used data from the CIIS Transaction Level Report (TLR) for fiscal years 2003 through 2005, which provides information on each transaction made as part of a QLICI. To assess the reliability of the ATS and the TLR data sources, we reviewed the CDFI Fund’s data quality control procedures and subsequently determined that the data were sufficiently reliable for our purposes. We also reviewed tax data on NMTC investors from IRS’s Individual Returns Transaction File (IRTF) and Business Returns Transaction File (BRTF). 
We identified NMTC claimants using data on original claimants (at the time the QEI was made) from the CDFI Fund’s ATS and used their tax return information to determine how NMTC investors differ in size from all taxpayers. In cases where we could not locate a corporation’s tax return because the NMTC investor was a subsidiary of a larger parent corporation, we used IRS’s National Account Profile to link the subsidiary to its parent corporation. In these cases, the parent corporation’s tax return was used in our analysis. In addition, because original claimants may sell their investment, and along with it their NMTC credit, we identified further claimants as those individuals or corporations that indicated they were eligible to claim the NMTC on their tax returns. This information came from IRS’s IRTF or BRTF on the New Markets Tax Credit Form (Form 8874) or as part of the General Business Credit (Form 3800). To assess the reliability of the IRS data sources, we reviewed the IRS’s data quality control procedures and subsequently determined that the data were sufficiently reliable for our purposes. To obtain information from investors on the effectiveness of the NMTC, we designed and implemented a Web-based survey to gather information on the investors’ motivations and methods. We used CDFI Fund data and interviews with investors to determine the proper points of contact for NMTC investors. Our survey population consists of NMTC claimants and their proxies for cases in which the individual claimant was not principally responsible for deciding to make the NMTC investment. In some cases, one person was designated as the contact point for a group of investors responding to the survey. The survey asked a combination of questions that allowed for open-ended and close-ended responses. 
Because some investors invested with more than one CDE and because not all investors participated in tiered or leveraged investment structures, the instrument was designed with skip patterns directing investors to comment only on the prepopulated CDE and type of investment structure that they utilized. Therefore, the number of survey respondents for each question varied depending on the number of CDEs in which the investor made a QEI and whether the investor had used tiered or leveraged structures. We pretested the content and format of the questionnaire with knowledgeable investors. During the pretest, we asked the investors questions to determine whether (1) the survey questions were clear, (2) the terms used were precise, (3) the questionnaire placed an undue burden on the respondents, and (4) the questions were unbiased. We also assessed the usability of the Web-based format. We received input on the survey from a CDFI Fund official and made changes to the content and format of the final questionnaire based on pretest results. The survey was conducted using self-administered electronic questionnaires posted on the World Wide Web. We sent e-mail notifications to investors beginning on August 2, 2006. We then sent each potential respondent a unique password and user name by e-mail to ensure that only members of the target population could participate in the appropriate survey. To encourage respondents to complete the questionnaire, we sent e-mail messages to prompt each nonrespondent approximately 2 weeks and 3 weeks after the initial e-mail message. We also arranged for contract callers to do phone follow-ups from September 6 to September 8, 2006. We closed the survey on October 3, 2006. Because we attempted to collect data from every investor in the population, there was no sampling error. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. 
For example, differences in how a particular question is interpreted, the sources of information available to respondents, how the responses were processed and analyzed, or the types of people who do not respond can influence the accuracy of the survey results. We took steps in the development of the surveys, the data collection, and the data analysis to minimize these nonsampling errors and help ensure the accuracy of the answers that were obtained. A second, independent analyst checked all the computer programs that processed the data. The response rate for this survey was 51 percent. We conducted a nonresponse bias analysis by looking at the response rates for eight cells defined by the four types of investors surveyed (financial institutions, individuals, nonfinancial corporations, and other) and the size of the investor’s total assets (in the case of corporations) or adjusted gross income (for individuals). We collected this information primarily from the investor’s most recent tax return filed with IRS. In cases where we could not identify a tax return (primarily because the corporation had recently been acquired or merged with another corporation), we relied on public information on the corporation’s total assets from its most recent annual report. Investors were placed in one of two size categories, either less than the median or greater than the median. Individuals with adjusted gross income below the median for NMTC claimants had the highest response rate, at 63 percent, followed by financial institutions, with response rates of 56 percent (income above the median) and 53 percent (income below the median). Individuals with incomes above the median had the lowest response rate, at 32 percent. Differential response rates across analytic subgroups raise the possibility of nonresponse bias.
If the respondents provided different responses than the nonrespondents, the survey estimates would be biased. We have weighted the respondents by type and income to reduce this source of nonresponse bias. Unfortunately, there may be other sources of nonresponse bias that we are unaware of and unable to adjust for. A statistician used the data on size and type of investor to create weights that allowed us to project the survey responses to the entire population by assuming that the nonrespondents would have answered the questions as the respondents did. We have treated the respondents as a stratified, random sample and calculated sampling errors as an estimate of the uncertainty around the survey estimates. Ninety-five percent confidence intervals are given in parentheses after the estimates. We are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. We also used IRS tax data to develop statistical analysis that measures the effect of the NMTC on investment and addresses the question of whether NMTC investments represent new or shifted funds. Using the tax returns of NMTC investors as determined from CDFI Fund and IRS data (see above) we used a multistage sampling methodology to draw a comparison group of tax returns. These methods are more fully described in appendix II. To develop our statistical methodology, we relied on academic journal articles and interviewed experts in the research fields of individual savings and wealth and corporate taxation. To study the effectiveness of the steps that IRS and the CDFI Fund are taking to ensure CDEs and investors are complying with the NMTC and the effectiveness of these measures, we met with officials from the CDFI Fund and IRS. We also collected documents on the program status and efforts to monitor NMTC compliance. 
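As an illustration of the cell weighting described above, the following sketch assigns each respondent a weight equal to the cell's population count divided by its respondent count, then computes a weighted estimate that projects responses to the full population. The cell names, counts, and response values are invented for illustration and are not the survey's actual figures.

```python
# A sketch of the nonresponse weighting described above. Cell labels,
# population counts, and responses are invented for illustration.

def cell_weights(population, respondents):
    # Each respondent in a cell "stands for" N_cell / n_cell investors.
    return {cell: population[cell] / respondents[cell] for cell in population}

def weighted_mean(responses, weights):
    # responses: list of (cell, value) pairs from individual respondents.
    total_weight = sum(weights[cell] for cell, _ in responses)
    return sum(weights[cell] * value for cell, value in responses) / total_weight

population = {"fin_above_median": 40, "fin_below_median": 60}   # all investors
respondents = {"fin_above_median": 20, "fin_below_median": 30}  # who answered
w = cell_weights(population, respondents)
```

In this sketch each respondent carries a weight of 2, so estimates are projected as if nonrespondents in a cell would have answered as that cell's respondents did, which is the assumption stated in the text.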
We performed our work at GAO Headquarters and the IRS office in New Carrollton, Maryland, from July 2006 through December 2006 in accordance with generally accepted government auditing standards. This appendix describes our data and methodology for assessing whether participation in the NMTC program affects investment by NMTC investors in low-income communities. The NMTC program may affect investment by increasing the overall level of investment (i.e., creating “new” investment) or by causing NMTC investors to shift investment from other uses to investment eligible for the credit. The methodology that we use to detect these changes in investment follows the methodology used in the retirement savings literature. This literature generally compares the wealth or financial assets of participants in retirement savings plans to that of nonparticipants to detect any effect of participation on savings. In our assessment of the NMTC program, we compare the wealth or assets of NMTC program participants to that of a group of similar nonparticipants to detect any effect on investment. Our statistical analysis of the effectiveness of the NMTC program in stimulating investment depends on the distinction between new and shifted investment. If our analysis detects new investment, this outcome is consistent with program goals because it may indicate increased investment in low-income communities that would not have occurred in the absence of the credit. If we do not detect new investment, it is possible that the credit has created no change in behavior and investors are just receiving a subsidy for investments that they would have made anyway, which is not consistent with the goals of the program. However, the investment could also be shifted from other communities. The implications for the effectiveness of the program in the case of shifted investment are more ambiguous. 
It could mean that (1) the credit has induced investors to shift investments from assets invested in other low-income communities, which means that although the credit has generated investments in projects that would not have occurred otherwise, it has not increased investment in low-income communities, or (2) NMTC investments represent funds shifted from higher income communities. The first outcome is not consistent with the NMTC’s broader goal of increasing investment in low-income communities as a whole. The second outcome is more consistent with program goals because, as with new investment, it may indicate increased investment funds available to low-income communities. Finally, in the case of both new and shifted investment, NMTC investment may reduce investment by non-NMTC investors (called crowding out) which is also inconsistent with the broader goal of the program. Our data and methodology do not allow us to detect crowding out, and for this reason, we confine our analysis to the effect of the credit on the investment behavior of participants in the NMTC program. A limitation of our statistical analysis is that in the case of no detected change in the overall level of investment, we cannot distinguish between the possible types of shifting or between shifting and the possibility that there has been no change in investment behavior. However, if we combine evidence from our survey of investors with evidence from our statistical analysis, our analysis may provide some indication that the effect of the program on investment in low-income communities by NMTC investors is shifted investment.
The survey of investors that benefit from the tax credit indicated that most investments would not have occurred in the absence of the credit (inconsistent with the notion that the credit has no effect on investor behavior), and that NMTC investors had increased their investments in low-income communities because of the credit (inconsistent with the first shifting outcome above). Therefore, we use the second shifting outcome described above to interpret our statistical results in cases where we detect no overall increase in the level of investment by NMTC investors. We identified NMTC investors using both CDFI Fund data and IRS data. We collected data on original claimants (at the time the QEI was made) from the CDFI Fund. We also identified investors from IRS’s Returns Transaction File data as those claiming a positive amount for the credit on their tax returns in tax years 2001 through 2004. There were differences in the number of claimants identified from the two sources, with the IRS data identifying more investors. The source of these differences is unclear, as they could indicate incomplete CDFI Fund data, missing taxpayer identification numbers (TIN) in the CDFI Fund data, or a large turnover in credits. In the latter case, investors may not be responding to the incentives of the credit themselves but to the terms constructed by the original investor. However, this is not necessarily the case, as some investors we spoke with who had purchased the credit from the original investor indicated that they intended to participate but that the original investor was necessary due to timing issues. Because of the uncertainty over which set of investors is the most relevant for our analysis, we estimated results using both the full sample (IRS and CDFI Fund claimants) and CDFI Fund claimants only.
Our conclusions were the same for both groups; however, we are only reporting results for the full sample of NMTC investors identified in IRS and CDFI Fund databases. Our analysis of these data indicated that NMTC claimants were generally higher income (individuals) or had higher total assets (corporations) than the average taxpayer. This prompted us to identify our basic comparison group using a stratified random sample of taxpayers based on adjusted gross income for individuals and total assets at the end of the tax period for corporations. We oversampled taxpayers with high incomes and high total assets relative to an unstratified random sample from the same populations. We used quintiles to stratify our sample and drew a random sample of about 4,000 returns per quintile. We chose our quintiles and drew the comparison groups based on 2000 tax year data because this was the year before the credit could be claimed and in that year we would not expect any changes in behavior due to the credit. For individuals, we collected all available data from Form 1040 and information from Schedules C and F to form a panel of taxpayers for tax years 1997 through 2004. The data include more than 24,000 individual tax filers, and about 80 percent of filers (including NMTC investors) are in the panel for all 8 years. For corporations, we used income data from Form 1120 and balance sheet data from Schedule L to form a panel of corporate taxpayers for tax years 1997 through 2004. These data include more than 14,000 corporate tax filers, and about 56 percent of corporate filers were present in at least 7 years. (Forty-eight percent were present for all 8 years and 57 percent of NMTC investors were in all years.) Both individual and corporate NMTC investors were identified using TINs contained in CDFI Fund data and the New Markets Tax Credit Form (Form 8874) or as part of the General Business Credit (Form 3800) in the IRS data.
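The stratified draw described above can be illustrated with a short sketch: records are ranked on a year-2000 size measure, partitioned into quintiles, and a fixed number of returns is drawn from each quintile, which oversamples the sparse high end relative to a simple random sample. The taxpayer identifiers and dollar values below are synthetic.

```python
import random

# A sketch of the stratified quintile draw described above; taxpayer ids and
# dollar values are synthetic stand-ins for year-2000 AGI or total assets.

def quintile_strata(size_by_id):
    # Rank records by the year-2000 size measure and split into 5 strata.
    ranked = sorted(size_by_id, key=size_by_id.get)
    k = len(ranked) // 5
    return [ranked[i * k:(i + 1) * k] for i in range(5)]

def stratified_sample(size_by_id, per_stratum, seed=0):
    rng = random.Random(seed)
    sample = []
    for stratum in quintile_strata(size_by_id):
        sample.extend(rng.sample(stratum, min(per_stratum, len(stratum))))
    return sample

agi_2000 = {f"tp{i}": i * 1000 for i in range(100)}  # 100 synthetic taxpayers
draw = stratified_sample(agi_2000, per_stratum=4)    # 4 returns per quintile
```

Because the same number of returns is drawn from each quintile, the top quintile contributes as many sampled returns as any other, even though high values are rare in the underlying population.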
The total number of NMTC claimants identified from these sources was 753. We also estimated asset values for individuals because, unlike IRS balance sheet data for corporations, the IRS data for individuals were limited to income streams and did not include asset levels. We followed the methods used in the Survey of Consumer Finances (SCF) to estimate asset holdings using income streams and rates of return. We also expanded on the SCF approach by using more sophisticated modeling to develop estimates of home equity. Rather than attribute to each household the median home value within its income group (as the SCF does), we estimated home equity using the November 1999 Wave (12) of the 1996 Survey of Income and Program Participation. Our controls included total income, age, marital status, and region. We then applied these estimated coefficients to tax return information on total income, age, filing status, and region of residence to generate estimates of home equity for each household using 2000 tax data. Negative values were set to zero, and the consumer price index (CPI) (research series) was used to adjust the year 2000 estimates for earlier and later years. We assessed the effects of NMTC participation by comparing the level of assets and growth in assets of NMTC participants with the level and growth in assets of corporations and individuals that did not participate in the NMTC program. We used regression techniques to compare the level of assets of NMTC investors and the relevant comparison group. The results of these models indicate whether the assets of NMTC claimants are higher than those of our comparison group controlling for other individual and corporate characteristics. However, it is possible that this approach is simply picking up the likelihood that NMTC claimants systematically have higher assets than their counterparts (despite our efforts to choose an appropriate comparison group using a stratified random sample). 
Therefore, we used several methods, including regression and propensity score techniques, to compare the growth of assets over time. Differences in growth rates between NMTC investors and the comparison group do not depend on differences in the level of assets. Our baseline model for corporate investors is a fixed effects model of the following form: Y_it = X_it β + µ_it. For corporate investors, Y_it represents the log of total assets, total liabilities, or net assets; X_it represents control variables, which include the lag of net assets, the NMTC participation dummy, year dummies, and region dummies; and µ_it represents a random error term. Additional control variables are not used because they are included in the fixed effect. These variables include corporate-level characteristics, such as industry, that do not change over time. Statistical analysis of this baseline model indicates that corporate NMTC investment funds are more likely to represent investment funds shifted from other uses. Although there was some evidence that NMTC investors have higher levels of net assets than those in our comparison group, this result was not robust over different specifications of the model. On the other hand, our analysis of growth rates showed no statistically significant effect of NMTC investment status on the growth of net assets. This result means that NMTC investors are not investing at rates different from non-NMTC investors. Unlike the case of asset levels, this result was robust across several specifications involving regression and propensity score methods, as indicated in table 9. In addition, the result was qualitatively the same for each quintile, when we used only years 2001 through 2004 in the analysis, when we used median regression, and when our analysis included only banks. Further analysis included using instrumental variables for predicting participation in the NMTC.
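The within estimator behind a baseline fixed effects model of this kind can be sketched in a few lines: demeaning the outcome and regressors within each entity removes time-invariant characteristics (the fixed effect), and ordinary least squares on the demeaned data recovers the slope. The two-entity panel below is synthetic, with a single regressor for clarity.

```python
# A sketch of the within (fixed effects) estimator: demeaning each entity's
# observations sweeps out time-invariant characteristics, and OLS on the
# demeaned data gives the common slope. The panel below is synthetic.

def within_slope(panel):
    # panel: dict entity -> list of (x, y) observations over time.
    pairs = []
    for obs in panel.values():
        mean_x = sum(x for x, _ in obs) / len(obs)
        mean_y = sum(y for _, y in obs) / len(obs)
        pairs.extend((x - mean_x, y - mean_y) for x, y in obs)
    sxy = sum(x * y for x, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    return sxy / sxx

# Entity B sits at a higher level (its fixed effect) but shares the slope of 2,
# which the within estimator recovers while ignoring the level difference.
panel = {"A": [(1, 2), (2, 4), (3, 6)],
         "B": [(1, 12), (2, 14), (3, 16)]}
```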
However, we did not find important differences between participants and nonparticipants based on location and participation in other general business credits. We concluded that the problem of endogeneity may not be a significant issue for corporations because corporate participants are likely to be exposed to a similar set of investment options as nonparticipants and individual corporate characteristics that affect participation are captured in the fixed effect. We also attempted to identify the source of the shifted investment funds by dividing net assets into components, total assets and total liabilities. However, these results were inconclusive as they were not consistent enough to reach any strong conclusions. A limitation of our analysis of corporations is that the amount of NMTC investment might be small relative to a corporation’s total size. This means that our statistical models could fail to detect a positive effect of the NMTC investment on corporations’ asset levels even if such an effect exists. We attempted to mitigate this problem by analyzing firm-level data, the smallest unit of analysis available, and growth in assets over time. We assessed the effect of NMTC participation on level and growth of assets for individuals in a manner similar to the analysis for corporations. Our baseline model for individual investors is a fixed effects model of the following form: y_it = γN_it + X1_it β + ν_it, where y is the dependent variable, wealth, for household i at time t; N is an indicator for NMTC investment (which is endogenous, i.e., correlated with the error term); X1 is a set of exogenous control variables; γ and β are coefficients; and ν_it is an error term. However, unlike the analysis of corporate investors, we analyzed the effect of NMTC on individuals by estimating an instrumental variables version of the baseline model to account for possible endogeneity of the NMTC participation variable.
We concluded that this problem is likely to be worse for individuals than for corporations because individuals are less likely to have the same information about the various business tax incentives, so that the decision to participate is not random and likely to be correlated with other explanatory variables. We chose as our instrumental variables the dollar amount of allocation in the state of residence and the presence of other general business credits. These variables are likely to be highly correlated with NMTC participation but not with levels of household wealth. To implement the instrumental variables model, we first estimated N as follows: N_it = X1_it β + X2_it λ + ν_it, where X2 contains our instrumental variables and the other variables are defined as in the baseline model. This regression is used to predict NMTC participation using presence of a general business credit deduction and the cumulative NMTC allocation in state of residence as instrumental variables. We then estimated the baseline fixed effects model with Y_it as the log of wealth and X_it as control variables, which include balance due, an NMTC participation dummy (instrumented), year dummies, and region dummies. In order to test the effect of NMTC participation on the components of wealth, we also ran regressions with Y_it as the log of business assets, real estate assets, dividend assets, and interest bearing assets. Like wealth, these asset levels were measured in thousands of dollars and adjusted into constant dollars using the CPI research series. The results of this analysis for asset levels of individuals are presented in table 10. The coefficient for NMTC investor in the wealth column indicates that the log of wealth (in thousands of dollars) is significantly higher than for noninvestors. The coefficients of these regressions should not be used to generate numeric estimates of the magnitude of the effect that the NMTC has on asset levels.
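The two-stage logic described above can be sketched as follows: the endogenous participation indicator is first regressed on an instrument, and its fitted values then replace it in the outcome regression. This is a minimal one-instrument, one-regressor illustration with synthetic data, not the estimation actually run for the report.

```python
# A minimal sketch of two-stage (instrumental variables) estimation, with one
# instrument z, one endogenous regressor n, and synthetic data.

def ols_slope(x, y):
    mean_x, mean_y = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sxx = sum((a - mean_x) ** 2 for a in x)
    return sxy / sxx

def two_stage_slope(z, n, y):
    # Stage 1: predict participation n from the instrument z.
    b1 = ols_slope(z, n)
    mean_n, mean_z = sum(n) / len(n), sum(z) / len(z)
    n_hat = [mean_n + b1 * (zi - mean_z) for zi in z]
    # Stage 2: regress the outcome y on the fitted participation n_hat.
    return ols_slope(n_hat, y)
```

Using the fitted values rather than actual participation is what removes the correlation between the participation variable and the error term, provided the instrument itself is unrelated to the outcome except through participation.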
In some cases, the fit of our models is poor and it is difficult to estimate the value of some types of assets, in particular business assets. Our results for both the baseline analysis and propensity scoring are intended to illustrate the direction of the effect that the NMTC has on participating individuals’ investments. Nonetheless, these results show that NMTC participants have higher levels of wealth and business assets than those in the comparison group after controlling for individual fixed effects, year, region, and tax balance due—a proxy for risk attitudes. These results are consistent across four of five quintiles, using data for years 2001 through 2004 only, and using 3-year averages for the dependent variable. However, it may be that these differences in asset levels are simply picking up the likelihood that NMTC claimants systematically have higher assets than their counterparts. (The summary statistics show that individual NMTC investors have higher asset levels on average than the comparison group despite our use of a stratified random sample where comparison households were chosen based on levels of adjusted gross income in tax year 2000.) Therefore, as an alternative measure of the effect of NMTC participation, we compare the growth in assets between the two groups using closest neighbor propensity score matching to further narrow the comparison group and estimate the effect of NMTC participation on asset growth. We used year 2000 data to estimate propensity scores for future participation in the NMTC. The specification for our propensity scoring is as follows: Prob(N = 1) = G(Xβ), where N represents any NMTC participation from 2001 through 2004; X includes age, balance due, total income, presence of another general business credit, wage earnings, and dividend earnings; and G( · ) is the cumulative standard normal distribution. We then estimated the effect of NMTC participation on the change in the logs of wealth and asset levels from 2000 through 2004.
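Closest neighbor propensity score matching of the kind described above can be sketched briefly: each participant is paired with the comparison unit whose estimated propensity score is nearest, and the estimated effect is the average difference in outcomes across the pairs. The scores and growth rates below are synthetic.

```python
# A sketch of closest neighbor propensity score matching: each participant is
# paired with the comparison unit whose score is nearest, and the effect is
# the mean outcome difference across pairs. All values are synthetic.

def nearest_neighbor_effect(treated, controls):
    # treated, controls: lists of (propensity_score, outcome) pairs.
    diffs = []
    for score, outcome in treated:
        _, matched_outcome = min(controls, key=lambda c: abs(c[0] - score))
        diffs.append(outcome - matched_outcome)
    return sum(diffs) / len(diffs)

treated = [(0.8, 0.30), (0.6, 0.20)]                  # (score, asset growth)
controls = [(0.79, 0.10), (0.61, 0.10), (0.20, 0.05)]
effect = nearest_neighbor_effect(treated, controls)
```

Matching on the score rather than on all covariates directly is what allows the comparison group to be narrowed to units that, in 2000, looked most like future participants.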
Our results show that individuals who participate in the NMTC have higher growth in interest bearing assets, business assets, and wealth, which is consistent with the results we obtained for our instrumental variables regressions. For example, the first column in table 11 indicates that the growth in wealth for NMTC investors was significantly higher than that of noninvestors. To develop our methodology, we relied heavily on savings literature, which generally compares the wealth or financial assets of participants in retirement savings plans to those of nonparticipants to detect any effect of participation on savings. The following list of publications provided us with important information in developing our methodological approach.
1. Engen, Eric M., and William G. Gale. “The Effects of 401(k) Plans on Household Wealth: Differences Across Earnings Groups.” NBER Working Paper No. 8032, 2000.
2. Engen, Eric M., William G. Gale, and John Karl Scholz. “Do Saving Incentives Work?” Brookings Papers on Economic Activity, no. 1 (1994).
3. Engen, Eric M., William G. Gale, and John Karl Scholz. “The Illusory Effects of Saving Incentives on Saving.” Journal of Economic Perspectives, vol. 10, no. 4 (1996).
4. Hubbard, R. Glenn, and Jonathan S. Skinner. “Assessing the Effectiveness of Saving Incentives.” Journal of Economic Perspectives, vol. 10, no. 4 (1996).
5. Pence, Karen M. “401(k)s and Household Saving: New Evidence from the Survey of Consumer Finances.” Finance and Economics Discussion Series 2002-6. Washington, D.C.: Board of Governors of the Federal Reserve System, 2002.
6. Poterba, James M., Steven F. Venti, and David A. Wise. “Do 401(k) Contributions Crowd Out Other Personal Saving?” Journal of Public Economics, vol. 58 (1995).
7. Poterba, James M., Steven F. Venti, and David A. Wise. “How Retirement Saving Programs Increase Saving.” Journal of Economic Perspectives, vol. 10, no. 4 (1996).
8. Poterba, James M., Steven F. Venti, and David A. Wise. “Personal Retirement Saving Programs and Asset Accumulation: Reconciling the Evidence.” NBER Working Paper No. 5599, 1996.
We also consulted several experts in the course of our work, including Arthur Kennickel, Karen Pence, James Poterba, and Paul Smith, to discuss the methodology for our statistical analysis. They provided comments that we incorporated into our statistical models. In addition to the contact named above, Kevin Daly, Assistant Director; Thomas Gilbert; Evan Gilman; Tami Gurley-Calvez; Katherine Harper; Stuart Kaufman; Summer Lingard; Don Marples; Donna Miller; Ed Nannenhorn; Karen O’Conor; and Cheryl Peterson made key contributions to this report.
The Community Renewal Tax Relief Act of 2000 authorized up to $15 billion of allocation authority under the New Markets Tax Credit (NMTC) to stimulate investment in low-income communities. The act mandated that GAO report on the program to Congress by January 31, 2004, 2007, and 2010. Two subsequent laws authorized an additional $1 billion in NMTC authority for certain qualified investments and extended the program for 1 year with an additional $3.5 billion of authority. This report (1) describes the status of the NMTC program, (2) profiles NMTC program participants, (3) assesses the credit's effectiveness in attracting investment by participating investors, and (4) assesses IRS and the Community Development Financial Institutions (CDFI) Fund compliance monitoring efforts. To conduct the analysis, GAO surveyed NMTC investors, conducted statistical analysis, and interviewed IRS and CDFI Fund officials. As of January 2007, the CDFI Fund had awarded $12.1 billion of NMTC authority to 179 Community Development Entities (CDE). CDEs that received allocations began making NMTC investments in 2003, and the program has continued to grow since then. Investors use two main investment structures to make NMTC investments: direct investments to CDEs and tiered investments, which include equity investments and leveraged investments, where a portion of the investment amount originates from debt and a portion from equity. Banks and individuals constitute the largest proportion of NMTC investors, though banks and other corporations have made the largest share of NMTC investment. CDEs received their allocations through a competitive selection process, and, through fiscal year 2005, most investment from CDEs to low-income communities had been used for either commercial real estate rehabilitation or new commercial real estate construction.
The results of GAO's survey and statistical analysis indicate that the NMTC may be increasing investment in low-income communities by participating investors. Investors indicated that they have increased their investment budgets in low-income communities as a result of the credit, and GAO's analysis indicates that businesses may be shifting investment funds from other types of assets to invest in the NMTC, while individual investors may be using at least some new funds to invest in the NMTC. The CDFI Fund and IRS developed processes to monitor CDEs' compliance with their allocation agreements and the tax code. However, IRS's study of CDE compliance does not cover the full range of NMTC transactions, focusing instead on transactions that were readily available, and may not support the best decisions about enforcement in the future. Moreover, IRS and the CDFI Fund are not collecting data that would allow IRS to identify credit claimants and amounts to be claimed.
The multibillion dollar AIP provides grant funds for capital development projects at airports included in the National Plan of Integrated Airport Systems (NPIAS). In administering AIP, FAA must comply with various statutory formulas and set-asides established by law, which specify how AIP grant funds are to be distributed among airports (see app. II for a list of airports that are eligible to receive AIP grant funds). FAA groups the proposed projects into one of the following seven development categories, according to each project’s principal purpose: Safety and security includes development that is required by federal regulation and is intended primarily to protect human life. This category includes obstruction lighting and removal; fire and rescue equipment; fencing; security devices; and the construction, expansion, or improvement of a runway area. Capacity includes development that will improve an airport for the primary purpose of reducing delay and/or accommodating more passengers, cargo, aircraft operations, or based aircraft. This category includes construction of new airports; construction or extension of a runway, taxiway, or apron; and construction or expansion of a terminal building. Environment includes development to achieve an acceptable balance between airport operational requirements and the expectations of the residents of the surrounding area for a quiet and wholesome environment. This category includes noise mitigation measures for residences or public buildings, environmental mitigation projects, and the installation of noise- monitoring equipment. Planning includes development needed to identify and prioritize specific airport development needs. This category includes the airport master plan, airport layout plan, a state system plan study, or an airport feasibility study. Standards include development to bring existing airports up to FAA’s design criteria. 
This category includes the construction, rehabilitation, or expansion of runways, taxiways, or aprons; the installation of runway or taxiway lighting; the improvement of airport drainage; and the installation of weather reporting equipment. Reconstruction includes development to replace or rehabilitate airport facilities, primarily pavement and lighting systems that have deteriorated due to weather or use. This category includes the rehabilitation or reconstruction of runways, taxiways, apron pavement, and airfield lighting. Other includes all other development necessary for improving airport capacity and safe and efficient operations. This category includes people movers, airport ground access projects, parking lots, fuel farms, and training systems. It also includes development for converting military airfields to civilian use, such as those authorized by the military airport program. FAA has traditionally assigned the highest priority to safety and security projects that are mandated by law or regulation. Shortly after September 11, in response to increased security requirements and in exercising the authority granted under the Federal Aviation Reauthorization Act of 1996, FAA reviewed its AIP eligibility requirements and made several changes to permit the funding of more security projects that previously had not been funded by AIP. For example, FAA broadened the list of eligible projects to include explosives detection canines, cameras in terminals, and blast proofing of terminals. According to officials in FAA’s Airport Planning and Programming Division, the types of security projects eligible for AIP funding were expanded because the perceived threat area at an airport grew from those areas immediately surrounding an aircraft to terminal areas where large numbers of people congregated. Table 1 summarizes significant eligibility changes since September 11, 2001.
In November 2001, eligibility for AIP funding was further broadened by the passage of ATSA, P.L. 107-71. The act amended 49 U.S.C. Section 47102(3) to extend eligibility for AIP funding to any additional security-related activity required by law or the Secretary of Transportation after September 11, 2001, and before October 1, 2002. ATSA also created the Transportation Security Administration (TSA) within the Department of Transportation (DOT), and assigned it primary responsibility for ensuring security in all modes of transportation. As such, TSA is now responsible for funding some airport security-related projects, a limited number of which FAA had previously funded through AIP grant funds. These projects include preboard screening devices and baggage screening equipment, such as explosives detection systems. In fiscal year 2002, FAA awarded a total of $561 million in AIP grant funds for airport security projects, which represents about 17 percent of the $3.3 billion available for obligation. As illustrated in figure 1, the $561 million is the largest amount awarded for security projects in a single year and contrasts sharply with past funding trends. Since the program's inception in 1982, security projects have accounted for an average of less than 2 percent of the total AIP grant funds awarded each year. During fiscal years 1982 through 2001, AIP grant funds awarded to airports for security projects ranged from $2 million in fiscal year 1982 to $122 million in fiscal year 1991, when airports implemented new security requirements governing access controls. The $561 million FAA awarded to airports for security projects in fiscal year 2002 represents a more than 800-percent increase over the $57 million for security projects awarded in fiscal year 2001.
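As a rough check, the percentage figures cited above follow directly from the reported dollar amounts; a minimal sketch (amounts in millions of dollars, as reported):

```python
security_fy2002 = 561      # AIP grant funds awarded for security projects, FY 2002
available_fy2002 = 3_300   # total AIP funds available for obligation, FY 2002
security_fy2001 = 57       # AIP grant funds awarded for security projects, FY 2001

# Security's share of fiscal year 2002 AIP funds ("about 17 percent").
share = security_fy2002 / available_fy2002 * 100
assert round(share) == 17

# Year-over-year growth ("more than 800-percent increase").
increase = (security_fy2002 - security_fy2001) / security_fy2001 * 100
assert increase > 800
```

The exact increase works out to roughly 884 percent, consistent with the report's "more than 800-percent" characterization.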
As shown in table 2, among airport types, nearly all of the $561 million awarded in fiscal year 2002 for security projects was awarded to large, medium, small, and nonhub airports, which is consistent with where FAA has received the largest number of requests for AIP grants for security projects. General aviation and reliever airports received about 1 percent of the $561 million awarded in fiscal year 2002. Based on data provided by FAA, all security projects awarded AIP grants since September 11, 2001, have met legislative and program eligibility requirements. Most of these projects would have qualified for AIP funding under eligibility requirements in place prior to September 11, 2001. For example, as shown in table 3, perimeter fencing, surveillance and fingerprinting equipment, and access control systems, which together accounted for almost half of AIP funding for security projects, qualified under traditional eligibility regulations. Other projects that would not have qualified for AIP funding prior to September 11, 2001, such as explosives detection canines and kennels, are now eligible under legislative and administrative changes implemented since then. Section 119(a) of ATSA amended 49 U.S.C. Section 47102(3) to permit funding of any security-related activity required by law or the Secretary of Transportation after September 11, 2001, and before October 1, 2002. In addition, ATSA also amended 49 U.S.C. Section 47102(3) to make the replacement of baggage conveyor systems and terminal modifications that the Secretary determines are necessary to install explosives detection systems eligible for AIP grants. In addition to the AIP eligibility changes in ATSA, FAA issued a series of program guidance letters in the winter of 2002 that either restated or clarified project eligibility requirements as defined under 49 U.S.C. Section 47102(3). 
Under FAA's Program Guidance Letter 02-2, requests for AIP grant funds for security projects after September 11, 2001, are divided into the following three categories:

Unquestionably eligible projects include those that are intended to prevent unauthorized individuals from accessing the aircraft when it is parked on aprons, taxiways, runways, or any other part of the airport's operations area.

Projects eligible with additional justification include automated security announcements over public address systems and terminal improvements for checked baggage or passenger screening.

Projects that appear to exceed known requirements include those related to areas of a police facility, command and control or communications centers that support general law enforcement duties, and equipment federal screeners use to screen passengers and baggage.

The unprecedented increase in AIP grant funds awarded to airports for security projects in fiscal year 2002 has affected the amount of funding available for some airport development projects, in comparison with fiscal year 2001. FAA Airport Planning and Programming officials stated that they were able to fully fund many program priorities, including:

all set-aside requirements, such as the noise mitigation and reduction program and the military airport program;

all safety projects, including those related to FAA's initiatives to improve runway safety and reduce runway incursions; and

all phased projects that had been previously funded with AIP grant funds, including the 10 runway projects which are being built at primary airports.

According to FAA Planning and Programming officials, a variety of factors enabled them to reduce the impact of awarding $561 million in AIP grant funds for security projects. Most notable was the record level of carryover apportionments, which totaled $355 million, and the $84 million in grant funds that FAA recovered from prior-year projects.
FAA subsequently converted these funds into discretionary funds and used $333 million of the $439 million to offset the discretionary funds that were provided for security projects. The remaining $106 million was used to fund other airport development projects, such as some new capacity, standards, and reconstruction projects, which FAA initially believed it would not be able to fund because of the need to ensure that security projects were given the highest priority for AIP funding. However, when comparing grant award amounts for fiscal years 2001 and 2002, the $504-million increase in AIP grant funds for security projects in fiscal year 2002 contributed to a decrease in the amount of funding available for nonsecurity development projects. For example, as shown in table 4, the greatest reduction occurred in standards, which decreased by $156 million, from almost 30 percent of AIP funding in fiscal year 2001 to 25 percent of AIP funding in fiscal year 2002. The next largest reduction occurred in reconstruction, which decreased by $148 million, from almost 23 percent of AIP funding in fiscal year 2001 to 18 percent in fiscal year 2002. Environment, safety, and capacity projects also decreased by $97 million, $66 million, and $40 million, respectively. Airport Council International also stated that the increase in AIP funding for security has affected other airport development projects. It reported that airports have delayed almost $3 billion in airport capital development, most of which dealt with terminal developments, because of new security requirements. According to FAA Airport Planning and Programming officials, the decreases in AIP funding for the nonsecurity categories cannot be attributed solely to the increase in funding for security.
For example, they stated that the decrease in the safety category occurred because the types of projects identified as necessary to comply with Part 139 safety regulations vary from year to year based on a number of factors, including the results of airport certification inspections and individual airports’ equipment retirement policies. The decline in the environment category, which includes noise mitigation, occurred, in part, because the amount of discretionary funds available in fiscal year 2002 was lower than in fiscal year 2001, according to FAA Airport Planning and Programming officials. The noise mitigation and reduction program is required by statute to receive 34 percent of available discretionary funds. The increase in AIP funding for security also affected the distribution of AIP grant funds by airport type. As shown in table 5, in comparison with fiscal year 2001, large and small hub airports received increases in AIP funding, while all other airports experienced decreases in fiscal year 2002. AIP funding to large hub airports increased by almost $111 million, or almost 4 percent of total AIP funding, while funding to small hub airports increased by almost $32 million, or 1 percent, in fiscal year 2002. In contrast, the greatest reductions in AIP funding were among nonhub airports, which decreased from almost $650 million in fiscal year 2001 to almost $510 million in fiscal year 2002, followed by reliever airports, which decreased from $213 million in fiscal year 2001 to almost $164 million in fiscal year 2002. The increase in AIP funding for security projects contributed to the decreases in the amount of funding available for some airports. For example, the increase in AIP funding to large hub airports can be attributed to their proportionally higher security needs. 
In the case of the decrease in AIP funding to nonhub airports, FAA Airport Planning and Programming officials said that their security needs were much lower than those of large hub airports, accounting for only $44 million, or 8 percent, of the $561 million awarded in fiscal year 2002. The unprecedented $504 million increase in funding for security also affected the LOI payment schedules that FAA planned to issue in fiscal year 2002. FAA deferred three LOI payments, totaling $28 million, that were under consideration prior to September 11, 2001, until fiscal year 2003 or later. Letters of intent are an important source of long-term funding for capacity projects at large airports. These letters represent a nonbinding commitment from FAA to provide multiyear funding to airports beyond the current authorization period. As a result, airports are able to proceed with projects without waiting for future AIP grant funds with the understanding that allowable costs will be reimbursed. The following three airports did not have discretionary funds included in their scheduled LOI payments for fiscal year 2002:

Hartsfield International Airport in Atlanta, Georgia, which is the busiest airport in the country, with almost 40 million enplanements per year. It also was one of the most delayed airports in 2000 and 2001, and had $10 million for a runway extension deferred.

Cincinnati/Northern Kentucky Airport in Covington, Kentucky, a large airport with 11 million enplanements per year, had $10 million deferred.

Indianapolis Airport in Indianapolis, Indiana, a medium-sized airport with almost 4 million enplanements per year, had $7.5 million for a new apron and taxiway deferred.

According to FAA Airport Planning and Programming officials, prior to September 11, 2001, the agency had planned to include discretionary funding in fiscal year 2002 for the LOI payments scheduled for these three airports.
However, their funding has been deferred until fiscal year 2003 or later because of the need to ensure that adequate funds would be available for security projects. Nonetheless, these officials stated that for each of these three airports, the letters of intent were adjusted upward to compensate the airports for the additional carrying costs they incurred because the payments were deferred. Moreover, FAA Airport Planning and Programming officials believe that reduced funding for capacity projects in fiscal year 2002 will not have dramatic consequences in the immediate future because of the current decline in passenger traffic. However, they stated that if capacity projects continue to be underfunded, the congestion and delay problems that plagued the system in 2000 and 2001 could return when the economy recovers. Similarly, FAA officials stated that although a 1-year reduction in AIP funding for reconstruction projects would not have a dramatic impact on runway pavement conditions, a sustained reduction could cause significant deterioration in pavement conditions. Finally, the effect of increasing AIP grant funds for security projects in fiscal years 2003 and beyond cannot currently be estimated with any certainty. Nonetheless, preliminary indications suggest that the total amount of funding needed for security projects in fiscal years 2003 and beyond could be substantially higher than in fiscal year 2002 and previous years. For example, security projects in the 1998 through 2002 NPIAS report to Congress totaled $143 million, while security requests in the current NPIAS, 2001 through 2005, have increased to $1.6 billion. Most of the uncertainty over how much funding is needed is dependent on pending decisions by Congress in conjunction with DOT, TSA, and FAA regarding how TSA plans to fund the terminal modifications needed to install and deploy explosives detection systems and the extent to which AIP grant funds might be needed to help cover these costs.
DOT's Inspector General testified that capital costs associated with deploying the new explosives detection systems alone could exceed $2.3 billion. Representatives of Airport Council International and the American Association of Airport Executives stated that the costs for modifying terminals and baggage conveyor systems to accommodate explosives detection systems could be as high as $7 billion. In P.L. 107-206, Congress appropriated $738 million to the Transportation Security Administration for terminal modifications to install explosives detection systems. To determine how the amount of AIP grant funds awarded to airports for security projects before September 11, 2001, compared with funds awarded after September 11, we obtained AIP expenditure data for fiscal years 1982 through 2002 from FAA's AIP database that showed the amounts of AIP grant funds awarded, the types of projects funded, and the types of airports that received the funds. To identify funding trends, we compared the amount of AIP funding awarded for security-related projects with other airport development projects for fiscal years 1998 through 2002. To develop a more realistic comparison of how much AIP funding has increased over time, we converted nominal dollar figures into constant 2002 dollars, using fiscal year price indexes constructed from gross domestic product price indexes prepared by the U.S. Department of Commerce. We subsequently discussed the data and our findings with FAA Airport Planning and Programming officials. While we verified the accuracy of the AIP expenditure data, we did not independently review the validity of FAA's AIP database, from which the data were derived. To determine whether the new security projects met legislative and program eligibility requirements, we reviewed title 49 of U.S.C., ATSA, and FAA's regulations and recently issued program guidance for eligibility requirements.
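The constant-dollar adjustment described in the methodology above can be sketched in a few lines. The price-index values below are hypothetical placeholders, not the actual Commerce Department GDP price indexes the report used; they only illustrate the calculation:

```python
# Hypothetical fiscal year price indexes (2002 as the reference year).
# The report used indexes constructed from Commerce Department GDP price
# indexes; these placeholder values only illustrate the rescaling.
price_index = {1998: 95.4, 1999: 96.8, 2000: 98.9, 2001: 101.2, 2002: 102.9}

def to_constant_2002_dollars(nominal: float, year: int) -> float:
    """Rescale a nominal dollar amount into constant 2002 dollars."""
    return nominal * price_index[2002] / price_index[year]

# e.g., $100 million awarded in fiscal year 1998, expressed in 2002 dollars.
adjusted = to_constant_2002_dollars(100.0, 1998)
```

An amount already stated in 2002 dollars is unchanged by the conversion, since the index ratio is 1.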
We also interviewed FAA Airport Planning and Programming officials to clarify questions regarding eligibility requirements and to obtain additional information on the distribution of AIP grant funds. To assess how the use of AIP grant funds for security projects affected other airport development projects, we compared the amount of AIP grant funds awarded in fiscal years 2001 and 2002 by development category and airport type. We also interviewed FAA, TSA, and Airport Council International officials and reviewed the preliminary results of the Council's survey of its members regarding changes to the status of their capital development projects due to the events of September 11, 2001. We provided the Department of Transportation with a copy of the draft report for its review and comment. FAA and TSA officials agreed with information contained in this report and provided some clarifying and technical comments that we made where appropriate. We performed our work from June through October 2002 in accordance with generally accepted government auditing standards. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days from the date of this letter. At that time, we will send copies to interested congressional committees; the Secretary of Transportation; the Administrator, FAA; and the Administrator, TSA. We will also make copies available to others upon request. This report is also available at no charge on GAO's Web site at http://www.gao.gov. Please contact me or Tammy Conquest at (202) 512-2834 if you have any questions. In addition, Jean Brady, Jay Cherlow, David Hooper, Nancy Lueke, and Richard Swayze made key contributions to this report.

Apportionments: Statutory provisions require that AIP funds be apportioned by formula each year to specific airports or types of airports.
Such funds are available to airports in the year they are first apportioned and they remain available for the 2 fiscal years immediately following (or 3 fiscal years for nonhub airports). Recipients of apportioned funds are primary airports, cargo service airports, states and insular areas, and Alaska.

Apron: The paved part of an airport's airfield immediately adjacent to terminal areas and hangars.

Grants that are to be used for preserving or enhancing the capacity, safety, security, and carrying out noise compatibility planning and programs at primary and reliever airports.

Cargo service airports: Airports that, in addition to any other air transportation services that may be available, are served by aircraft providing air transportation only of cargo with a total annual landing weight (the weight of aircraft transporting only cargo) of more than 100 million pounds.

Carryover funds: Funds apportioned for primary or cargo service airports, states, and Alaskan airports remain available for obligation during the fiscal year for which the amount was apportioned and the 2 fiscal years immediately after that year (or the 3 fiscal years immediately following that year in the case of nonhub airports). When such funds are not used in the fiscal year of the apportionment, they are carried over to the following year(s).

Commercial service airports: Airports that handle regularly scheduled commercial airline traffic and have at least 2,500 annual passenger enplanements.

Discretionary funds: Those funds generally remaining after apportionment funds are allocated, but a number of statutory set-asides are established to achieve specified funding minimums.

Enplanements: Passenger boardings.

General aviation airports: Airports that have no scheduled commercial passenger service.

Large hub airports: Primary airports that have at least 1 percent of all annual enplanements.

Letter of intent (LOI): A letter FAA issues to airports stating that it will reimburse them for the costs associated with an airport development project according to a defined schedule when funds become available.
FAA uses this letter when its current obligating authority is not timely or adequate to meet an airport's planned schedule for a project.

Medium hub airports: Primary airports that have between .25 percent and 1 percent of all annual enplanements.

Military airport program: Under this program, a special set-aside of the discretionary portion of AIP is to be used for capacity and/or conversion-related projects at up to 15 current and former military airports. Such airports are eligible to participate in the program for 5 fiscal years, and participation may be extended for 5 more years if approved by the Secretary of Transportation. The airports are designated as a civil commercial service or reliever airport in the national airport system. Approved projects must be able to reduce delays at an existing commercial service airport that has more than 20,000 hours of annual delays in commercial passenger aircraft takeoffs and landings.

National Plan of Integrated Airport Systems (NPIAS): The set of airports designated by FAA as providing an extensive network of air transportation to all parts of the country. It is composed of commercial service airports and general aviation airports.

Noise compatibility projects: AIP projects that reduce airport-related noise or mitigate its effects. Eligible noise projects generally fall into the following categories: land acquisition, noise insulation, runway and taxiway construction (including associated land acquisition, lighting, and navigational aids), noise-monitoring equipment, noise barriers, and miscellaneous.

Nonhub airports: Primary airports that have over 10,000 annual enplanements but less than .05 percent of all annual enplanements.

Obligation: An obligation occurs when FAA makes an award to an airport sponsor, thereby obligating FAA to fund a project under AIP.

Other commercial service airports: Airports that have between 2,500 and 10,000 annual passenger enplanements from scheduled commercial service.

Primary airports: Airports that have 10,000 or more annual passenger enplanements from scheduled commercial service.
Reliever airports: Airports designated by FAA to relieve congestion at a commercial service airport and to provide improved general aviation access to the overall community. Only general aviation airports have been designated as reliever airports.

Set-aside: The portion of discretionary funds designated to achieve specified funding minimums established by Congress.

Small airport fund: The passenger facility charge program requires large and medium hub airports participating in the program to return a portion of their AIP apportionment funds. Airports charging a passenger facility charge of $3.00 or less must return up to one-half of their AIP apportionment funds, and airports charging over a $3.00 passenger facility charge must return up to 75 percent of their AIP apportionment funds. Congress requires most of the returned AIP funds to be put in the small airport fund, which FAA redistributes to small airports.

Small hub airports: Primary airports that have from .05 percent to .25 percent of all annual enplanements.

State block grant program: States assume responsibility for administration of AIP grants at airports classified as other than primary (other commercial service, reliever, and general aviation airports). Each state is responsible for determining which locations within its jurisdiction will receive funds and for ongoing project administration. This program is available only to selected states.

System planning grants: AIP grants for the purpose of studying aspects of a regional or statewide airport system. These studies usually include primary and nonprimary airports. Most system planning grants are issued to metropolitan planning organizations or state aviation agencies.

Taxiways: Paved sections of an airport's airfield that connect runways with aprons.
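The hub-size definitions above lend themselves to a simple classification routine. The sketch below uses the glossary's enplanement thresholds; the treatment of exact boundary values and the example national total are assumptions, since the report does not specify them:

```python
def classify_primary_airport(enplanements: int, national_total: int) -> str:
    """Classify a primary airport by its share of all annual enplanements,
    using the thresholds given in the glossary. Handling of exact boundary
    values is an assumption, not specified in the report."""
    if enplanements < 10_000:
        raise ValueError("primary airports have 10,000 or more annual enplanements")
    share = enplanements / national_total
    if share >= 0.01:        # large hub: at least 1 percent
        return "large hub"
    if share >= 0.0025:      # medium hub: between .25 and 1 percent
        return "medium hub"
    if share >= 0.0005:      # small hub: from .05 to .25 percent
        return "small hub"
    return "nonhub"          # over 10,000 enplanements but under .05 percent

# Hypothetical example: 40 million enplanements against an assumed national
# total of 700 million would classify the airport as a large hub.
```

This mirrors how the report's tables break out grant awards by large, medium, small, and nonhub airports.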
The events of September 11, 2001, created several new challenges for the aviation industry in ensuring the safety and security of the national airport system. Chief among them is deciding to what extent Airport Improvement Program (AIP) grant funds should be used to finance the new security requirements at the nation's airports. Although many in the aviation industry believe that funding security projects has become even more important in the aftermath of September 11, they also recognize the need to continue funding other airport development projects, such as those designed to enhance capacity in the national airport system. During fiscal year 2002, the Federal Aviation Administration (FAA) awarded a total of $561 million, 17 percent of the $3.3 billion available for grants, in AIP grant funds to airports for security projects related to the events of September 11, 2001. This amount is the largest amount awarded to airports for security projects in a single year since the program began in 1982. Based on data provided by FAA, all of the security projects funded with AIP grants since the events of September 11, 2001, met the legislative and program eligibility requirements. The projects, which range from access control systems to terminal modifications, qualified for AIP funding either under eligibility requirements in effect before September 11, 2001, or under subsequent statutory and administrative changes. Although FAA Airport Planning and Programming officials stated that they were able to comply with statutory requirements, set-asides, and other program priorities, the $504 million increase in AIP grant funds for new security projects in fiscal year 2002 has affected the amount of funds available for some airport development projects in comparison with the distribution of AIP grant funds awarded in fiscal year 2001.
FAA was able to fully fund these projects, in part, because of a record level of carryover apportionments, which totaled $355 million, and the $84 million in grant funds that were recovered from prior-year projects. However, there were reductions in AIP funding awarded to nonsecurity projects in fiscal year 2002, as compared with fiscal year 2001.
Mitigation efforts are often characterized as structural—for example, building codes and flood control projects, such as dams and levees—and nonstructural—for example, land use planning, zoning, or other methods of minimizing development in hazardous areas. A well-designed disaster mitigation program is perceived as a good way to reduce the overall exposure to risk from a disaster. For example, building codes that incorporate seismic design provisions can reduce earthquake damage. Additionally, floodplain management and building standards required by the National Flood Insurance Program may reduce future costs from flooding. For example, FEMA estimates that the building standards that apply to floodplain structures annually prevent more than $500 million in flood losses. In addition to FEMA, other federal agencies have a role in natural hazard mitigation. The Army Corps of Engineers' major role in disaster mitigation includes providing assistance in constructing structural flood control facilities and maintaining them. According to its records, the Corps' levees found in areas affected by the Midwest floods of 1993 prevented $7.4 billion in damage. The Tennessee Valley Authority provides information, technical data, and other assistance to promote the wise use of flood-prone areas. The Department of the Interior has mitigation responsibilities in a number of areas, including programs that help to develop scientific and technical information and procedures for reducing potential casualties and damage from earthquakes and volcanos, and a geologic-related hazards warning program that provides states and local governments with technical assistance to help ensure the timely warning of various geological disasters. The Departments of Agriculture and Commerce have roles in mitigation through their respective programs designed to conserve and develop soil and water resources and to assist states in setting up coastal management programs.
As we reported in 1995, mitigation is one of three general approaches that have been proposed for reducing the costs of federal disaster assistance. For a number of reasons, including a sequence of unusually large and costly disasters, federal disaster assistance costs have soared in recent years. Obligations from the Disaster Relief Fund totaled some $3.6 billion in fiscal year 1996 and about $4.3 billion in fiscal year 1997. FEMA can influence program costs by establishing and enforcing procedures and criteria for assistance within the eligibility parameters established in statutes. We have recommended that FEMA improve program guidance and eligibility criteria in part to help control these costs. Historically, hazard mitigation has been considered primarily a responsibility of local and state governments as well as private citizens. These entities often control the decisions affecting hazard mitigation. For example, building code enforcement and land-use planning are generally under local jurisdictions. However, research suggests that, for a number of reasons, state and local governments may be reluctant to take actions to mitigate natural hazards. The reasons include local sensitivity to such measures as building code enforcement and land-use planning, conflict between hazard mitigation and development goals, the lack of an understanding of mitigation and political support, and the perception that mitigation is costly and involves solutions that are overly technical and complex. Also, while increased mitigation can be justified only to the extent to which averted losses exceed the increased costs of mitigation, mitigation policies often do not systematically compare the costs of mitigation with the losses expected to be averted, and data on which to base cost-effective mitigation may be incomplete and/or inaccurate. Individuals may also lack incentives to take mitigation measures.
Studies have shown that increasing the awareness of the hazards associated with living in a certain area or previous experience with disasters do not necessarily persuade individuals to take preventive measures against future disasters. Residents of hazard-prone areas tend to treat the possibility of a disaster's occurrence as sufficiently low to permit them to ignore the consequences. Finally, some research suggests that the availability of federal relief inhibits actions that would mitigate losses from disasters. For example, we noted in a 1980 report that the greater the degree of federal subsidization of disaster losses, the less the incentive for individuals to take action to minimize damage from natural disasters. The National Performance Review found that the availability of post-disaster federal funds may reduce incentives for mitigation. FEMA's 1993 review of the National Earthquake Hazards Reduction Program (NEHRP) concluded that at the state level there is "the expectation that federal disaster assistance will address the problem after the event." There are a number of approaches for addressing state and/or local governments' reluctance to take actions to mitigate natural hazards. Our March 1995 testimony discussed recommendations by FEMA, the National Research Council, and the National Performance Review promoting the use of federal incentives to encourage hazard mitigation. For example, specific initiatives for improving earthquake mitigation included linking mitigation actions with the receipt of federal disaster and other assistance and prohibiting federally insured lenders from issuing conventional mortgages to households or businesses in an earthquake-prone area unless state or local governments have adopted or enforced appropriate seismic building standards.
FEMA provides state and local governments with hazard mitigation grants and training in support of the agency’s endeavors to instill a community-based approach to implementing disaster mitigation efforts. FEMA is allowing more flexibility in targeting the agency’s grants to communities’ actual disaster risks through its agreements—called Performance Partnership Agreements—with the states. Recently, FEMA has introduced the concept of “disaster-resistant communities” through its Project Impact initiative. FEMA funds or otherwise promotes hazard mitigation through a number of programs. Under section 404 of the Robert T. Stafford Disaster Relief and Emergency Assistance Act, as amended, FEMA administers a hazard mitigation grant program. Subject to certain dollar limits, the act generally allows the President to contribute grants of up to 75 percent of the cost of hazard mitigation measures within communities that have been affected by a disaster (the states or local governments pay the remaining portion of the costs). The communities’ measures must be cost-effective and substantially reduce the risks of future damage or loss in a community. Also, under section 406 of the act, communities recovering from disasters can utilize federal funds to mitigate damaged public facilities in accordance with certain standards—such as floodplain management standards. Furthermore, section 409 of the act helps establish the requirements for a comprehensive state hazard mitigation plan. As authorized by the National Flood Insurance Act of 1968, as amended, FEMA attempts to reduce future flood losses by providing federally backed flood insurance to communities as part of its National Flood Insurance Program (NFIP). The NFIP pays for claims and operating expenses with revenues from policyholder premiums, augmented when necessary by borrowing from the Department of the Treasury. 
Communities are eligible for the program only if they adopt and enforce floodplain management ordinances to reduce future flood losses. As of August 1997, over 3.7 million home and business flood insurance policies were in force in more than 18,000 participating communities, representing over $403 billion worth of coverage. The NFIP also funds a flood mitigation assistance program that provides grants to states and communities. In 1997, FEMA reported that it distributed $16 million to states and communities for planning and implementing cost-effective measures to reduce future flood damage to homes and other properties that had experienced repeated losses from flooding. Eligible projects under this program include elevating structures and flood-proofing properties. FEMA also attempts to reduce flood losses by buying out flood-prone properties throughout the country and converting the properties to open spaces. Since 1993, FEMA reports that it has committed more than $204 million to relocate 19,000 properties out of flood hazard areas in 300 communities. To help mitigate the potential loss of life and property from earthquakes, the Earthquake Hazards Reduction Act of 1977, as amended, authorizes FEMA’s provision of earthquake hazards reduction grants to states under NEHRP. (FEMA shares administration of this program with the U.S. Geological Survey, the National Science Foundation, and the National Institute of Standards and Technology.) These project grants are available only to states with moderate or higher seismic hazard, and the funds can be used for a number of purposes, including implementing mitigation measures to prevent or reduce the risks of earthquakes. To conduct training, public education, and research programs in subjects related to fire protection technologies, FEMA operates the U.S. Fire Administration under the Fire Prevention and Control Act of 1974, as amended. 
The agency’s efforts support the nation’s fire service and emergency medical service communities through such services as the national fire incident reporting system, which collects and analyzes fire incident data. This information is then used to help mitigate the loss of life and damage from fires—the United States has historically had one of the highest fire loss rates (in deaths and dollar loss) in the industrialized world. In 1995, FEMA published its National Mitigation Strategy, which stresses two 15-year national goals: substantially increasing public awareness of natural hazard risk and significantly reducing the risk of loss of life, injuries, economic costs, and disruption of families and communities caused by natural hazards. The strategy calls for strengthening partnerships among all levels of government and the private sector and sets forth major initiatives, along with timelines, in the areas of (1) hazard identification and risk assessment; (2) applied research and technology transfer; (3) public awareness, training, and education; (4) incentives and resources; and (5) leadership and coordination. In 1997, FEMA began its Project Impact initiative—an effort to help protect communities, residents, organizations, businesses, infrastructure, and the stability and growth of local economies from the impact of natural disasters before they happen. The program was based on the premise that consistently building safer and stronger buildings, strengthening existing infrastructures, enforcing building codes, and making proper preparations prior to a disaster would save lives, reduce property damage, and accelerate economic recovery. The initiative intended to build “disaster-resistant communities” through public-private partnerships, and it included a national awareness campaign, the designation of pilot communities showcasing the benefits of disaster mitigation, and an outreach effort to community and business leaders. 
Project Impact received an appropriation of $30 million in the fiscal year 1998 budget. FEMA’s Director has stated that his goal for 1998 is to designate at least one Project Impact disaster-resistant community in each of the 50 states—expanding the list of the initial seven communities selected during 1997 to serve as pilots for the initiative. Under the Government Performance and Results Act of 1993, federal agencies must set goals, measure performance, and report on their accomplishments. FEMA’s September 1997 strategic plan, entitled “Partnership for a Safer Future,” states that the agency is concentrating its activities on reducing disaster costs through mitigation because “no other approach is as effective over the long term.” One of the strategic plan’s three goals is to “protect lives and prevent the loss of property from all hazards.” The strategic objectives under this goal are to reduce, by fiscal year 2007, (1) the risk of loss of life and injury from hazards by 10 percent and (2) the risk of property loss and economic disruption from hazards by 15 percent. To achieve these objectives, FEMA established a number of 5-year operational objectives (covering fiscal years 1998 through 2003). FEMA expects that these strategic goals and objectives will be reflected in its future performance partnership agreements with the states. To encourage the states to help meet these goals, FEMA has consolidated the mitigation programs’ grant funds into two funding streams—one for programs supported by flood policyholders’ fees (the NFIP) and another for programs supported by FEMA’s Emergency Management Planning and Assistance appropriation. Prior to fiscal year 1997, separate funding was provided for earthquake, hurricane, and state hazard mitigation programs. We have not comprehensively reviewed the implementation of FEMA’s hazard mitigation programs or analyzed the agency’s recent initiatives. 
However, on the basis of our past work, we believe that a number of issues are pertinent to the Congress’ consideration of the cost-effective use of federal dollars for hazard mitigation. As noted above, our work has identified a variety of approaches with potential for increasing mitigation. These include regulatory and financial incentives proposed by FEMA, the National Research Council, and the National Performance Review. Furthermore, to the extent that the availability of federal relief inhibits mitigation, amending post-disaster federal financial assistance could help prompt cost-effective mitigation. The National Performance Review, for example, recommended providing relatively more disaster assistance to states that had adopted mitigation measures than to states that had not. These or other proposals would require analysis to determine their relative costs and effectiveness. Among existing programs, it is uncertain that, collectively, federal funds are effectively targeted to projects where the risk of loss is greatest. First, it is often difficult to determine the cost-effectiveness of specific actions because of limited data concerning risks. By definition, natural hazard mitigation reduces the loss of life and property below the levels that could be expected without mitigation; however, it is impossible to know with certainty what losses would occur in the absence of mitigation. Estimating these losses requires assessments of the risks, or probabilities, of the incidence and the severity of various natural occurrences—such as tornadoes, earthquakes, and hurricanes—in specific geographic areas. Such risk assessments depend on historical data that may not exist or may be difficult or costly to obtain and analyze. 
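The expected-loss reasoning described above can be made concrete with a small worked example. All probabilities and loss figures below are invented for illustration; actual risk assessments involve far more detailed hazard and exposure data.

```python
# Hypothetical expected annual loss estimate: the probability-weighted
# sum of losses across event severities. Every number here is invented.

scenarios = [  # (annual probability, estimated loss in $ millions)
    (0.10, 5.0),     # minor event
    (0.02, 50.0),    # moderate event
    (0.005, 400.0),  # severe event
]

expected_annual_loss = sum(p * loss for p, loss in scenarios)
print(expected_annual_loss)  # 3.5 ($ millions per year)

# A mitigation measure that halved severe-event losses would cut the
# expected annual loss by 0.005 * 200.0 = 1.0, a figure that could
# then be weighed against the measure's cost.
mitigated = sum(p * loss for p, loss in
                [(0.10, 5.0), (0.02, 50.0), (0.005, 200.0)])
print(expected_annual_loss - mitigated)  # 1.0
```

The point of the sketch is the dependence on the probability inputs: without reliable historical data to estimate them, the expected-loss figure, and therefore the cost-effectiveness of any mitigation measure, cannot be computed with confidence.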
For example, to measure its performance in achieving its strategic objective of reducing risk by 2007, FEMA plans to use a model of the probable future loss of life and injury; risk will be measured in terms of direct and indirect dollar costs and also through assessing state and local capabilities in emergency management. Due to limited data availability, however, the model results initially will be confined to probable loss of life and injury from earthquakes. Second, federal hazard mitigation funds are provided through a number of different programs and agencies—some limited to particular hazards. Even if risks, and therefore expected benefits, could be determined more precisely, ensuring that federal dollars collectively are directed at the greatest potential benefits would require comparing alternative investments among different agencies and/or programs. Finally, it is important to note that the extent to which mitigation projects will result in federal dollar savings is uncertain; savings depend upon the actual incidence of future disaster events and the extent to which the federal government would bear the resulting losses. Without any policy change, the latter could be affected by, for example, whether the losses result from events that trigger a presidential “declaration” under the Stafford Act; if not, then the federal government may not directly bear the losses. Furthermore, policies affecting the federal share of disaster costs could change in the future. Disaster Assistance: Guidance Needed for FEMA’s “Fast Track” Housing Assistance Process (GAO/RCED-98-1, Oct. 17, 1997). Disaster Assistance: Improvements Needed in Determining Eligibility for Public Assistance (GAO/T-RCED-96-166, Apr. 30, 1996). Disaster Assistance: Improvements Needed in Determining Eligibility for Public Assistance (GAO/RCED-96-113, May 23, 1996). 
Natural Disaster Insurance: Federal Government’s Interests Insufficiently Protected Given Its Potential Financial Exposure (GAO/T-GGD-96-41, Dec. 5, 1995). Disaster Assistance: Information on Declarations for Urban and Rural Areas (GAO/RCED-95-242, Sept. 14, 1995). Disaster Assistance: Information on Expenditures and Proposals to Improve Effectiveness and Reduce Future Costs (GAO/T-RCED-95-140, Mar. 16, 1995). GAO Work on Disaster Assistance (GAO/RCED-94-293R, Aug. 31, 1994).
GAO discussed the Federal Emergency Management Agency's (FEMA) disaster mitigation efforts, focusing on: (1) the reasons why disaster mitigation efforts are not always undertaken by state and local governments and individuals; (2) FEMA's efforts to encourage mitigation; and (3) issues that GAO believes are pertinent to ensuring the cost-effective use of federal dollars for hazard mitigation. GAO noted that: (1) hazard mitigation is primarily the responsibility of state and local governments, and individuals; however, mitigation actions are not always taken; (2) the reasons for this include local sensitivity to such measures as: (a) building code enforcement and land use planning; (b) conflict between mitigation and developmental goals; and (c) individuals' perceptions that the possibility of a disaster's occurrence is low; (3) FEMA's hazard mitigation efforts include grants and training for state and local governments, funding for mitigating damage to public facilities and purchasing and converting flood-prone properties to open space, federal flood insurance, and programs targeted at reducing the loss of life and property from earthquakes and fires; (4) in recent years, FEMA has taken a strategic approach to mitigation by publishing a 15-year national mitigation strategy and establishing 5-year mitigation objectives in its strategic plan pursuant to the Government Performance and Results Act; (5) FEMA expects to reflect its strategic goal and objectives in future performance partnership agreements with states; (6) GAO's work has identified several issues pertinent to ensuring the cost-effective use of federal dollars for hazard mitigation; (7) studies have shown a variety of approaches with the potential for increasing the level of mitigation, including regulatory and financial incentives proposed by FEMA, the National Research Council, and the National Performance Review; however, these and other proposals require analysis to determine their relative costs and 
benefits; (8) under existing approaches, it is uncertain that, collectively, federal funds are effectively targeted to projects where the risk of loss is greatest because: (a) limitations on data needed to estimate risks often make it difficult to determine the cost-effectiveness of specific actions; and (b) federal hazard mitigation funds are provided through a number of different programs and agencies--some limited to particular hazards; and (9) the extent to which cost-effective mitigation projects will result in federal dollar savings is uncertain, depending upon the actual incidence of future disaster events and the extent to which the federal government would bear the resulting losses.
GAO has conducted various assessments related to DOD’s ISR enterprise, including assessments of (1) unmanned aircraft system development, acquisition, and operations; (2) how new ISR requirements are generated; (3) the intelligence information processing, exploitation, and dissemination processes; and (4) other intelligence-related topics. DOD’s ISR enterprise consists of multiple intelligence organizations that individually plan for, acquire, and operate manned and unmanned airborne, space-borne, maritime, and ground-based ISR systems. The Under Secretary of Defense for Acquisition, Technology and Logistics oversees the space and unmanned aircraft systems acquisition programs. In addition to the intelligence branches of the military services, there are four major intelligence agencies within DOD: the Defense Intelligence Agency; the National Security Agency; the National Geospatial-Intelligence Agency; and the National Reconnaissance Office. The Defense Intelligence Agency is charged with providing all-source intelligence data to policy makers and U.S. armed forces around the world and provides defense human intelligence. The National Security Agency is responsible for signals intelligence and information assurance and has collection sites throughout the world. The National Geospatial-Intelligence Agency prepares the geospatial data, including maps and computerized databases, that are used by ISR systems for targeting precision-guided weapons. The National Reconnaissance Office develops and operates reconnaissance satellites. As figure 1 shows, DOD’s ISR enterprise is related to other elements of the U.S. national intelligence community. Spending on most ISR programs is divided between the defense intelligence budget, known as the Military Intelligence Program—totaling $27 billion in fiscal year 2010—and the national intelligence budget, known as the National Intelligence Program—totaling $53.1 billion in fiscal year 2010. 
The Military Intelligence Program encompasses DOD-wide intelligence programs and most intelligence programs supporting the operating units of the military services. The USD(I) is responsible for compiling and developing the Military Intelligence Program budget and issuing detailed procedures governing the Military Intelligence Program process and timelines associated with budget development. The agencies, services, and offices that are included in the Military Intelligence Program are: the Office of the Secretary of Defense, the military departments, the U.S. Special Operations Command, the Defense Intelligence Agency, the National Geospatial-Intelligence Agency, the National Reconnaissance Office, the National Security Agency, the Defense Threat Reduction Agency, the Defense Information Systems Agency, and the Defense Security Service. Each office, agency, and service designates a manager who is charged with responding to guidance from the USD(I) and managing programs and functions within the budget, among other things. The USD(I) guides and oversees the development of the Military Intelligence Program in coordination with the Under Secretary of Defense (Comptroller), Under Secretary of Defense for Policy, Under Secretary of Defense for Personnel and Readiness, Chairman of the Joint Chiefs of Staff, and the Director of the Office of the Secretary of Defense’s Office of Cost Assessment and Program Evaluation. The national intelligence community, which primarily provides support to national decision makers, also supports DOD ISR activities. The line between military intelligence activities and national strategic intelligence activities has blurred as DOD’s tactical ISR supports strategic decisions and national intelligence collection informs military operations. The National Intelligence Program, which funds national intelligence activities, also funds a portion of DOD’s ISR activities to support military operations. 
The Director of National Intelligence (DNI) is responsible for compiling and reviewing the annual National Intelligence Program budget. To encourage integration of DOD’s ISR enterprise, in 2003 Congress required the USD(I) to develop a comprehensive plan, known as the ISR Integration Roadmap, to guide the development and integration of ISR capabilities. The law also required the USD(I) to report back to congressional committees on this effort. In response to this requirement, DOD issued an ISR Integration Roadmap in May 2005 and updated it in January 2007. However, we reported that this 2007 roadmap still did not address all the management elements the USD(I) was required to include. In 2008, the House Committee on Armed Services restated the need for the USD(I) to address these requirements and provided the USD(I) with additional guidance for the roadmap. The USD(I) issued an updated roadmap in March 2010. In 2008, DOD began an effort to manage ISR capabilities across the entire department, rather than by military service or individual program. Under this capability portfolio management concept, DOD intended to improve the interoperability of future capabilities, minimize capability redundancies and gaps, and maximize capability effectiveness. The USD(I) was designated as the civilian lead office for the portfolio of ISR activities, which is known as the Battlespace Awareness Portfolio. As the portfolio manager for ISR investments, the USD(I) is limited to two primarily advisory functions: (1) reviewing and participating in service and DOD agency budget deliberations on proposed ISR capability investments, and (2) recommending alterations in service or agency spending to the Secretary of Defense as part of the established DOD budget review process. 
Also in 2008, the Secretary of Defense established the ISR Task Force to increase ISR capacity in Iraq and Afghanistan, as well as improve operational integration and efficiency of ISR assets across the services and defense agencies. The ISR Task Force’s primary focus was on regional capabilities and capabilities that could be delivered more quickly than in the standard DOD acquisition cycle. The task force is currently assisting the USD(I) and the services in deciding how to integrate into the long-term base budget more than 500 ISR capabilities that were developed to meet urgent operational requirements in Iraq and Afghanistan. We have previously reported on DOD’s challenges in improving the integration of ISR efforts, including difficulties in processing and sharing information that is already collected and developing new capabilities. We reported in 2010 that DOD’s efforts to make intelligence data accessible across the defense intelligence community have been hampered by a lack of integration of service programs and of a concept of operations for intelligence sharing. The services have each pursued their own versions of a common data processing system to share information, the Distributed Common Ground/Surface System, which was initiated in 1998. Although the services can share limited intelligence data, their progress toward full information sharing has been uneven. Moreover, as we reported in March 2011, although DOD created the Joint Improvised Explosive Device (IED) Defeat Organization (JIEDDO) to lead and coordinate all of DOD’s counter-IED efforts, which include some ISR capabilities, many of the organizations engaged in the counter-IED effort, such as the Army, Marine Corps, and Navy, continued to develop, maintain, and expand their own IED-defeat capabilities. Even though urgent operational needs include ISR capabilities, the USD(I) does not have a direct role in determining urgent operational needs. 
The USD(I) has the authority to exercise oversight responsibility over DOD’s ISR enterprise; however, the broad scope and complex funding arrangements of DOD’s ISR enterprise make it difficult to manage and oversee. The scope of the ISR enterprise includes many different kinds of activities—from collection of information through dissemination of analysis compiled from multiple sources—conducted by multiple agencies. As a result, ISR activities may be funded through any of several sources, including the Military Intelligence Program, the National Intelligence Program, overseas contingency operations funding, and service appropriations, or by a combination of these sources. To manage DOD’s large ISR enterprise, the USD(I) serves as DOD’s senior intelligence official, responsible for providing strategic, budget, and policy oversight over DOD’s ISR enterprise. However, the USD(I) does not have full visibility into several budget sources that fund DOD’s ISR enterprise, such as national intelligence capabilities, capabilities used for ISR and non-ISR purposes, urgent operational needs, and military personnel expenses related to ISR. Figure 2 illustrates that the USD(I) does not have full visibility into many capabilities included in DOD’s ISR enterprise. The USD(I)’s inability to gain full visibility into all of DOD’s ISR financial resources may hinder efforts to develop an investment strategy for ISR, to consider tradeoffs across military services and programs, and to address potential duplication, fragmentation, and overlap. DOD’s ISR enterprise comprises many organizations and offices from both the defense intelligence community and the national intelligence community, which represents a challenge for DOD in integrating capabilities across the ISR enterprise. DOD relies on both its own ISR assets and national ISR assets to provide comprehensive intelligence in support of its joint warfighting force. 
DOD organizations are involved in providing intelligence information using their respective or joint ISR assets to both the defense and national intelligence communities. Determining the scope of the ISR enterprise precisely is difficult because the intelligence agencies and military services include different activities in discussing their ISR missions and priorities. Within DOD’s ISR enterprise, multiple organizations conduct strategic planning, budgeting, and data processing and analysis across intelligence disciplines in accordance with their own priorities. Within the Office of the Secretary of Defense, the USD(I) and the Under Secretary of Defense for Acquisition, Technology and Logistics have responsibilities for aspects of ISR that may overlap. Specifically, DOD has designated the USD(I) to manage ISR investments as a departmentwide portfolio. However, as the ISR portfolio manager, the USD(I) has only advisory authority and cannot direct the services or agencies to make changes in their investment plans. Moreover, the Under Secretary of Defense for Acquisition, Technology and Logistics has been designated responsible for heading a task force related to the management and acquisition of unmanned aircraft systems that collect ISR data and are part of the ISR portfolio. The services and defense agencies also conduct ISR activities. The military services each have their own ISR plans and roadmaps that focus on their respective ISR activities and are not integrated with other services’ plans. For example, the Air Force maintains its own ISR plan and metrics separate from DOD’s ISR Integration Roadmap and the other service roadmaps, and the other services have developed several roadmaps outlining ISR priorities and capability gaps. Because of the broad scope of ISR and the multiple agencies involved, DOD’s ISR enterprise is funded through several budgetary sources, including both DOD and non-DOD organizations. 
These multiple sources of funding complicate the USD(I)’s role as the office that develops and oversees DOD’s ISR enterprise, according to DOD officials. In particular, USD(I) officials noted that the USD(I) does not have complete information on ISR funding by these organizations and that it is difficult to manage planning for ISR funding. As figures 2 and 3 show, funding comes from the Military Intelligence Program, the National Intelligence Program, and military service budgets. Moreover, some ISR programs are funded through combinations of these funding sources. For example, the USD(I) does not have full visibility into space acquisitions, urgent warfighter needs, and unmanned aircraft systems acquisitions and does not routinely collect funding data for them. In fiscal year 2010, DOD’s ISR enterprise was funded by the entire Military Intelligence Program budget totaling $27 billion, along with a portion of the National Intelligence Program budget of $53.1 billion. In 2008, we reported that DOD and the Office of the Director of National Intelligence (ODNI) work together to coordinate funding for programs that support both military and national intelligence missions, but determining how costs for joint ISR programs will be shared can be difficult. According to DOD Directive 5143.01, the USD(I) is responsible for developing, coordinating, and overseeing the implementation of DOD policy, strategy, and guidance related to ISR. This directive also provides the USD(I) with the authority to obtain reports and information as necessary to carry out assigned responsibilities and functions. The USD(I) also has responsibility for ensuring that policies and programs related to the acquisition of ISR capabilities are designed and managed to improve performance and efficiency. GAO’s Internal Control Standards state that managers, such as the USD(I), need accurate and complete financial data to determine the effective and efficient use of resources. 
However, the complexity of DOD’s ISR enterprise may make the USD(I)’s management and oversight responsibilities difficult to fulfill because it is not receiving complete information and does not have full visibility over DOD’s entire ISR enterprise. The USD(I)’s lack of visibility into the full scope of ISR capabilities, programs, and budget sources makes it difficult for the USD(I) to receive, collect, and aggregate reports and information necessary to carry out its oversight responsibilities. We identified four areas for which the USD(I) does not have complete information on ISR spending: (1) military assets that are used for both ISR and non-ISR missions—that is, dual use assets; (2) DOD’s urgent ISR warfighter capabilities; (3) budget items funded from multiple sources; and (4) military personnel funding related to ISR missions and capabilities. Dual use assets—DOD officials stated that certain assets fulfill both non-ISR and ISR missions. Such assets are funded primarily through appropriations for the military services and may not always be reported to the USD(I) as Military Intelligence Program capabilities, which limits the USD(I)’s oversight of such capabilities and ability to make trade-offs or shift resources across the department. According to the USD(I), specific examples of dual use capabilities include the Air Force’s airborne ISR Reaper program, the Navy’s P-3 Orion land-based maritime patrol aircraft, and DOD’s biometrics program. Urgent ISR warfighter capabilities—As we reported in March 2011, we estimated that between fiscal years 2005 and 2010 DOD spent $6.25 billion on urgent ISR capabilities sponsored by the ISR Task Force, as well as a portion of the $19.45 billion sponsored by JIEDDO to field new ISR capabilities. 
However, we also reported that DOD cannot readily identify all of its urgent needs efforts or associated costs, including spending on ISR, because it has limited visibility into the totality of urgent needs submitted by warfighters. Capabilities funded from multiple sources—DOD officials have also cited capabilities funded from multiple sources as a cause of delays in tracking and reporting ISR data. For example, many ISR capabilities are funded jointly by the Military Intelligence Program and National Intelligence Program. In addition, capabilities that have both ISR and non-ISR uses receive funding from different appropriations. For example, capabilities with both ISR and non-ISR uses can be supported by services’ operation and maintenance and personnel funding. In 2010, according to a DOD financial regulation, the Under Secretary of Defense (Comptroller) and the Director of the Cost Assessment and Program Evaluation Office are to work with the USD(I) to create and maintain whole, distinct budget items within each component of the intelligence community. The military services and defense agencies are required to show measurable and steady progress toward completing this effort. On the basis of information we received from the military services, the services reported making varying progress in developing whole Military Intelligence Program budget items for some of their ISR capabilities. The services estimated that this effort will be completed sometime after 2012, and they have cited challenges in creating whole budget elements. For example, a Navy official said that it is very difficult to determine individual Military Intelligence Program and non–Military Intelligence Program budget portions for some capabilities at the program level. Military intelligence personnel funding related to ISR—DOD, military, and intelligence officials cited challenges in identifying exact costs associated with military personnel conducting ISR activities. 
In a change from previous years, DOD’s fiscal year 2012 Military Intelligence Program budget submission did not include military personnel costs. According to a USD(I) official, military personnel funding was removed from the Military Intelligence Program because: (1) military personnel expenses are not reported in the National Intelligence Program; and (2) the USD(I) does not have oversight authority for military personnel funding. Some of the military services cited military personnel costs as an example of a budget item that is split between ISR and non-ISR programs. For example, the Air Force estimates that it has approximately 200 budget items that contain at least some funding for military intelligence personnel. Additionally, Army officials reported that military personnel funding accounts for approximately 62 percent of their budget items that are funded from multiple sources. Without accurate and complete financial resource data, the USD(I) may not be able to fulfill its responsibility to develop, coordinate, and oversee the implementation of DOD’s ISR enterprise policy, strategy, and programs and manage the Battlespace Awareness capability portfolio from an informed perspective. Until the USD(I) gains more clarity over DOD’s ISR funding, DOD efforts to integrate ISR, recommend tradeoffs within the Battlespace Awareness capability portfolio, determine the effective use of ISR resources, and address potential fragmentation, overlap, and duplication will continue to be impeded. DOD has developed general guidance in directives, a manual, and memorandums emphasizing the need to identify and eliminate duplication or redundancies in its capabilities, which provides a foundation for further action. ISR activities are explicitly included as an area for possible efficiency improvements. 
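The visibility problem running through these four areas, in which a single capability draws on several funding streams so that no one budget view shows its full cost, can be illustrated with a small sketch. All capability names and dollar figures below are invented.

```python
# Hypothetical illustration: a capability funded jointly by the
# Military Intelligence Program (MIP), the National Intelligence
# Program (NIP), and a service appropriation. A view limited to one
# budget source understates the capability's total cost.
from collections import defaultdict

# (capability, funding source, $ millions) -- all values invented
records = [
    ("capability_A", "MIP", 40.0),
    ("capability_A", "NIP", 25.0),
    ("capability_A", "service appropriation", 10.0),
    ("capability_B", "MIP", 15.0),
]

totals = defaultdict(float)
for capability, source, amount in records:
    totals[capability] += amount

# A MIP-only view of capability_A misses $35 million of its funding:
mip_only = sum(amt for cap, src, amt in records
               if cap == "capability_A" and src == "MIP")
print(totals["capability_A"], mip_only)  # 75.0 40.0
```

The sketch is trivial once the records exist in one place; the oversight difficulty described above arises precisely because the USD(I) does not routinely receive such records from all funding sources.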
However, current ISR efficiency studies have limited scope, initiatives are in the early stages of development, and implementation plans, including resource requirements, have not been fully developed. DOD's broad guidance highlights the need for the services and defense agencies to work together to eliminate duplication in ISR activities. DOD's directive Functions of the DOD and Its Major Components instructs the services to coordinate with each other in eliminating duplication, to equip forces that can work closely with each other, and to assist other components by providing intelligence. Similarly, DOD's Capability Portfolio Management directive charges portfolio managers with identifying resource mismatches, including redundancies, and providing recommendations on integrating capabilities. In addition, DOD's requirements process guidance instructs the services and defense agencies to identify overlaps and redundancies when proposing the development of new capabilities and to assess areas of overlap and unnecessary duplication that could be eliminated to provide resources to address capability gaps. In response to the emphasis on efficiencies, according to a departmental official, DOD has recently completed one efficiency study and is developing two tools to help identify efficiencies and promote integration in its ISR enterprise; however, these efforts have limited scope or are in the early stages of development. Further, it is not clear whether the tools will result in improved efficiencies because DOD has not established implementation goals or timelines with which to establish accountability, measure progress, and build momentum. As we have previously reported, successful management efforts use implementation goals and timelines to identify performance shortfalls and gaps, suggest midcourse corrections, and build momentum by demonstrating progress.
In August 2010, the Secretary of Defense directed that the department begin a series of efficiency initiatives to reduce duplication, overhead, and excess. The ISR portion of the review focused on streamlining organizations that primarily analyze intelligence information. The review group's assessment recommended cost savings of approximately $29 million in intelligence personnel costs for fiscal year 2012 by consolidating some intelligence centers and streamlining certain intelligence organizations. However, the scope of the review was limited to ISR analysis activities and excluded ISR activities associated with collecting ISR data, which represents one of the largest areas of growth in ISR spending. ISR officials were unsure whether or when ISR collection activities would be studied for efficiencies. Two other DOD efforts are intended to address impediments to integrated ISR enterprise management that we reported in March 2008. In our assessment of DOD's 2007 ISR Integration Roadmap, we noted that DOD had improved its ability to look across its ISR enterprise by compiling a useful catalog of capabilities. We have previously identified a set of desirable characteristics for defense strategies such as the ISR Integration Roadmap, which are intended to enhance their usefulness in resource and policy decisions and to better assure accountability. These characteristics include laying out goals and objectives, suggesting actions for addressing those objectives, allocating resources, identifying roles and responsibilities, and integrating relevant parties. However, we reported that the 2007 Roadmap did not provide (1) a clear vision of a future integrated ISR enterprise that identifies what ISR capabilities are needed to achieve DOD's strategic goals, or (2) a framework for evaluating trade-offs among competing ISR capability needs and assessing how ISR investments contribute toward achieving goals.
Further, we reported that the department did not have complete information on ISR capabilities in use or being developed to help identify trade-offs among potential future investments. We recommended that DOD develop an integrated architecture and complete information to use in understanding how changing investment levels in ISR would affect progress in achieving goals, and, in comments on that report, DOD agreed with our recommendation and stated that plans of action should be finalized by 2008. In 2010, USD(I) officials proposed development of a comprehensive architecture for DOD's entire ISR enterprise, to be called the Defense Intelligence Mission Area Enterprise Architecture. This architecture is intended to provide a standardized methodology for identifying and addressing efficiencies in the ISR portfolio and support objective investment decision making. However, this initiative is in the earliest phases of development, and its concept and implementation plans, including resource requirements, have not been fully developed. The absence of implementation goals and timelines will make it difficult to determine whether this initiative will make progress in achieving efficiencies. In 2008, we also recommended that the Joint Staff collaborate with the USD(I) to develop a comprehensive source of information on all existing and developmental ISR capabilities throughout the ISR enterprise so that the military services and defense agencies can determine whether existing systems or those in development could fill their capability gaps. Based on this recommendation, in 2010, the Joint Staff, in collaboration with the USD(I) and the services, began an initiative to develop a comprehensive source of information on all existing and developmental ISR capabilities for use in conducting ISR-related assessments.
According to Joint Staff officials, this decision support tool is designed to use measurable data to enable assessment of the relative utility and operating costs of different ISR capabilities and has the potential to identify overlap and duplication and inform trade-off decisions. Currently this tool includes information on airborne ISR capabilities. The USD(I) is currently collaborating with the Joint Staff to enhance the decision support tool to address operational requirements across ISR domains. However, it is not clear whether funding will be available to implement plans to maintain and expand the experimental tool to include all ISR capabilities and, with funding uncertain, goals and a timeline for completion have not been established. The National Defense Authorization Act for Fiscal Year 2004 required DOD to develop an ISR Integration Roadmap to guide the development and integration of DOD ISR capabilities over a 15-year period, and to report to Congress on the content of the roadmap, including specific management elements that DOD should address. In response to both of these requirements, DOD issued an ISR Integration Roadmap. In addition to other matters, DOD was required to include: (1) fundamental goals, (2) an overview of ISR integration activities, and (3) an investment strategy. The House of Representatives Committee on Armed Services provided further guidance in a 2008 committee report, after which DOD issued an updated roadmap in 2010. Our review of DOD’s 2007 and 2010 ISR roadmaps found that DOD has made progress in addressing the issues that Congress directed to be included, but neither roadmap included all the specified elements or addressed the important issue of how to invest future resources among competing priorities. As illustrated in figure 4, DOD’s 2010 ISR Integration Roadmap addressed two more required elements than did the 2007 roadmap. 
However, the 2010 roadmap does not represent an integrated investment strategy across the department or contain key elements of an integrated enterprise architecture, such as metrics to help evaluate trade-offs between alternatives and assess progress in addressing capability shortfalls. Further, unlike the 2007 roadmap that catalogued military and national ISR capabilities across the enterprise, the 2010 roadmap is organized by separate intelligence disciplines, such as signals intelligence and imagery intelligence, and is not integrated, making it more difficult to examine potential investments and trade-offs departmentwide. The 2010 ISR Integration Roadmap addresses four and partially addresses one of seven management elements set forth in the 2004 National Defense Authorization Act. Specifically, the 2010 ISR Integration Roadmap includes information on:

A 15-year time period—The 2010 ISR roadmap includes investment strategies for each type of intelligence activity and addresses planned capabilities through at least 2025.

A description of fundamental goals—The 2010 ISR roadmap outlines broad national defense, ISR, and military goals along with missions supported by various intelligence disciplines, and contains fundamental goals such as (1) stewardship of funding, (2) serving fundamental requirements, and (3) leveraging technology effects.

A description of ISR integration activities—The 2010 ISR roadmap provides an overview of ISR integration activities across DOD, such as the structure and membership of the ISR Task Force, the ISR Integration Council, and the Battlespace Awareness Functional Capabilities Board, among others.

A description of the role of intelligence in homeland security—The 2010 ISR roadmap contains a section outlining how intelligence can enhance DOD's role in fulfilling its homeland security responsibilities.
Counterintelligence integration—The 2010 ISR roadmap partially addresses counterintelligence integration: it generally describes DOD's counterintelligence mission but does not specifically address how counterintelligence will be integrated among DOD agencies and the armed forces.

The 2010 ISR Integration Roadmap does not address two of the seven management elements in the 2004 National Defense Authorization Act that were also restated in the 2008 House of Representatives Committee on Armed Services report. Specifically, the 2010 Integration Roadmap does not do the following:

Describe an investment strategy—The 2010 roadmap contains general strategies for individual intelligence disciplines, discusses current and future capabilities, identifies supported mission sets for each discipline, and describes long-term actions and challenges. For certain intelligence disciplines, the 2010 roadmap also generally illustrates future needs for certain capabilities. However, it does not contain a comprehensive investment strategy for the ISR enterprise. For example, it does not clearly represent what ISR capabilities are required to achieve strategic goals, and it does not allow DOD decision makers to assess current capabilities across different goals because it is structured according to individual intelligence disciplines. Additionally, the roadmap does not provide estimated costs associated with these capability needs and does not prioritize ISR capabilities.

Discuss improving the structure of funding and appropriations—The 2010 roadmap does not discuss how annual funding authorizations and appropriations can be optimally structured to best support the development of a fully integrated DOD ISR architecture.
DOD included a section in the roadmap entitled "Funding an Integrated ISR Architecture," which provides an overview of the Military Intelligence Program, the National Intelligence Program, and the Battlespace Awareness Capability Portfolio, but does not include information on how annual appropriations can be best structured for ISR. The 2010 ISR Integration Roadmap also does not address the additional guidance included in the 2008 House Committee on Armed Services report. Specifically, the 2010 ISR Integration Roadmap does not address the appropriate mix of national overhead systems and manned and unmanned airborne platforms to achieve strategic goals and does not include an analysis of future ISR demand. Certain intelligence discipline sections generally describe the types of overhead capabilities needed in the future; however, these capabilities are not prioritized across the entire ISR enterprise. DOD officials acknowledged that the 2010 ISR Integration Roadmap has some limitations that DOD is planning to address in a later version. For example, because the investment strategy section is organized by intelligence area, it does not address capabilities that collect multiple types of intelligence data. The USD(I) also highlighted that recent agreements between the Director of National Intelligence (DNI) and the USD(I) have resulted in the creation of the Consolidated Intelligence Guidance, which is designed to synchronize activities and investments between the DNI and DOD. USD(I) officials stated that this guidance is specifically goal-based and is effective for managing the shorter-term Future Years Defense Program budget. DOD officials also acknowledged that organizing future iterations of the ISR roadmap by missions instead of intelligence disciplines may better illustrate integration and interoperability of capabilities across the department.
They stated that the roadmap is a living document and that the intent is for future versions to create linkages between existing DOD strategic guidance and longer-term investment strategies. USD(I) officials stated that the office is developing a useful set of metrics for the next iteration of the roadmap. Requirements of the 2004 National Defense Authorization Act, additional guidance provided by the House Committee on Armed Services, and our prior work have all emphasized that the roadmap should include a clearly defined investment strategy. Without a unified investment approach, senior DOD leaders do not have a management tool for conducting a comprehensive assessment of what investments are required to achieve ISR strategic goals, and they are not well positioned to prioritize ISR investments and make resource allocation and trade-off decisions when faced with competing needs. Furthermore, until DOD develops an integrated ISR investment strategy, the defense and intelligence communities may continue to use resources in ways that are not necessarily based on strategic priorities, which could lead to gaps in some areas of intelligence operations and redundancies in others. With demand for ISR growing and DOD planning to make additional investments in ISR capabilities, the challenges the department faces in integrating ISR capabilities, managing and conducting oversight of ISR funding, and addressing efficiency efforts will likely be exacerbated by expected budget pressures. Fragmented authority for ISR operations among multiple agencies with different, and sometimes competing, priorities hampers DOD's progress in planning for new capabilities and targeting investments to joint priorities. The USD(I) could be better positioned to facilitate integration and provide oversight of ISR activities if it had more visibility into current capabilities and clarity into the total amount being spent on ISR activities funded through multiple sources.
More complete information would also be useful to the USD(I) in developing an integrated ISR roadmap, including an investment strategy. DOD’s recent emphasis on efficiencies has extended to its ISR enterprise, and it has initiated efforts to identify areas of overlap and duplication. However, limitations in the scope of its current efficiency efforts and undefined goals and timelines to implement its newer efforts reduce the likelihood that all possible efficiencies will be identified and action taken to achieve them. For example, more work remains for DOD to identify efficiencies across the entire ISR enterprise, such as exploring efficiencies in ISR collection activities. Efficiency efforts in the earliest phases of development and implementation could be tools to inform decisions about trade-offs between competing priorities and may be helpful in identifying opportunities for increased efficiencies and cost savings. If designed and implemented properly, these tools could result in cost savings across the ISR enterprise by reducing the likelihood of developing unnecessarily duplicative capabilities. However, without plans for completion and timelines to build momentum, DOD will not have the ability to monitor progress and take corrective actions, if necessary, to ensure that potential savings are realized. DOD’s 2010 ISR Integration Roadmap does not provide enough detailed information on integrated goals and priorities for the ISR enterprise to enable development of a long-term investment strategy. Without a detailed investment strategy, DOD and the military services may not have a common understanding of how activities should be prioritized to meet goals. Until DOD addresses challenges related to managing funding, integrating ISR capabilities, and minimizing inefficiencies in its ISR enterprise, the department risks investing in lower-priority and even duplicative capabilities while leaving critical capability gaps unfilled. 
To improve management of DOD's ISR enterprise and increase its ability to achieve efficiencies, we recommend that the Secretary of Defense direct the USD(I) to take the following three actions:

Collect and aggregate complete financial data—including information on dual-use assets, urgent operational needs, capability funding from multiple sources, and military personnel funding—to inform resource and investment decisions.

Establish goals and timelines to ensure progress and accountability for design and implementation of its defense intelligence enterprise architecture, including clarifying how the department plans to use the architecture and tools it is developing to achieve efficiencies.

Expand the scope of current efficiency efforts to include ISR collection activities.

To identify efficiencies in ISR capability development, we recommend that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff and the USD(I) to collaborate in developing decision support tool(s), such as the Joint Staff's decision support tool, and to establish implementation goals and timelines for completion of such efforts. To ensure that future versions of the ISR Integration Roadmap meet all of the elements of an integrated ISR roadmap identified in the National Defense Authorization Act for Fiscal Year 2004 as well as the 2008 House of Representatives Committee on Armed Services report, Congress should consider establishing additional accountability in legislation, such as conditioning a portion of ISR funding on completion of all congressionally directed management elements, including the development of an integrated ISR investment strategy. In commenting on a draft of this report, DOD concurred or partially concurred with all our recommendations and stated that there are ongoing activities to address our recommendations. DOD did not agree with the matter we raised for congressional consideration. DOD's comments are reprinted in their entirety in appendix II.
In addition, DOD provided technical comments, which we have incorporated into the report as appropriate. DOD partially concurred with our recommendation that the Secretary of Defense direct the USD(I) to collect and aggregate complete financial data to inform resource and investment decisions. In its written response, DOD stated that the USD(I) is working to collect, aggregate, and expand access to complete battlespace awareness portfolio financial data to include information on dual-use assets, urgent operational needs, and multiple-source funding through a variety of means and to extend its visibility over DOD's ISR enterprise. DOD described its process for receiving complete information regarding dual-use assets and urgent operational needs and discussed how it works to aggregate Military Intelligence Program data. DOD also stated that the USD(I) maintains visibility and access to programs of interest that are non–Military Intelligence Program–funded through access to DOD's Office of Cost Assessment and Program Evaluation's financial data warehouse. While increasing the USD(I)'s visibility into ISR programs is a positive step, we believe that formally aggregating complete intelligence-related financial data would give a better overall picture of DOD's current ISR spending and ensure that DOD considers its entire ISR enterprise when making future resource and investment decisions. DOD concurred with our recommendation that the Secretary of Defense direct the USD(I) to establish goals and timelines to ensure progress and accountability for implementing its defense intelligence enterprise architecture.
In its written comments, DOD described current efforts to develop tools—such as the Defense Intelligence Information Enterprise, the Distributed Common Ground System, and the Joint Intelligence Operations Center for Information Technology demonstration—that provide a common framework for some ISR activities and stated that goals and timelines for implementing these efforts will be displayed in the next ISR Integration Roadmap. However, DOD's comments did not address how the department plans to integrate these separate efforts into a defense intelligence architecture that would facilitate analysis of gaps and redundancies and inform future investments. Our recommendation that DOD establish goals and timelines for implementation was intended to improve management accountability for the completion of an integrated defense intelligence architecture, including clarifying how the tools it mentioned will contribute to the architecture, as well as planning how the department will use the architecture and tools to achieve efficiencies. We have revised the recommendation language to clarify its intent. DOD partially concurred with our recommendation that the Secretary of Defense direct the USD(I) to expand the scope of the current efficiency efforts to include ISR collection activities. In its written comments, DOD stated that the Secretary of Defense's current efficiency initiatives include an effort to identify, track, and determine the future disposition of multiple intelligence organizations that were established to provide ISR support to ongoing combat operations. We agree and acknowledge in the report that current efficiency initiatives are focused on organizations that conduct analysis.
The department also noted that the USD(I) is collaborating with the ODNI and the Under Secretary of Defense for Acquisition, Technology and Logistics to further ensure ISR collection investments are fully integrated in the acquisition processes of the department and intelligence community. While these efforts are positive, we maintain that formally expanding the scope of current efforts to include identification of efficiencies in ISR collection activities would help ensure that these efforts receive continued management priority. DOD also partially concurred with our recommendation that the Secretary of Defense direct the Chairman of the Joint Chiefs of Staff and the USD(I) to collaborate in developing decision support tool(s), such as the Joint Staff’s emergent decision support tool, and to establish implementation goals and timelines for completion of such efforts. DOD responded that it is exploring different portfolio management tools and will consider goals and timelines when the efficacy of such tools is verified. We agree that assessing options is an important part of developing the most effective and efficient decision support tool. However, DOD did not explain in its comments how it would consider the efficacy of the tools it plans to assess, or when it expects to choose and begin implementation of such a tool. Establishing goals and timelines for assessing the efficacy of decision support tools and taking actions to implement the selected tool could help ensure that these efforts will be fully implemented in a timely manner. DOD disagreed with our suggestion that Congress consider establishing additional accountability measures in legislation, such as conditioning funding, to encourage the department to address all the management elements Congress required in its 2004 legislation calling for an integrated ISR roadmap. 
In its written comments, DOD interpreted our matter as proposing the withholding of funds for ISR activities, and DOD stated that withholding funds for ISR would be counterproductive. However, we did not suggest withholding funding; rather we proposed that Congress consider using the conditioning of funding as a tool to provide an incentive for compliance with legislative requirements that have been in place since 2004—specifically, establishing fundamental ISR goals and an integrated ISR investment strategy. Since 2004, none of the ISR roadmap updates DOD has issued has fully addressed these congressionally required elements. We believe that given the substantial resources allocated to DOD’s ISR enterprise, completion of an integrated ISR roadmap that includes an investment strategy could help DOD and congressional decision makers ensure that DOD is effectively using its ISR resources. We are sending copies of this report to interested congressional committees, the Chairman of the Joint Chiefs of Staff, and the Secretary of Defense. This report will be available at no charge on GAO’s Web site, http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5431 or by e-mail at [email protected]. Contact information for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who have made major contributions to this report are listed in appendix III. To describe the challenges, if any, that the department is facing in managing costs, developing strategic plans, and identifying unnecessary fragmentation, overlap, and duplication for the intelligence, surveillance, and reconnaissance (ISR) enterprise, we reviewed and analyzed documents related to the enterprise and discussed the enterprise with cognizant Department of Defense (DOD) officials. 
To determine the full scope and cost of DOD's ISR enterprise, we assessed DOD's ISR funding and program elements reported in the Military Intelligence Program for Fiscal Years 2010, 2011, and 2012, conducted an analysis of DOD's ISR spending in the Future Years Defense Program, and discussed with DOD, military service, and intelligence agency officials their ISR funding and capabilities. Specifically, we interviewed cognizant Under Secretary of Defense for Intelligence (USD(I)) and military service officials to determine the content of DOD's ISR enterprise, the extent to which intelligence-related costs are tracked and visible, and the resourcing challenges inherent in a complex enterprise. To determine the extent to which DOD manages the scope and cost of the ISR enterprise, we compared information obtained in these interviews against criteria in DOD directives related to the Military Intelligence Program and the USD(I). We reached out to DOD's combat support agencies—including the Defense Intelligence Agency, the National Geospatial-Intelligence Agency, and the National Security Agency—as part of this effort and received high-level information regarding how they use their Military Intelligence Program funds. We also conducted a high-level discussion with the Office of the Director of National Intelligence related to processes used to identify duplication, overlap, and fragmentation within the National Intelligence Program. To evaluate to what extent DOD has identified and minimized the potential for unnecessary duplication, we assessed the progress of DOD efforts to identify unnecessary fragmentation and overlap and reviewed strategic guidance and directives for their relative emphasis and priority on unnecessary fragmentation, overlap, and duplication.
We assessed to what extent DOD was addressing fragmentation and duplication in strategy documents by reviewing key strategies such as the 2010 Quadrennial Defense Review and the Defense Intelligence Strategy. We also reviewed guidance related to DOD's recent efficiency initiatives, including memorandums and directives. We evaluated DOD's guidance to determine whether it incorporated best practices on measures of accountability needed to ensure specific initiatives are fully implemented. We also asked DOD, military service, and intelligence officials to provide examples of unnecessary duplication and any actions taken to resolve them. Finally, to assess the extent to which DOD's ISR Integration Roadmap addresses congressional requirements, two analysts independently evaluated the ISR Integration Roadmap against elements identified in the 2004 National Defense Authorization Act and the House report from the Committee on Armed Services that accompanied the 2009 National Defense Authorization Act. We determined that an element was addressed if the 2010 ISR Integration Roadmap contained that element; however, we did not assess the overall quality of the section(s) that addressed that element. We also compared the 2007 roadmap against these criteria to show any relative differences between the two roadmap versions. We conducted interviews with knowledgeable DOD, military service, and intelligence officials to obtain information on the process to prepare the 2010 Integration Roadmap and plans for future versions of the roadmap. In addressing all of these objectives, we received briefings on DOD's ISR enterprise and its initiatives to reduce fragmentation, overlap, and duplication in the enterprise, and we analyzed key documents related to these initiatives.
We interviewed and received presentations from the following commands and agencies about the ISR enterprise's scope, cost, strategic plans, and initiatives to reduce fragmentation, overlap, and duplication: the USD(I); the Joint Staff; the ISR Task Force; headquarters of the Army, Air Force, Navy, and Marine Corps; the Defense Intelligence Agency; the National Geospatial-Intelligence Agency; the National Security Agency; and the Office of the Director of National Intelligence. We conducted this performance audit from August 2010 through June 2011, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contact named above, Margaret Morgan, Assistant Director; Ashley Alley; Robert Breitbeil; Gina Flacco; David Keefer; Brian Mazanec; Gregory Marchand; Timothy Persons; Amie Steele; and Cheryl Weissman made key contributions to this report.
The success of intelligence, surveillance, and reconnaissance (ISR) systems in collecting, processing, and disseminating intelligence information has fueled demand for ISR support, and the Department of Defense (DOD) has significantly increased its investments in ISR capabilities since combat operations began in 2001. In fiscal year 2010, intelligence community spending, including for ISR, exceeded $80 billion. Section 21 of Public Law 111-139 mandated that GAO identify programs, agencies, offices, and initiatives with duplicative goals and activities. This report examines the extent to which: (1) DOD manages and oversees the full scope and cost of the ISR enterprise; (2) DOD has sought to identify and minimize the potential for any unnecessary duplication in program, planning, and operations for ISR; and (3) DOD's ISR Integration Roadmap addresses key congressionally directed management elements and guidance. The Under Secretary of Defense for Intelligence (USD[I]) has the authority to oversee DOD's ISR enterprise; however, the broad scope and complex funding arrangements of DOD's ISR enterprise make it difficult to manage and oversee. The ISR enterprise encompasses many different kinds of activities and capabilities spread across multiple agencies. As a result, ISR activities may be funded through any of several sources, including the Military Intelligence Program, the National Intelligence Program, overseas contingency operations funding, and military service funds. To manage DOD's large ISR enterprise, the USD(I) serves as DOD's senior intelligence official, responsible for providing strategic, budget, and policy oversight over DOD's ISR enterprise. However, the USD(I) does not have full visibility into several budget sources that fund DOD's ISR enterprise, such as national intelligence capabilities, dual-use assets, urgent operational needs, and military personnel expenses related to ISR.
The USD(I)'s inability to gain full visibility and clarity into all of DOD's ISR financial resources hinders efforts to develop an investment strategy for ISR and to achieve efficiencies. DOD has developed general guidance in directives and other documents emphasizing the need to identify efficiencies and eliminate duplication or redundancies in its capabilities, which provides a foundation for further action. In August 2010, the Secretary of Defense directed that the department begin a series of efficiency initiatives to reduce duplication, overhead, and excess. However, the scope of the review pertaining to ISR was limited to analysis activities and excluded activities associated with collecting ISR data--one of the largest areas of growth in ISR spending. Additionally, two ISR efficiency initiatives are in the early stages of development and do not have implementation goals and timelines. Without goals and timelines, it will be difficult to determine whether these initiatives will make progress in achieving efficiencies. The National Defense Authorization Act for Fiscal Year 2004 required DOD to develop a roadmap to guide the development and integration of DOD ISR capabilities over a 15-year period and report to Congress on the contents of the roadmap, such as goals and an investment strategy to prioritize resources. DOD responded to both of these requirements by issuing an ISR roadmap. GAO's review of DOD's 2007 and 2010 ISR roadmaps found that DOD has made progress in addressing the issues that Congress directed to be included, but the 2007 and 2010 roadmaps did not address certain management elements identified by Congress. In 2008, Congress restated the 2004 requirements and provided additional guidance to the USD(I). However, the 2010 roadmap still does not represent an integrated investment strategy across the department because it does not clearly address capability gaps or priorities across the enterprise and still lacks investment information. 
Until DOD develops an integrated ISR investment strategy, the defense and intelligence communities may continue to make independent decisions and use resources that are not necessarily based on strategic priorities. GAO recommends that DOD compile and aggregate complete ISR funding data, establish implementation goals and timelines for its efficiency efforts, and give priority to examining efficiency in ISR collection activities. DOD agreed or partially agreed with these GAO recommendations. GAO also suggests that Congress consider holding DOD accountable to address required elements of the ISR roadmap.
For several years we have reported that DOD faces a range of financial management and related business process challenges that are complex, long-standing, pervasive, and deeply rooted in virtually all business operations throughout the department. As the Comptroller General recently testified and as discussed in our latest financial audit report, DOD’s financial management deficiencies, taken together, continue to represent the single largest obstacle to achieving an unqualified opinion on the U.S. government’s consolidated financial statements. To date, none of the military services has passed the test of an independent financial audit because of pervasive weaknesses in internal control and processes and fundamentally flawed business systems. In identifying improved financial performance as one of its five governmentwide initiatives, the President’s Management Agenda recognized that obtaining a clean (unqualified) financial audit opinion is a basic prescription for any well-managed organization. At the same time, it recognized that without sound internal control and accurate and timely financial and performance information, it is not possible to accomplish the President’s agenda and secure the best performance and highest measure of accountability for the American people. The Joint Financial Management Improvement Program (JFMIP) principals have defined certain measures, in addition to receiving an unqualified financial statement audit opinion, for achieving financial management success. These additional measures include (1) being able to routinely provide timely, accurate, and useful financial and performance information, (2) having no material internal control weaknesses or material noncompliance with laws and regulations, and (3) meeting the requirements of the Federal Financial Management Improvement Act of 1996 (FFMIA). Unfortunately, DOD does not meet any of these conditions. 
For example, for fiscal year 2003, the DOD Inspector General (DOD IG) issued a disclaimer of opinion on DOD’s financial statements, citing 11 material weaknesses in internal control and noncompliance with FFMIA requirements. Recent audits and investigations by GAO and DOD auditors continue to confirm the existence of pervasive weaknesses in DOD’s financial management and related business processes and systems. These problems have (1) resulted in a lack of reliable information needed to make sound decisions and report on the status of DOD activities, including accountability of assets, through financial and other reports to Congress and DOD decision makers, (2) hindered its operational efficiency, (3) adversely affected mission performance, and (4) left the department vulnerable to fraud, waste, and abuse, as the following examples illustrate. Four hundred and fifty of the 481 mobilized Army National Guard soldiers from six GAO case study Special Forces and Military Police units had at least one pay problem associated with their mobilization. DOD’s inability to provide timely and accurate payments to these soldiers, many of whom risked their lives in recent Iraq or Afghanistan missions, distracted them from their missions, imposed financial hardships on the soldiers and their families, and has had a negative impact on retention. (GAO-04-89, Nov. 13, 2003) DOD incurred substantial logistical support problems as a result of weak distribution and accountability processes and controls over supplies and equipment shipments in support of Operation Iraqi Freedom activities, similar to those encountered during the prior gulf war. These weaknesses resulted in (1) supply shortages, (2) backlogs of materials delivered in theater but not delivered to the requesting activity, (3) a discrepancy of $1.2 billion between the amount of materiel shipped and that acknowledged by the activity as received, (4) cannibalization of vehicles, and (5) duplicate supply requisitions. 
(GAO-04-305R, Dec. 18, 2003) Inadequate asset visibility and accountability resulted in DOD selling new Joint Service Lightweight Integrated Suit Technology (JSLIST)—the current chemical and biological protective garment used by our military forces—on the internet for $3 each (coat and trousers) while at the same time buying them for over $200 each. DOD has acknowledged that these garments should have been restricted to DOD use only and therefore should not have been available to the public. (GAO-02-873T, June 25, 2002) Inadequate asset accountability also resulted in DOD’s inability to locate and remove over 250,000 defective Battle Dress Overgarments (BDOs)—the predecessor of JSLIST—from its inventory. Subsequently, we found that DOD had sold many of these defective suits to the public, including 379 that we purchased in an undercover operation. In addition, DOD may have issued over 4,700 of the defective BDO suits to local law enforcement agencies. Although local law enforcement agencies are most likely to be the first responders to a terrorist attack, DOD failed to inform these agencies that using these BDO suits could result in death or serious injury. (GAO-04-15NI, Nov. 19, 2003) Tens of millions of dollars are not being collected each year by military treatment facilities from third-party insurers because key information required to effectively bill and collect from third-party insurers is often not properly collected, recorded, or used by the military treatment facilities. (GAO-04-322R, Feb. 20, 2004) Our analysis of data on more than 50,000 maintenance work orders opened during the deployments of six battle groups indicated that about 29,000 orders (58 percent) could not be completed because the needed repair parts were not available on board ship. This condition was a result of inaccurate ship configuration records and incomplete, outdated, or erroneous historical parts demand data. 
Such problems not only have a detrimental impact on mission readiness but may also increase operational costs due to delays in repairing equipment and holding unneeded spare parts inventory. (GAO-03-887, Aug. 29, 2003) DOD sold excess biological laboratory equipment, including a biological safety cabinet, a bacteriological incubator, a centrifuge, and other items that could be used to produce biological warfare agents. Using a fictitious company and fictitious individual identities, we were able to purchase a large number of new and usable equipment items over the Internet from DOD. Although the production of biological warfare agents requires a high degree of expertise, the ease with which these items were obtained through public sales increases the risk that terrorists could obtain and use them to produce biological agents that could be used against the United States. (GAO-04-81TNI, Oct. 7, 2003) Based on statistical sampling, we estimated that 72 percent of the over 68,000 premium class airline tickets DOD purchased for fiscal years 2001 and 2002 were not properly authorized and that 73 percent were not properly justified. During fiscal years 2001 and 2002, DOD spent almost $124 million on premium class tickets that included at least one leg in premium class—usually business class. Because each premium class ticket cost the government up to thousands of dollars more than a coach class ticket, unauthorized premium class travel resulted in millions of dollars of unnecessary costs being incurred annually. (GAO-04-229T, Nov. 6, 2003) Some DOD contractors have been abusing the federal tax system with little or no consequence, and DOD is not collecting as much in unpaid taxes as it could. Under the Debt Collection Improvement Act of 1996, DOD is responsible—working with the Treasury Department—for offsetting payments made to contractors to collect funds owed, such as unpaid federal taxes. 
However, we found that DOD had collected only $687,000 of unpaid taxes as of September 2003. We estimated that at least $100 million could be collected annually from DOD contractors through effective implementation of levy and debt collection programs. (GAO-04-95, Feb. 12, 2004) Our review of fiscal year 2002 data revealed that about $1 of every $4 in contract payment transactions in DOD’s Mechanization of Contract Administration Services (MOCAS) system was for adjustments to previously recorded payments—$49 billion of adjustments out of $198 billion in disbursement, collection, and adjustment transactions. According to DOD, the cost of researching and making adjustments to accounting records was about $34 million in fiscal year 2002, primarily to pay hundreds of DOD and contractor staff. (GAO-03-727, Aug. 8, 2003) DOD’s information technology (IT) budget submission to Congress for fiscal year 2004 contained material inconsistencies, inaccuracies, or omissions that limited its reliability. For example, we identified discrepancies totaling about $1.6 billion between two primary parts of the submission—the IT budget summary report and the detailed Capital Investments Reports on each IT initiative. These problems were largely attributable to insufficient management attention and limitations in departmental policies and procedures, such as guidance in DOD’s Financial Management Regulation, and to shortcomings in systems that support budget-related activities. (GAO-04-115, Dec. 19, 2003) Since the mid-1980s, we have reported that DOD uses overly optimistic planning assumptions to estimate its annual budget request. These same assumptions are reflected in its Future Years Defense Program, which reports projected spending for the current budget year and at least 4 succeeding years. In addition, in February 2004 the Congressional Budget Office projected that DOD’s demand for resources could grow to about $490 billion in fiscal year 2009. 
DOD’s own estimate for that same year was only $439 billion. As a result of DOD’s continuing use of optimistic assumptions, DOD has too many programs for the available dollars, which often leads to program instability, costly program stretch-outs, and program termination. Over the past few years, the mismatch between programs and budgets has continued, particularly in the area of weapons systems acquisition. For example, in January 2003, we reported that the estimated costs of developing eight major weapons systems had increased from about $47 billion in fiscal year 1998 to about $72 billion by fiscal year 2003. (GAO-03-98, January 2003) These examples clearly demonstrate not only the severity of DOD’s current problems, but also the importance of business systems modernization as a critical element in the department’s transformation efforts to improve the economy, efficiency, and effectiveness of its operations, and to provide for transparency and accountability to Congress and American taxpayers. Since May 1997, we have highlighted in various testimonies and reports what we believe are the underlying causes of the department’s inability to resolve its long-standing financial management and related business management weaknesses and fundamentally reform its business operations. We found that one or more of these causes were contributing factors to the financial management and related business process weaknesses we just described. 
Over the years, the department has undertaken many initiatives intended to transform its business operations departmentwide and improve the reliability of information for decision making and reporting but has not had much success because it has not addressed the following four underlying causes: a lack of sustained top-level leadership and management accountability; deeply embedded cultural resistance to change, including military service parochialism and stovepiped operations; a lack of results-oriented goals and performance measures and monitoring; and inadequate incentives and accountability mechanisms relating to business transformation efforts. If not properly addressed, these root causes will likely result in the failure of current DOD initiatives. DOD has not routinely assigned accountability for performance to specific organizations or individuals who have sufficient authority to accomplish desired goals. For example, under the Chief Financial Officers Act of 1990, it is the responsibility of the agency Chief Financial Officer (CFO) to establish the mission and vision for the agency’s future financial management and to direct, manage, and provide oversight of financial management operations. However, at DOD, the Comptroller—who is by statute the department’s CFO—has direct responsibility for only an estimated 20 percent of the data relied on to carry out the department’s financial management operations. The other 80 percent comes from DOD’s other business operations and is under the control and authority of other DOD officials. In addition, DOD’s past experience has suggested that top management has not had a proactive, consistent, and continuing role in integrating daily operations for achieving business transformation-related performance goals. 
It is imperative that major improvement initiatives have the direct, active support and involvement of the Secretary and Deputy Secretary of Defense to ensure that daily activities throughout the department remain focused on achieving shared, agencywide outcomes and success. While current DOD leaders, such as the Secretary, Deputy Secretary, and Comptroller, have certainly demonstrated their commitment to reforming the department, the magnitude and nature of day-to-day demands placed on these leaders following the events of September 11, 2001, clearly affect the level of oversight and involvement in business transformation efforts that these leaders can sustain. Given the importance of DOD’s business transformation effort, it is imperative that it receive the sustained leadership needed to improve the economy, efficiency, and effectiveness of DOD’s business operations. Based on our surveys of best practices of world-class organizations, strong executive CFO and Chief Information Officer (CIO) leadership and centralized control over systems investments are essential to (1) making financial management an entitywide priority, (2) providing meaningful information to decision makers, (3) building a team of people that delivers results, and (4) effectively leveraging technology to achieve stated goals and objectives. Cultural resistance to change, military service parochialism, and stovepiped operations have all contributed significantly to the failure of previous attempts to implement broad-based management reforms at DOD. The department has acknowledged that it confronts decades-old problems deeply grounded in the bureaucratic history and operating practices of a complex, multifaceted organization. Recent audits reveal that DOD has made only small inroads in addressing these challenges. 
For example, the Bob Stump National Defense Authorization Act for Fiscal Year 2003 requires the DOD Comptroller to determine that each financial system improvement meets the specific conditions called for in the act before DOD obligates funds in amounts exceeding $1 million. However, we found that most system improvement efforts involving obligations over $1 million were not reviewed by the DOD Comptroller for the purpose of making that determination and that DOD continued to lack a mechanism for proactively identifying system improvement initiatives. We asked for, but DOD did not provide, comprehensive data for obligations in excess of $1 million for business system modernization. Based on a comparison of the limited information available for fiscal years 2003 and 2004, we identified $479 million in reported obligations by the military services that were not submitted to the DOD Comptroller for review. In addition, in September 2003, we reported that DOD continued to use a stovepiped approach to develop and fund its business system investments. Specifically, we found that DOD components receive and control funding for business systems investments without being subject to the scrutiny of the DOD Comptroller. DOD’s ability to address its current “business-as-usual” approach to business system investments is further hampered by its lack of (1) a complete inventory of business systems (a condition we first highlighted in 1998), (2) a standard definition of what constitutes a business system, (3) a well-defined enterprise architecture, and (4) an effective approach for the control and accountability over business system investments. Until DOD develops and implements an effective strategy for overcoming resistance, parochialism, and stovepiped operations, its transformation efforts will not be successful. A key element of any major program is its ability to establish clearly defined goals and performance measures to monitor and report its progress to management. 
However, DOD has not yet established measurable, results-oriented goals to evaluate BMMP’s cost, schedule, and performance outcomes and results, or explicitly defined performance measures to evaluate the architecture quality, content, and utility of subsequent major updates to its initial business enterprise architecture (BEA). For example, in our September 2003 report, we stated that DOD had not defined specific plans outlining how it intends to extend and evolve the initial BEA to include the missing scope and details that we identified. Instead, DOD’s primary BEA goal was to complete as much of the architecture as it could within a set period of time. According to DOD, it intends to refine the initial BEA through at least six different major updates of its architecture between February 2004 and the second quarter of 2005. However, it remains unclear what these major updates will individually or collectively provide and how they contribute to achieving DOD’s goals. In its March 15, 2004, progress report to defense congressional committees on the status of BMMP’s business transformation efforts, DOD reported that it plans to establish an initial approved program baseline to evaluate the cost, schedule, and performance of the BMMP. Given that DOD has reported disbursements of $111 million since development efforts began in fiscal year 2002, it is critical that it establish meaningful, tangible, and measurable program goals and objectives—short-term and long-term. Until DOD develops and implements clearly defined results-oriented goals for the overall program, including the architecture content of each major update of its architecture, the department will continue to lack a clear measure of the BMMP’s progress in transforming the department’s business operations and in providing the Congress reasonable assurance that funds are being directed towards resolving the department’s long-standing business operational problems. 
The final underlying cause of the department’s long-standing inability to carry out needed fundamental reform has been the lack of incentives for making more than incremental change to existing “business-as-usual” operations, systems, and organizational structures. Traditionally, DOD has focused on justifying its need for more funding rather than on the outcomes its programs have produced. DOD has historically measured its performance by resource components such as the amount of money spent, people employed, or number of tasks completed. Incentives for its decision makers to implement changed behavior have been minimal or nonexistent. The lack of incentive to change is evident in the business systems modernization area. Despite DOD’s acknowledgement that many of its systems are error-prone, duplicative, and stovepiped, DOD continues to allow its component organizations to make their own investments independently of one another and implement different system solutions to solve the same business problems. These stovepiped decision-making processes have contributed to the department’s current complex, error-prone environment. The DOD Comptroller recently testified that DOD’s actual systems inventory could be twice the number of systems the department currently recognizes in its systems inventory. In March 2003, we reported that ineffective program management and oversight, as well as a lack of accountability, resulted in DOD continuing to invest hundreds of millions of dollars in system modernization efforts without any assurance that the projects will produce operational improvements commensurate with the amount invested. For example, the estimated cost of one of the business system investment projects that we reviewed increased by as much as $274 million, while its schedule slipped by almost 4 years. After spending $126 million, DOD terminated that project in December 2002, citing poor performance and increasing costs. 
GAO and the DOD IG have identified numerous business system modernization efforts that are not economically justified on the basis of cost, benefits, and risk; take years longer than planned; and fall short of delivering planned or needed capabilities. Despite this track record, DOD continues to increase spending on business systems while at the same time it lacks the effective management and oversight needed to achieve real results. Without appropriate incentives to improve their project management, ongoing oversight, and adequate accountability mechanisms, DOD components will continue to develop duplicative and nonintegrated systems that are inconsistent with the Secretary’s vision for reform. To effect real change, actions are needed to (1) break down parochialism and reward behaviors that meet DOD-wide goals, (2) develop incentives that motivate decision makers to initiate and implement efforts that are consistent with better program outcomes, including saying “no” or pulling the plug early on a system or program that is failing, and (3) facilitate a congressional focus on results-oriented management, particularly with respect to resource allocation decisions. As we have previously reported, and as the success of the more narrowly defined DOD initiatives discussed later illustrates, the following key elements collectively will enable the department to effectively address the underlying causes of its inability to resolve its long-standing financial and business management problems. 
These elements are addressing the department’s financial management and related business operational challenges as part of a comprehensive, integrated, DOD-wide strategic plan for business reform; providing for sustained and committed leadership by top management, including but not limited to the Secretary of Defense; establishing resource control over business systems investments; establishing clear lines of responsibility, authority, and accountability; incorporating results-oriented performance measures and monitoring progress tied to key financial and business transformation objectives; providing appropriate incentives or consequences for action or inaction; establishing an enterprise architecture to guide and direct business systems modernization investments; and ensuring effective oversight and monitoring. These elements, which should not be viewed as independent actions but rather as a set of interrelated and interdependent actions, are reflected in the recommendations we have made to DOD and are consistent with those actions discussed in the department’s April 2001 financial management transformation report. The degree to which DOD incorporates them into its current reform efforts—both long and short term—will be a deciding factor in whether these efforts are successful. Thus far, the department’s progress in implementing our recommendations has been slow. Over the years, we have given DOD credit for beginning numerous initiatives intended to improve its business operations. Unfortunately, most of these initiatives failed to achieve their intended objective in part, we believe, because they failed to incorporate key elements that in our experience are critical to successful reform. Today, we would like to discuss one very important broad-based initiative DOD currently has underway, the BMMP, which, if properly developed and implemented, will result in significant improvements in DOD’s business operations. 
Within the next few months we intend to issue a report on the status of DOD’s efforts to refine and implement its enterprise architecture and the results of our review of two ongoing DOD system initiatives. In addition to the BMMP, DOD has undertaken several interim initiatives in recent years that have resulted in tangible, although limited, improvements. We believe that these tangible improvements were possible because DOD has accepted our recommendations and incorporated many of the key elements critical for reform. Furthermore, we would like to offer two suggestions for legislative consideration that we believe could significantly increase the likelihood of a successful business transformation effort at DOD. The BMMP, which the department established in July 2001 following our recommendation that DOD develop and implement an enterprise architecture, is vital to the department’s efforts to transform its business operations. The purpose of the BMMP is to oversee development and implementation of a departmentwide BEA, transition plan, and related efforts to ensure that DOD business system investments are consistent with the architecture. A well-defined and properly implemented BEA can provide assurance that the department invests in integrated enterprisewide business solutions and, conversely, can help move resources away from nonintegrated business system development efforts. As we reported in July 2003, DOD had developed an initial version of its departmentwide architecture for modernizing its current financial and business operations and systems and had expended tremendous effort and resources in doing so. However, substantial work remains before the architecture will be sufficiently detailed and the means for implementing it will be adequately established to begin to have a tangible impact on improving DOD’s overall business operations. 
We cannot overemphasize the degree of difficulty DOD faces in developing and implementing a well-defined architecture to provide the foundation that will guide its overall business transformation effort. On the positive side, during its initial efforts to develop the architecture, the department established some of the architecture management capabilities advocated by best practices and federal guidance, such as establishing a program office, designating a chief architect, and using an architecture development methodology and automated tool. Further, DOD’s initial version of its business enterprise architecture provided a foundation on which to build and ultimately produce a well-defined business enterprise architecture. For example, in September 2003, we reported that the “To Be” descriptions address, to at least some degree, how DOD intends to operate in the future, what information will be needed to support these future operations, and what technology standards should govern the design of future systems. While some progress has been made, DOD has not yet taken important steps that are critical to its ability to successfully use the enterprise architecture to drive reform throughout the department’s overall business operations. For example, DOD has not yet defined and implemented the following. Detailed plans to extend and evolve its initial architecture to include the missing scope and detail required by the Bob Stump National Defense Authorization Act for Fiscal Year 2003 and other relevant architectural requirements. Specifically, (1) the initial version of the BEA excluded some relevant external requirements, such as requirements for recording revenue, and lacked or provided little descriptive content pertaining to its “As Is” and “To Be” environments and (2) DOD had not yet developed the transition plan needed to provide a temporal road map for moving from the “As Is” to the “To Be” environment. 
An effective approach to select and control business system investments for obligations exceeding $1 million. As we previously stated, and it bears repeating here, DOD components currently receive direct funding for their business systems and continue to make their own parochial decisions regarding those investments without having received the scrutiny of the DOD Comptroller as required by the Bob Stump National Defense Authorization Act for Fiscal Year 2003. Later, we will offer a suggestion for improving the management and oversight of the billions of dollars DOD invests annually in business systems. DOD invests billions of dollars annually to operate, maintain, and modernize its business systems. For fiscal year 2004, the department requested approximately $28 billion in IT funding to support a wide range of military operations as well as DOD business systems operations, of which approximately $18.8 billion—$5.8 billion for business systems and $13 billion for business systems infrastructure—relates to the operation, maintenance, and modernization of the department’s reported thousands of business systems. The $18.8 billion is spread across the military services and defense agencies, with each receiving its own funding for IT investments. However, as we reported, DOD lacked an efficient and effective process for managing, developing, and implementing its business systems. These long-standing problems continue despite the significant investments in business systems by DOD components each year. For example, in March 2003 we reported that DOD’s oversight of four Defense Finance and Accounting Service (DFAS) projects we reviewed had been ineffective. Investment management responsibility for the four projects rested with DFAS, the DOD Comptroller, and the DOD CIO. 
In discharging this responsibility, each had allowed project investments to continue year after year, even though the projects had been marked by cost increases, schedule slippages, and capability changes. As a result, DOD had invested approximately $316 million in four DFAS system modernization projects without demonstrating that this substantial investment would markedly improve DOD financial management information for decision making and financial reporting purposes. Specifically, we found that the four DFAS projects we reviewed lacked an approved economic analysis that reflected the fact that expected project costs had increased, while in some cases the benefits had decreased. For instance, as we previously stated, the estimated cost of one project—referred to as the Defense Procurement Payment System (DPPS)—had increased by as much as $274 million, while its schedule slipped by almost 4 years. Such project analyses provide the requisite justification for decision makers to use in determining whether to invest additional resources in anticipation of receiving commensurate benefits and mission value. For each of the four projects we reviewed, we found that DOD oversight entities—DFAS, the DOD Comptroller, and the DOD CIO—did not question the impact of the cost increases and schedule delays, and allowed the projects to proceed in the absence of the requisite analytical justification. Furthermore, in one case, they allowed a project estimated to cost $270 million, referred to as the DFAS Corporate Database/DFAS Corporate Warehouse (DCD/DCW), to proceed without an economic analysis. In another case, they allowed DPPS to continue despite known concerns about the validity of the project's economic analysis. DOD subsequently terminated two—DPPS and the Defense Standard Disbursing System (DSDS)—of the four DFAS system modernization projects reviewed. 
As we previously mentioned, DPPS was terminated due to poor program performance and increasing costs after 7 years of effort and an investment of over $126 million. DFAS terminated DSDS after approximately 7 years of effort and an investment of about $53 million, noting that a valid business case for continuing the effort could not be made. These two terminated projects were planned to provide DOD the capability to address some of DOD's long-standing contract and vendor payment problems. In addition to project management issues that continue to result in systems that do not perform as expected and cost more than planned, we found that DOD continues to lack a complete and reliable inventory of its current systems. In September 2003, we reported that DOD had created a repository of information about its existing systems inventory of approximately 2,300 business systems (up from 1,731 in October 2002) as part of its ongoing business systems modernization program, and consistent with our past recommendation. Due to its lack of visibility over systems departmentwide, DOD had to rely upon data calls to obtain its information. Unfortunately, due to its lack of an effective methodology and process for identifying business systems, including a clear definition of what constitutes a business system, DOD continues to lack assurance that its systems inventory is reliable and complete. In fact, the DOD Comptroller testified last week before the Senate Armed Services Subcommittee on Readiness and Management Support that the size of DOD's actual systems inventory could be twice the size currently reported. This lack of visibility over current business systems in use throughout the department hinders DOD's ability to identify and eliminate duplicate and nonintegrated systems and transition to its planned systems environment in an efficient and effective manner. 
Of the 2,274 business systems recorded in DOD’s systems inventory repository, the department reportedly has 665 systems to support human resource management, 565 systems to support logistical functions, 542 systems to perform finance and accounting functions, and 210 systems to support strategic planning and budget formulation. Table 1, which presents the composition of DOD business systems by functional area, reveals the numerous and redundant systems operating in the department today. As we have previously reported, these numerous systems have evolved into the overly complex and error-prone operation that exists today, including (1) little standardization across DOD components, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, (4) manual data entry into multiple systems, and (5) a large number of data translations and interfaces that combine to exacerbate problems with data integrity. The department has recognized the uncontrolled proliferation of systems and the need to eliminate as many systems as possible and integrate and standardize those that remain. In fact, the two terminated DFAS projects were intended to reduce the number of systems or eliminate a portion of different systems that perform the same function. For example, DPPS was intended to consolidate eight contract and vendor pay systems and DSDS was intended to eliminate four different disbursing systems. 
Until DOD completes its efforts to refine and implement its enterprise architecture and transition plan, and develop and implement an effective approach for selecting and controlling business system investments, DOD will continue to lack (1) a comprehensive and integrated strategy to guide its business process and system changes, and (2) results-oriented measures to monitor and measure progress, including whether system development and modernization investment projects adequately incorporate leading practices used by the private sector and federal requirements and achieve performance and efficiency commensurate with the cost. These elements are critical to the success of DOD’s BMMP. Developing and implementing a BEA for an organization as large and complex as DOD is a formidable challenge, but it is critical to effecting the change required to achieve the Secretary’s vision of relevant, reliable, and timely financial and other management information to support the department’s vast operations. As mandated, we plan to continue to report on DOD’s progress in developing the next version of its architecture, developing its transition plan, validating its “As Is” systems inventory, and controlling its system investments. Since DOD’s overall business process transformation is a long-term effort, in the interim it is important for the department to focus on improvements that can be made using, or requiring only minor changes to, existing automated systems and processes. As demonstrated by the examples we will highlight in this testimony, leadership, real incentives, accountability, and oversight and monitoring—key elements to successful reform—have brought about improvements in some DOD operations, such as more timely commercial payments, reduced payment recording errors, and significant reductions in individually billed travel card delinquency rates. 
To help achieve the department's goal of improved financial information, the DOD Comptroller has developed a Financial Management Balanced Scorecard that is intended to align the financial community's strategy, goals, objectives, and related performance measures with the departmentwide risk management framework established as part of DOD's Quadrennial Defense Review, and with the President's Management Agenda. To effectively implement the balanced scorecard, the Comptroller is planning to cascade the performance measures down to the military services and defense agency financial communities, along with certain specific reporting requirements. DOD has also developed a Web site where implementation information and monthly indicator updates will be made available for the financial communities' review. At the departmentwide level, certain financial metrics will be selected, consolidated, and reported to the top levels of DOD management for evaluation and comparison. These "dashboard" metrics are intended to provide key decision makers, including Congress, with critical performance information at a glance, in a consistent and easily understandable format. DFAS has been reporting the metrics cited below for several years; under the leadership of DFAS' Director and DOD's Comptroller, these metrics have shown improvements, including the following. From April 2001 to January 2004, DOD reduced its commercial pay backlogs (payment delinquencies) by 55 percent. From March 2001 to December 2003, DOD reduced its payment recording errors by 33 percent. The delinquency rate for individually billed travel cards dropped from 18.4 percent in January 2001 to 10.7 percent in January 2004. Using DFAS' metrics, management can quickly see when and where problems are arising and can focus additional attention on those areas. 
While these metrics show significant improvements from 2001 to today, statistics for the last few months show that progress has slowed or even taken a few steps backward for payment recording errors and commercial pay backlogs. Our report last year on DOD’s metrics program included a caution that, without modern integrated systems and the streamlined processes they engender, reported progress may not be sustainable if workload is increased. Since we reported problems with DOD’s purchase card program, DOD and the military services have taken actions to address all of our 109 recommendations. In addition, we found that DOD and the military services took action to improve the purchase card program consistent with the requirements of the Bob Stump National Defense Authorization Act for Fiscal Year 2003 and the DOD Appropriations Act for Fiscal Year 2003. Specifically, we found that DOD and the military services had done the following. Substantially reduced the number of purchase cards issued. According to GSA records, DOD had reduced the total number of purchase cards from about 239,000 in March 2001 to about 134,609 in January 2004. These reductions have the potential to significantly improve the management of this program. Issued policy guidance to field activities to (1) perform periodic reviews of all purchase card accounts to reestablish a continuing bona fide need for each card account, (2) cancel accounts that were no longer needed, and (3) devise additional controls over infrequently used accounts to protect the government from potential cardholder or outside fraudulent use. Issued disciplinary guidelines, separately, for civilian and military employees who engage in improper, fraudulent, abusive, or negligent use of a government charge card. 
In addition, to monitor the purchase card program, the DOD IG and the Navy have prototyped and are now expanding a data-mining capability to screen for and identify high-risk transactions (such as potentially fraudulent, improper, and abusive use of purchase cards) for subsequent investigation. On June 27, 2003, the DOD IG issued a report summarizing the results of an in-depth review of purchase card transactions made by 1,357 purchase cardholders. The report identified 182 cardholders who potentially used their purchase cards inappropriately or fraudulently. We believe that consistent oversight played a major role in bringing about these improvements in DOD's purchase and travel card programs. During 2001, 2002, and 2003, seven separate congressional hearings were held on the Army and Navy purchase and individually billed travel card programs. Numerous legislative initiatives aimed at improving DOD's management and oversight of these programs also had a positive impact. Another important initiative underway at the department pertains to financial reporting. Under the leadership of the DOD Comptroller, the department is working to instill discipline into its financial reporting processes to improve the reliability of the department's financial data. Resolution of serious financial management and related business management weaknesses is essential to achieving any opinion on the DOD consolidated financial statements. Pursuant to the requirements in section 1008 of the National Defense Authorization Act for Fiscal Year 2002, DOD has reported for the past 3 years on the reliability of the department's financial statements, concluding that the department is not able to provide adequate evidence supporting material amounts in its financial statements. 
Specifically, DOD stated that it was unable to comply with applicable financial reporting requirements for (1) property, plant, and equipment, (2) inventory and operating materials and supplies, (3) environmental liabilities, (4) intragovernmental eliminations and related accounting entries, (5) disbursement activity, and (6) cost accounting by responsibility segment. Although DOD represented that the military retirement health care liability data had improved for fiscal year 2003, the cost of direct health care provided by DOD-managed military treatment facilities was a significant amount of DOD's total recorded health care liability and was based on estimates for which adequate support was not available. DOD has indicated that by acknowledging its inability to produce reliable financial statements, as required by the act, the department saves approximately $23 million a year through a reduction in the level of resources needed to prepare and audit financial statements. However, DOD has set the goal of obtaining a favorable opinion on its fiscal year 2007 departmentwide financial statements. To this end, DOD components and agencies have been tasked with addressing material line item deficiencies in conjunction with the BMMP. This is an ambitious goal, and we have been requested by Congress to review the feasibility and cost effectiveness of DOD's plans for obtaining such an opinion within the stated time frame. To instill discipline in its financial reporting process, the DOD Comptroller requires DOD's major components to prepare quarterly financial statements along with extensive footnotes that explain any improper balances or significant variances from previous year quarterly statements. All of the statements and footnotes are analyzed by Comptroller office staff and reviewed by the Comptroller. 
In addition, the midyear and end-of-year financial statements must be briefed to the DOD Comptroller by the military service Assistant Secretary for Financial Management or the head of the defense agency. We have observed several of these briefings and have noted that the practice of preparing and explaining interim financial statements has led to the discovery and correction of numerous recording and reporting errors. If DOD continues to provide for active leadership, along with appropriate incentives and accountability mechanisms, improvements will continue to occur in its programs and initiatives. We would like to offer two suggestions for legislative consideration that we believe could contribute significantly to the department's ability not only to address the impediments to DOD success but also to incorporate needed key elements to successful reform. These suggestions would include the creation of a chief management official and the centralization of responsibility and authority for business system investment decisions with the domain leaders responsible for the department's various business areas, such as logistics and human resource management. Previous failed attempts to improve DOD's business operations illustrate the need for sustained involvement of DOD leadership in helping to assure that DOD's financial and overall business process transformation efforts remain a priority. While the Secretary and other key DOD leaders have certainly demonstrated their commitment to the current business transformation efforts, the long-term nature of these efforts requires the development of an executive position capable of providing strong and sustained executive leadership over a number of years and various administrations. The day-to-day demands placed on the Secretary, the Deputy Secretary, and others make it difficult for these leaders to maintain the oversight, focus, and momentum needed to resolve the weaknesses in DOD's overall business operations. 
This is particularly evident given the demands that the Iraq and Afghanistan postwar reconstruction activities and the continuing war on terrorism have placed on current leaders. Likewise, the breadth and complexity of the problems preclude the Under Secretaries, such as the DOD Comptroller, from asserting the necessary authority over selected players and business areas. While sound strategic planning is the foundation upon which to build, sustained leadership is needed to maintain the continuity needed for success. One way to ensure sustained leadership over DOD’s business transformation efforts would be to create a full-time executive level II position for a chief management official who would serve as the Principal Under Secretary of Defense for Management. This position would provide the sustained attention essential for addressing key stewardship responsibilities such as strategic planning, performance and financial management, and business systems modernization in an integrated manner, while also facilitating the overall business transformation operations within DOD. This position could be filled by an individual, appointed by the President and confirmed by the Senate, for a set term of 7 years with the potential for reappointment. Such an individual should have a proven track record as a business process change agent in large, complex, and diverse organizations—experience necessary to spearhead business process transformation across the department, and potentially administrations, and serve as an integrator for the needed business transformation efforts. In addition, this individual would enter into an annual performance agreement with the Secretary that sets forth measurable individual goals linked to overall organizational goals in connection with the department’s overall business transformation efforts. Measurable progress towards achieving agreed upon goals would be a basis for determining the level of compensation earned, including any related bonus. 
In addition, this individual's achievements and compensation would be reported to Congress each year. We have made numerous recommendations to DOD intended to improve the management oversight and control of its business systems investments. However, as previously mentioned, progress in achieving this control has been slow and, as a result, DOD has little or no assurance that funds for current business systems investments are being spent in an economically efficient and effective manner. DOD's current systems funding process has contributed to the evolution of an overly complex and error-prone information technology environment containing duplicative, nonintegrated, and stovepiped systems. Given that DOD plans to spend approximately $19 billion on business systems and related infrastructure for fiscal year 2004—including an estimated $5 billion in modernization money—it is critical that actions be taken to gain more effective control over such business systems funding. The second suggestion we have for legislative action to address this issue, consistent with our open recommendations to DOD, is to establish specific management oversight, accountability, and control of funding with the "owners" of the various functional areas or domains. This legislation would define the scope of the various business areas (e.g., acquisition, logistics, finance and accounting) and establish functional responsibility for management of the portfolio of business systems in that area with the relevant Under Secretary of Defense for the six departmental domains and the CIO for the Enterprise Information Environment Mission (information technology infrastructure). For example, planning, development, acquisition, and oversight of DOD's portfolio of logistics business systems would be vested in the Under Secretary of Defense for Acquisition, Technology, and Logistics. 
We believe it is critical that funds for DOD business systems be appropriated to the domain owners in order to provide for accountability, transparency, and the ability to prevent the continued parochial approach to systems investment that exists today. The domains would establish a hierarchy of investment review boards with DOD-wide representation, including the military services and Defense agencies. These boards would be responsible for reviewing and approving investments to develop, operate, maintain, and modernize business systems for the domain portfolio, including ensuring that investments were consistent with DOD’s BEA. All domain owners would be responsible for coordinating their business systems investments with the chief management official who would chair the Defense Business Systems Modernization Executive Committee and provide a cross-domain perspective. Domain leaders would also be required to report to Congress through the chief management official and the Secretary of Defense on applicable business systems that are not compliant with review requirements and to include a summary justification for noncompliance. As seen again in Iraq, the excellence of our military forces is unparalleled. However, that excellence is often achieved in the face of enormous challenges in DOD’s financial management and other business areas, which have serious and far-reaching implications related to the department’s operations and critical national defense mission. Our recent work has shown that DOD’s long-standing financial management and business problems have resulted in fundamental operational problems, such as failure to properly pay mobilized Army Guard soldiers and the inability to provide adequate accountability and control over supplies and equipment shipments in support of Operation Iraqi Freedom. 
Further, the lack of adequate transparency and appropriate accountability across all business areas has resulted in certain fraud, waste, and abuse and hinders DOD's attempts to develop world-class operations and activities to support its forces. As our nation continues to be challenged with growing budget deficits and increasing pressure to reduce spending levels, every dollar that DOD can save through improved economy and efficiency of its operations is important. DOD's senior leaders have demonstrated a commitment to transforming the department and improving its business operations and have taken positive steps to begin this effort. We believe that implementation of our open recommendations and our suggested legislative initiatives would greatly improve the likelihood of meaningful, broad-based reform at DOD. The continued involvement and monitoring by congressional committees will also be critical to ensure that DOD's initial transformation actions are sustained and extended and that the department achieves its goal of securing the best performance and highest measure of accountability for the American people. We commend the Subcommittee for holding this hearing and we encourage you to use this vehicle, on at least an annual basis, as a catalyst for long overdue business transformation at DOD. Mr. Chairman, this concludes our statement. We would be pleased to answer any questions you or other members of the Subcommittee may have at this time. For further information about this testimony, please contact Gregory D. Kutz at (202) 512-9095 or [email protected], Randolph Hite at (202) 512-3439 or [email protected], or Evelyn Logue at (202) 512-3881. 
Other key contributors to this testimony include Bea Alff, Meg Best, Molly Boyle, Art Brouk, Cherry Clipper, Mary Ellen Chervenic, Francine Delvecchio, Abe Dymond, Eric Essig, Gayle Fischer, Geoff Frank, John Kelly, Patricia Lentini, Elizabeth Mead, Mai Nguyen, Greg Pugnetti, Cary Russell, John Ryan, Darby Smith, Carolyn Voltz, Marilyn Wasleski, and Jenniffer Wilson. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
GAO has issued several reports pertaining to the Department of Defense's (DOD) architecture and systems modernization efforts which revealed that many of the underlying conditions that contributed to the failure of prior DOD efforts to improve its business systems remain fundamentally unchanged. The Subcommittee on Terrorism, Unconventional Threats and Capabilities, House Committee on Armed Services, asked GAO to provide its perspectives on (1) the impact long-standing financial and related business weaknesses continue to have on DOD, (2) the underlying causes of DOD business transformation challenges, and (3) DOD business transformation efforts. In addition, GAO reiterates the key elements to successful reform: (1) an integrated business transformation strategy, (2) sustained leadership and resource control, (3) clear lines of responsibility and accountability, (4) results-oriented performance, (5) appropriate incentives and consequences, (6) an enterprise architecture to guide reform efforts, and (7) effective monitoring and oversight. GAO also offers two suggestions for legislative consideration that are intended to improve the likelihood of meaningful, broad-based financial management and related business reform at DOD. DOD's senior civilian and military leaders are committed to transforming the department and improving its business operations and have taken positive steps to begin this effort. However, overhauling the financial management and related business operations of one of the largest and most complex organizations in the world represents a huge management challenge. Six DOD program areas are on GAO's "high risk" list, and the department shares responsibility for three other governmentwide high-risk areas. 
DOD's substantial financial and business management weaknesses adversely affect not only its ability to produce auditable financial information, but also to provide timely, reliable information for management and Congress to use in making informed decisions. Further, the lack of adequate transparency and appropriate accountability across all of DOD's major business areas results in billions of dollars in annual wasted resources in a time of increasing fiscal constraint. Four underlying causes impede reform: (1) lack of sustained leadership, (2) cultural resistance to change, (3) lack of meaningful metrics and ongoing monitoring, and (4) inadequate incentives and accountability mechanisms. To address these issues, GAO reiterates the keys to successful business transformation and offers two suggestions for legislative action. First, GAO suggests that a senior management position be established to spearhead DOD-wide business transformation efforts. Second, GAO proposes that the leaders of DOD's functional areas, referred to as departmentwide domains, receive and control the funding for system investments, as opposed to the military services. Domain leaders would be responsible for managing business system and process reform efforts within their business areas and would be accountable to the new senior management official for ensuring their efforts comply with DOD's business enterprise architecture.
Data-driven performance reviews are regularly scheduled, structured meetings used by organizational leaders and managers to review and analyze data on progress toward key performance goals and other management-improvement priorities. They are generally used to target areas where leaders want to achieve near-term performance improvements, or to accelerate progress through focused senior leadership attention. Over the past several years, Congress and the executive branch have taken steps to improve federal performance management by requiring that agencies conduct regular data-driven review meetings. In 2010, OMB released a memorandum establishing the expectation that federal agencies would hold data-driven reviews at least once every quarter to review progress on their priority goals and assure that follow-up steps would be taken to achieve improved outcomes. The memorandum specified that discussions during these meetings were to be guided by analyses of performance data, to focus on progress toward desired outcomes, to explore why variations between targets and actual outcomes occurred, and to prompt adjustments when needed. Congress, through the passage of GPRAMA, made the expectation that agencies would hold regular data-driven reviews a statutory requirement. Specifically, GPRAMA requires that, not less than quarterly, the head of each agency and COO, with the support of the PIO, should review progress on agency priority goals (see text box). 
GPRAMA Requirement for Quarterly Priority Progress Reviews GPRAMA requires that, not less than quarterly, at all agencies required to develop agency priority goals, the head of the agency and Chief Operating Officer, with the support of the agency Performance Improvement Officer, shall: For each agency priority goal, review with the appropriate goal leader the progress achieved during the most recent quarter, overall trends, and the likelihood of meeting the planned level of performance; Coordinate with relevant personnel within and outside the agency that contribute to the accomplishment of each agency priority goal; Assess whether relevant organizations, program activities, regulations, policies, and other activities are contributing as planned to the agency priority goals; Categorize agency priority goals by risk of not achieving the planned level of performance; and For agency priority goals at greatest risk of not meeting the planned level of performance, identify prospects and strategies for performance improvement, including any needed changes to agency program activities, regulations, policies, or other activities. While GPRAMA established requirements for agencies to conduct the reviews, it also required that OMB prepare guidance on implementing the act. In 2011, OMB released guidance for federal agencies that reinforced the requirements in GPRAMA, specified that the reviews should be held in person, and outlined the specific purposes of the data-driven review meetings, the roles and responsibilities of agency leaders involved in the review process, and how the reviews should be conducted. In 2012, OMB released updated guidance for data-driven reviews. In prior work (GAO-13-228), GAO identified leading practices for data-driven reviews, drawing on leaders at the federal level and elsewhere who shared their experiences and lessons learned. These practices, along with additional insights on why the application of these practices is important, are noted and summarized throughout this report. 
Nine Leading Practices Identified by GAO That Can Be Used to Promote Successful Data-Driven Performance Reviews Reviews are conducted frequently and regularly. Leaders use data-driven reviews as a leadership strategy to drive performance improvement. Key players attend reviews to facilitate problem solving. Rigorous preparations enable meaningful performance discussions. There is capacity to collect accurate, useful, and timely performance data. Staff have skills to analyze and clearly communicate complex data for decision making. Reviews ensure alignment between goals, program activities, and resources. Leaders hold managers accountable for diagnosing performance problems and identifying strategies for improvement. Participants engage in rigorous and sustained follow-up on issues identified during reviews. Taken together, the GPRAMA requirements, OMB guidance, and leading practices identify the elements necessary to carry out effective data-driven reviews: reviews that (1) engage agency leaders in the rigorous assessment of agency performance; (2) support faster and better informed responses to identified performance problems; (3) improve communication and collaboration across an agency; and (4) enhance individual and collective accountability for improving progress toward agency goals. GPRAMA Requirement: Quarterly Reviews GPRAMA requires that agency leaders conduct reviews on progress toward agency priority goals (APGs) not less than quarterly. OMB Guidance: Quarterly Reviews OMB guidance directs agency leaders to run data-driven performance reviews on each of their APGs at least quarterly. This guidance also stresses that reviews should be conducted in person, as significant experience in federal agencies, states, localities, and other countries demonstrates that in-person engagement of senior leaders greatly accelerates learning and performance improvement. 
Leading Practice for Data-Driven Reviews: Frequency and Regularity

Data-driven review meetings should be frequent and regularly scheduled.

Data-driven performance review meetings that are held frequently and regularly help foster a culture of active and ongoing performance management, problem solving, and continuous improvement. As OMB has noted, the purpose of conducting performance reviews at least quarterly is to ensure that agency leaders regularly review agency performance on top priorities, along with the short- and long-term actions agencies are taking to improve performance, and bring together the people, resources, and analysis needed to drive progress on priority goals. Of the 23 CFO Act agencies surveyed, 20 agencies reported that they hold data-driven review meetings at least quarterly, with some agencies holding them more frequently. See table 2 for a summary of the frequency with which each agency holds in-person review meetings to, among other things, review progress on APGs. As shown in the table, the Department of Homeland Security (DHS) does not hold the required in-person review meetings. The five case-study agencies we selected for more in-depth review – the Departments of Commerce (Commerce), Health and Human Services (HHS), and Transportation (DOT); General Services Administration (GSA); and Social Security Administration (SSA) – all hold in-person review meetings involving agency leaders and APG goal leaders at different frequencies. This reflects differences in leadership preferences and organizational structures and processes. See appendix II for more detailed information on the approach used by each of the five selected case-study agencies. OMB guidance is clear that reviews should be held in person to bring together senior leaders and officials involved in all levels of program delivery. This can help ensure coordination across agency silos and enable rapid decision making.
This guidance states that while written communication may replace in-person review meetings in rare circumstances, it should only be a stopgap measure to continue performance reviews in a process that otherwise operates primarily in person. A few agencies, including the Department of Agriculture (USDA), DHS, and HHS, reported that they do not hold in-person reviews of progress on APGs at least quarterly as called for in GPRAMA and OMB guidance. The Department of State (State) reported that agency officials participate in one data-driven review meeting each quarter; however, each meeting is used to review progress on only one APG.

Agriculture. According to USDA officials, the Deputy Secretary meets weekly with officials and staff from the Office of Budget and Program Analysis (OBPA), including the PIO, to discuss budget and regulatory issues, which also provides opportunities to discuss APG progress and performance-related issues. USDA officials also told us that written updates on APGs and performance data are provided quarterly, and that the Deputy Secretary and PIO meet as necessary to review progress toward, and discuss issues related to, specific APGs. However, these are not regularly scheduled meetings. In addition, they told us that staff from the OBPA have frequent conversations with program officials as part of the review of regulatory documents, funding availability notices, executive budget documents, and other related documents, so they have not had separate, regularly scheduled meetings to discuss progress toward the APGs. However, in subsequent discussions with USDA officials, they informed us that they intend to begin holding regularly scheduled quarterly meetings led by the COO and involving senior USDA leadership, as directed by OMB in Circular A-11 guidance.

Homeland Security.
DHS reported that in-person review meetings ceased due to competing priorities and demands at the end of 2013, when a change in leadership brought alternative management priorities. Agency leaders continue to review written performance updates quarterly. According to a DHS official, a meeting involving the Deputy Secretary and APG goal leaders to review goal results from fiscal year 2015 is being scheduled.

Health and Human Services. HHS leaders hold in-person review meetings for each APG twice a year and review APG progress two more times a year through reviews of written progress updates. Officials from HHS said that, due to the longer-term nature of the agency's APGs, performance data that are tracked for each goal show little meaningful change from quarter to quarter, so agency officials have not considered meeting quarterly to be an effective use of participants' time. One HHS official said that managers convene meetings with their program teams more frequently to track progress on efforts contributing to each goal.

State. Each year, State holds one joint, in-person meeting to review progress on the APG to support the implementation of low emission development strategies, which it co-leads with the U.S. Agency for International Development (USAID). State also holds one meeting each year to review progress on the APG to improve consular service delivery. State officials attend three other reviews on APGs led by USAID held throughout the year. Therefore, progress on each APG is only reviewed by officials from State in an in-person review meeting once a year. A State official stated that the performance measures used to track progress on the two APGs for which the agency either leads or co-leads show little meaningful change from quarter to quarter, and believes it would not be beneficial for stakeholders to attend meetings more often than annually.
The official also said, however, that the agency’s PIO reviews the data and written updates provided by APG goal leaders each quarter, and updates the Deputy Secretary. As GPRAMA requirements and OMB guidance are clear that in-person review meetings should be held at least once a quarter, and that progress on each APG should be reviewed each quarter in these meetings, the approaches of these four agencies – DHS, HHS, State, and USDA – are not consistent with requirements for the frequency and expected characteristics of reviews. Furthermore, OMB guidance states that these reviews should not be conducted through written documents, and that agency leaders should use performance review meetings as an opportunity to engage those involved in all levels of program delivery. The lack of frequent, regular, in-person review meetings could result in missed opportunities for leaders and key officials at these four agencies to have regular, in-depth discussions of performance on top agency priorities. Such meetings could also allow them to actively promote ongoing coordination and accountability, address identified challenges or problems in a timely manner, and encourage continuous improvement in agency performance and operations. As OMB guidance also clarifies, APGs are defined as a “near-term” result or achievement that agency leaders want to accomplish within approximately 24 months through focused leadership attention. While the guidance states that APGs can advance progress toward longer-term, outcome-focused strategic goals and objectives, the APGs are designed to be near-term improvements in outcomes, customer service, or efficiency. Even in those instances when new quantitative performance data are not available for review in meetings, more frequent in-person reviews still provide the opportunity to review goal leader progress in completing shorter-term milestones or initiatives contributing to progress on the goals, and promptly address any identified problems. 
In fact, GPRAMA and OMB guidance both state that agencies should have clearly defined, quarterly milestones to track progress on their APGs.

GPRAMA Requirement: Leadership

GPRAMA requires that the agency head and Chief Operating Officer (COO) conduct reviews with the support of the Performance Improvement Officer (PIO).

OMB Guidance: Leadership

OMB guidance directs that the agency head and/or COO must conduct reviews with the support of the PIO and the PIO's office.

Leading Practice for Data-Driven Reviews: Leadership

Agency leaders should be directly and visibly engaged in the reviews.

According to OMB, significant experience at federal agencies, states, localities, and other countries demonstrates that in-person engagement of senior leaders in review meetings greatly accelerates learning and performance improvement. The personal engagement of agency leaders in the review meetings also demonstrates their commitment to improvement across the agency and, as mentioned above, facilitates coordination across agency silos and rapid decision making. As OMB has also noted, frequent, data-driven reviews also send a signal throughout the organization that agency leaders are focused on effective and efficient implementation to improve the delivery of results. GPRAMA recognized that the direct involvement of leaders is a critical factor to drive performance improvement within an agency. Thus, it requires that agency heads and COOs conduct the reviews. OMB's guidance for agencies on how to conduct the reviews also emphasized the importance of leadership involvement in data-driven performance reviews, directing the agency head, COO, or both to conduct the review. As we have previously reported, the commitment of agency leaders to make decisions and manage programs on the basis of performance information, and inspire others to embrace such a model – which review meetings can be used to do – is critical to increase the use of performance information throughout an agency.
We found through our survey that 19 of 22 agencies that held review meetings reported that the meetings were led by their agency head or COO, or jointly by the COO and PIO. See table 3 below for specific agency responses. Furthermore, we found through our survey that review meetings are used as a tool to enhance the engagement of top agency leadership in an agency's performance management process. In fact, all 22 agencies reported that their reviews have had a positive effect on the engagement of top agency leadership in the agency's performance management process, with 13 reporting a large positive effect. Although their exact roles varied, officials from our five selected case-study agencies reported that agency heads or COOs were actively involved in their agency's review processes, and led the meetings through the following activities: focusing on agency priorities and directly communicating their expectations, asking questions, reinforcing individual and collective accountability, encouraging collaboration, offering assistance with problem solving or identifying available resources, and sharing perspectives from discussions with external stakeholders. For example, during the review meeting we observed at GSA, the Administrator and other leaders engaged in the discussion by challenging assumptions about the status of goal progress, asking questions about factors driving changes in specific performance measures, and encouraging goal leaders to identify areas of risk and plans for addressing challenges. Some case-study agency officials we interviewed stated that a key role of agency leaders in the meetings is to set a positive example by demonstrating their commitment to, and involvement in, agency performance management processes, and by communicating that participation in reviews is a priority.
For example, at SSA, the agency head personally presided over bi-monthly review meetings and created a new office (the Office of the Chief Strategic Officer) to support expanded performance management and data analysis efforts. According to SSA officials we spoke with, in one review meeting the agency head brought focused attention from across the agency on a priority goal that was showing insufficient progress. She convened a follow-up meeting requesting that offices throughout SSA articulate how they would contribute to progress on the goal. This proved to be a useful technique for establishing a broader sense of accountability for contributions to the goal and helped identify new strategies to improve progress. Fewer than half of the agencies (8 of 22) reported in our survey that getting or sustaining the participation of top agency leadership in the reviews was a challenge. However, we heard of instances in which logistical challenges can make it difficult for the agency head or COO to participate in each review meeting. For example, a DOT official reported that leadership participation at the review meetings, which are held separately with representatives of each of DOT's 10 operating administrations each quarter, was a moderate challenge because key leaders may not be able to attend due to last-minute scheduling changes or conflicts. The agency has decided that in these situations it will continue with scheduled meetings, with other leadership team members, such as the General Counsel of DOT, leading the meeting.
Officials at DOT said that ensuring the review meetings are held regularly is important because it avoids logistical challenges presented by rescheduling meetings, helps minimize preparation time by ensuring staff do not have to recreate meeting materials, encourages attendance, and leads to more productive discussions because staff are assured they will have regular opportunities to raise issues for high-level attention. USDA, the Department of Defense (DOD), and State reported in our survey that the agency head or COO does not lead meetings that are used to review progress on APGs, as specified by GPRAMA and OMB guidance.

Agriculture. In its survey response and additional follow-up communication, USDA reported that the meetings between the Deputy Secretary and PIO that are held to discuss APG progress are led by the PIO, who presents information to the Deputy Secretary in these meetings.

Defense. DOD reported that its APGs are reviewed in meetings of the Defense Business Council (DBC), which has responsibility for the development and review of DOD's performance goals. DBC meetings, however, are led by the Deputy Chief Management Officer, who is the PIO of DOD, rather than the Deputy Secretary of Defense, who is the COO. According to meeting attendance lists shared by DOD, neither the agency head nor Deputy Secretary/COO lead or regularly attend these reviews.

State. State reported that its PIO leads the review meeting for the one APG that State leads, and co-leads the review meeting with USAID for the one APG that is shared by the two agencies. A State official explained that the agency feels the PIO is appropriately suited for the role as leader of the review meetings as she also serves as the agency's senior budget official. This dual role, according to the official, allows her to integrate performance with knowledge of agency resources. Neither the agency head nor the Deputy Secretary/COO lead or regularly attend these reviews.
These practices, however, are not consistent with OMB guidance, which clearly states that agency heads or COOs should conduct in-person meetings used to review progress on APGs. Leading practices emphasize that having leaders actively engaged in the reviews helps ensure that participants take the reviews seriously. As OMB has similarly noted, the involvement of COOs is critical to bringing a broader set of actors together to solve problems across the organization. Therefore, because the agency head or COO does not lead review meetings at these three agencies, the review process may be viewed as less of a priority by agency officials. This could have a detrimental effect on participation in reviews. It could also reduce opportunities for top agency leaders to reinforce responsibility and accountability, and to personally communicate their priorities and perspective to agency managers and staff.

GPRAMA Requirement: Participation of Priority Goal Leaders and Other Relevant Personnel

GPRAMA requires that agency leaders include agency priority goal (APG) leaders in their reviews and coordinate with other relevant personnel within and outside the agency that contribute to the accomplishment of each APG.

OMB Guidance: Participation of Priority Goal Leaders and Other Relevant Personnel

OMB guidance reinforces this requirement by requiring that agency leaders include APG goal leaders, or their designees, in the reviews, along with, as appropriate, relevant personnel within and outside the agency who contribute to the accomplishment of each APG.

Leading Practices for Data-Driven Reviews: Participation by Key Personnel

Reviews should include personnel with programmatic knowledge and responsibility for the specific performance issues being discussed.
In addition, the participation of officials with functional management responsibilities, such as information technology, financial management, and human capital, can facilitate problem solving by providing managers from across the agency with a forum to communicate with each other. When officials from various offices and levels of management participate in review meetings, the meetings provide opportunities to have honest, informed discussions about performance with all key players present, and facilitate collaboration and group problem solving. Officials representing their program or area of responsibility may also feel increased accountability for results when forced to report on progress in front of leadership and peers. Survey responses show that participation of PIOs in review meetings is strong, with 20 of 22 agencies reporting that PIOs always attend review meetings, and 2 reporting that their PIOs often attend. See figure 1 for reported frequency of participation in review meetings by agency leadership and other key contributors. As the highest officials dedicated to managing agency-wide performance management efforts, PIOs hold a unique position within their agencies and are key participants in the review meetings. PIOs and agency performance staff also engage in a variety of activities that directly support successful review meetings. Through discussions with agency officials and survey results, we found that responsibilities of PIOs and performance staff may include overseeing preparations for review meetings, including the collection and analysis of data, creation of presentation materials, and convening preparatory meetings; co-leading review meetings; and managing follow-up on action items identified in review meetings. Participation by APG goal leaders in review meetings is also strong. Twenty-one out of 22 agencies reported that their goal leaders always or often participate in review meetings.
Through our discussions with goal leaders we learned that they also play a key role in the review meetings, and present information on progress toward goals, respond to questions from agency leaders, identify problems or challenges and propose strategies to address them, and request support or assistance. Most agencies also reported that other key officials with responsibility for agency financial management, human capital, information technology, and legal matters attend their review meetings. While there was variation across agencies on the frequency with which these officials participate in review meetings, 11 agencies reported that their Chief Financial Officers (CFO), Chief Human Capital Officers (CHCO), Chief Information Officers (CIO), Chief Acquisition Officers (CAO), and representatives from their Office of General Counsel (OGC) always or often attend the reviews. Officials from three of our five selected case-study agencies discussed the benefits of including chief officers in their reviews. These benefits include providing a cross-cutting agency perspective and specialized expertise to inform decisions, and offering assistance, resources, and problem solving support. For example, one DOT official described how the discussions in review meetings often focus on regulations that are under review by the Office of Information and Regulatory Affairs (OIRA) at OMB, which, in some instances, reviews regulations before they can be finalized. According to DOT officials, in the department’s review meetings, officials discuss the progress and plans of rulemakings with the Secretary’s office or OGC, such as facilitating early engagement with OIRA to address analytical issues. According to DOT officials, this is significant because the issuance of rules is an important tool that the department uses to promote progress toward its APGs. 
For instance, in its reporting on progress toward the APG to reduce the rate of roadway fatalities, DOT identified a number of proposed and final rules designed to reduce the risk of fatalities and serious injuries through enhancements to the safety of vehicles and roadways. USDA and State provided responses indicating that participation in their reviews is not fully consistent with requirements, guidance, and leading practices.

Agriculture. USDA responded to our survey that APG goal leaders participate in review meetings about half of the time. Through follow-up communication, USDA officials clarified that meetings between the Deputy Secretary and PIO in which APG progress is reviewed generally do not involve goal leaders. Officials also said, however, that when the Deputy Secretary had specific questions on APG progress, the Office of Budget and Program Analysis (OBPA) would schedule a follow-up meeting attended by the Deputy Secretary, PIO, goal leaders, and, occasionally, performance staff. USDA officials also reported that their CFO, CHCO, CIO, and CAO are rarely involved in the meetings, and that the General Counsel never attends. In subsequent follow-up communication, USDA officials stated that if additional information or action is needed from administrative, program, or policy officials, then the PIO and OBPA staff will act as a liaison, relaying questions and information between these officials and the Deputy Secretary. USDA officials said that given USDA's large size and the complex and diverse nature of its multiple missions, it is generally easier logistically to have the PIO meet with the Deputy Secretary, rather than trying to schedule a meeting involving additional senior officials.

State. State responded to our survey that the CFO, CHCO, CIO, CAO, and General Counsel never attend review meetings.
Upon subsequent follow-up, State officials could not provide an example of when these officials had been invited to or attended review meetings, but said that they plan to invite officials with functional management responsibilities as appropriate in the future. Not involving APG goal leaders in regular reviews of goal progress is inconsistent with GPRAMA requirements and OMB guidance; as a result, USDA may be missing opportunities for direct communication between agency leaders and relevant program staff about progress, challenges, and strategies for improvement. In addition, by not regularly including officials with functional management or legal expertise, as leading practices suggest, USDA and State may also miss opportunities to address performance issues in which human capital, information technology, acquisitions, or legal expertise could play a significant role in the development of solutions. As we reported in our earlier evaluation of agency performance review meetings, OMB guidance and leading practices indicate that including key players from other agencies can lead to more effective collaboration and goal achievement. Specifically, OMB guidance states that agencies should include, as appropriate, relevant personnel from outside the agency who contribute to the accomplishment of an APG or other priority. When key players are excluded from performance reviews, agencies may miss opportunities to have all the relevant parties apply their knowledge of the issues and participate in developing solutions to performance problems. Instead, agencies will need to rely on potentially duplicative parallel coordination mechanisms, which could result in less than optimal performance improvement strategies. Only two agencies, State and USAID, reported that they always or often include officials from outside the agency in their review meetings. These are also the two agencies that hold joint sessions to review progress on their shared APG.
Most agencies, however, reported that external contributors never participate in their reviews. In February 2013, we recommended that OMB work with the Performance Improvement Council (PIC) and other relevant groups to identify and share promising practices to help agencies extend their performance reviews to include, as relevant, representatives from outside organizations that contribute to achieving their agency performance goals. OMB generally concurred with the recommendation, and in July 2014, staff from OMB and the PIC told us that meetings of the PIC Internal Reviews Working Group have been used to discuss the inclusion of representatives from external organizations in agency reviews, as appropriate. In March 2015, OMB staff said that while they have found that at times it is useful to engage external stakeholders in improving program delivery, agencies view reviews as internal agency management meetings. Thus, they believe it would not always be appropriate to regularly include external representatives. According to PIC staff, the PIC continues to work with agencies to identify examples where agencies have included representatives from outside organizations in quarterly reviews, and to identify promising practices based on those experiences. As those promising practices are identified, PIC staff plan to disseminate them through the PIC Internal Reviews Working Group and other venues. We will continue to monitor these efforts and periodically report on their status.

GPRAMA Requirement: Review of Quarterly and Trend Data on Priority Goal Progress

GPRAMA requires that participants review, for each APG, progress achieved during the most recent quarter, overall trend data, and the likelihood of meeting the planned level of performance.
OMB Guidance: Review of Quarterly and Trend Data on Priority Goal Progress

OMB guidance reinforces this requirement by directing participants to review progress achieved during the most recent quarter, overall trend data, and the likelihood of meeting the planned level of performance. It also says that, in the reviews, agency leaders should hold goal leaders accountable for knowing the quality of their data, for having a plan to improve it if necessary, and for filling critical evidence or other information gaps.

Leading Practice for Data-Driven Reviews: Collecting and Analyzing Performance Data

Participants in a data-driven review meeting must have up-to-date, accurate data on performance to have a meaningful discussion about progress toward goals and milestones. The capacity to collect relevant and timely data and the ability to analyze it to identify key trends, areas of strong or weak performance, and possible causal factors are critical to successful reviews.

As we have previously reported, the capacity to collect and analyze accurate and useful data is critical to successful data-driven reviews. The collection and analysis of valid, up-to-date performance data in advance of data-driven review meetings is necessary to ensure that the most timely data and information are used to inform discussions in meetings, and that key trends or areas of strong or weak performance have been identified. The collection and analysis of up-to-date data for review meetings is also necessary because GPRAMA and OMB guidance require that reviews be used to review progress toward APGs. All 22 agencies reported that they always or often collect data on APG performance measures and milestones in advance of their review meetings. Furthermore, all 22 agencies reported that they always or often analyze these data to identify key performance trends or patterns and areas of strong or weak performance.
See figure 2 below for information on the frequency with which agencies reported that they take specific data collection and analysis actions prior to their review meetings. All five of our selected case-study agencies established processes for collecting and analyzing performance data in advance of their review meetings. At each of the agencies, officials told us that those managing the preparation for review meetings collect updated performance data from goal leaders and their staffs. Some agencies used a standardized template to collect and organize the performance data and other relevant information about progress toward goals and milestones, such as risks, challenges, and future actions. GSA also used an online spreadsheet that offices were required to regularly update with new information on progress toward specific agency goals or milestones. See figure 3 for a screenshot of this spreadsheet. Collecting accurate and timely data is critical for successful performance reviews, but our survey found that 19 of 22 agencies identified this as a challenge. This finding is consistent with our previous survey of agency PIOs, as well as past surveys conducted by the PIC, which found that the primary challenges agencies faced when implementing reviews included access to data and limitations in the capability of their data systems. It appears, however, that the attention and scrutiny data receive through the review process can help agencies identify and address problems or limitations. In fact, 20 of 22 agencies reported that their reviews have had a positive impact on the quality of the performance data used to track progress and inform decision making within their agencies. According to OMB staff, in February 2015, OMB and the PIC also formed a cross-agency working group on data quality composed of agency and OMB representatives.
The stated objectives of the working group, which will meet through August 2015, are to identify guidelines and practices that would improve the reliability and quality of performance data and the reporting process, and establish standards and consistency across the federal government. We are assessing the quality of publicly reported information on APGs in selected agencies and plan to discuss this cross-agency working group in more detail in an upcoming report in the summer of 2015. Officials from some of our five selected case-study agencies described the actions they have taken to address challenges presented by lagging or limited performance data. For example, to track progress on the reduction of improper payments, the goal leader for SSA's improper payments APG previously received relevant data annually. The PIO initiated a conversation on increasing the frequency with which payment accuracy data are received, and the goal leader worked with the SSA Office of Quality Review, which collects the data, to increase the frequency to every 6 months. An SSA official said that having more current data has given the agency a better indication, at an earlier stage, of its progress in a given year and of any impacts its actions may be having. At HHS, a key indicator tracked for the HHS early childhood education APG is the number of states with Quality Rating and Improvement Systems that meet seven benchmarks. The data for this indicator, however, are only available annually. Because more frequently updated data are not available, HHS officials asked the goal leader to disaggregate the data by geographic region to allow for a more granular examination of conditions and trends across regions. The Office of Child Care analyzed state progress toward implementing Quality Rating and Improvement Systems that met the seven benchmarks.
As part of the analysis, the Office of Child Care identified the most common gaps in the states, and created a map that provided a visual representation of state progress toward the goal. Officials from two of our five selected case-study agencies also reported specific challenges related to their capacity to perform data analysis to inform performance management. As we have previously reported, this helps ensure performance information is analyzed and communicated effectively, and used in a meaningful way. For example, an SSA official said that the agency had insufficient analytical capacity to perform deep, detailed analysis of data on the use of SSA services and the relationships between the use of these services and other factors. To address this limitation, SSA created an Office of Performance Management and Business Analytics to collaborate with other SSA offices to gather and analyze agency data, and to perform complex data analyses. SSA also facilitates initiatives like an internal training program where senior data analysts train other staff. The agency is also seeking to hire an advanced data scientist. In April 2013, we reported on the importance of ensuring that agency performance management staff have sufficient capacity to support performance management in federal agencies, and recommended that the Director of the Office of Personnel Management (OPM), in coordination with the PIC and the Chief Learning Officer Council, work with agencies to identify competency areas needing improvement within agencies, and identify training that focuses on needed performance management competencies. OPM and OMB staff agreed with this recommendation. In July 2014, OPM told us that it had coordinated with the PIC on this recommendation, and that the PIC would take responsibility for the remaining actions needed to implement this recommendation.
In March 2015, OMB and PIC staff said that the PIC has created a number of training programs designed to provide agency officials with information about performance management, and approaches for using performance management to improve agency performance. The PIC has also created a public website, LearnPerformance.gov, with informational resources on a range of topics, including measurement, data and analysis, and reporting and communicating performance information. We will continue to monitor these efforts as training and other knowledge sharing efforts are implemented and expanded.

GPRAMA Requirement: Identifying “At Risk” Goals
GPRAMA requires that agencies categorize agency priority goals (APGs) by risk of not achieving the planned level of performance.

OMB Guidance: Identifying “At Risk” Goals
OMB guidance also directs agencies to identify APGs (or other priorities) at risk of not achieving the planned level of performance and work with goal leaders to identify strategies that support performance improvement. It also directs them to review variations in performance trends across the organization and delivery partners, identify possible reasons for the variance, and understand whether the variance points to promising practices or problems needing greater attention.

Leading Practice for Data-Driven Reviews: Rigorous Preparation
Rigorous preparation is critical for effective performance reviews, as key participants must be prepared to discuss issues related to their performance and progress toward goals. After data have been collected and analyzed, they must be effectively communicated to participants. Following the completion of data collection and analysis, offices responsible for supporting reviews will often compile summary materials to help leaders and participants prepare for the reviews.
These efforts to ensure that participants are aware of the status of goals and milestones, and key questions likely to be raised and discussed in the meetings, can also be critical to the success of reviews. As we have also previously reported, frequent and regular communication of performance information is also critical to remind agency officials of their commitment to achieve the agency’s goals, and to keep those goals in mind as they pursue their day-to-day activities. It also helps ensure that leaders and managers have opportunities to review information in time to take action to make improvements. All 22 agencies reported that they always or often develop presentation slides or other meeting materials to communicate key data and analyses to participants. Furthermore, all 22 agencies also reported that they always or often distribute these materials to participants for review before the meetings. See figure 4 below for information on how frequently agencies report they take specific actions to prepare for review meetings. All five of our selected case-study agencies developed presentation slides, or other meeting materials, and distributed them to participants in advance of their review meetings. In addition to presenting information on progress toward agency goals and milestones, meeting materials may also include discussions of key strategies and initiatives being employed to influence progress, and any risks, challenges, or opportunities those managing the goals are facing. In accordance with the GPRAMA and OMB requirement that agencies categorize APGs by risk of not achieving the planned level of performance, materials produced for meetings at all five of our selected agencies included information or color-coded graphics to indicate the likelihood a goal will be achieved and whether a goal is “off track” or “at risk.” For two examples of materials prepared for review meetings at SSA and HHS, see interactive figures 5 and 6. 
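As a minimal illustration of this kind of risk categorization, the sketch below maps a goal's progress against its planned level of performance to a color-coded status of the sort used in meeting materials. The goal names, figures, and thresholds are hypothetical assumptions, not drawn from any agency's actual materials.

```python
# Hypothetical sketch of color-coding priority goals by risk of not
# achieving the planned level of performance. The thresholds and the
# goal data below are illustrative assumptions, not from the report.

def categorize(progress, planned):
    """Return a color-coded status for progress toward a planned level."""
    ratio = progress / planned
    if ratio >= 0.95:
        return "green (on track)"
    if ratio >= 0.80:
        return "yellow (at risk)"
    return "red (off track)"

# Hypothetical goals: (progress to date, planned level of performance)
goals = {"Goal A": (99, 100), "Goal B": (92, 100), "Goal C": (70, 100)}
for name, (progress, planned) in goals.items():
    print(f"{name}: {categorize(progress, planned)}")
```

In practice the cutoffs would be set by each agency's own guidance; the point is only that a simple, consistent rule lets meeting materials flag at-risk goals at a glance.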
Fourteen of 22 agencies reported that they always or often held a preparatory session to review the agenda, data, and key discussion points with participants before their review meetings. Officials from our five selected case-study agencies described preparatory meetings that officials from their agencies hold in advance of review meetings. Officials from some of the five agencies also described how these preparatory sessions can be valuable, as they allow agency leaders and goal leaders to familiarize themselves with the data and discuss responses to potential questions with knowledgeable staff. General Services Administration. Two of the agency’s bureaus, the Public Buildings Service and the Federal Acquisition Service, hold regular meetings where managers from each service review and discuss performance data presented later at the agency-level performance review meetings. Officials see these meetings as not only preparation for the agency-level review meetings, but as vital to effectively managing the business of the services and making progress toward identified goals. Social Security Administration. To prepare for the quarterly review meetings with the Acting Commissioner of SSA, the Chief Strategic Officer/PIO meets with SSA’s deputy commissioners, goal leaders, and appropriate staff to discuss progress toward APGs, the status of efforts being employed to achieve them, and the order in which goals should be discussed in the quarterly review. This preparatory meeting is held 10 days before the quarterly review meeting is scheduled to be held. Five days before the quarterly review meeting, the Chief Strategic Officer/PIO meets with the Acting Commissioner to prepare for the quarterly meeting. At this meeting, they discuss goal progress and trends, issues to be discussed during the review meeting, and potential questions the Acting Commissioner could ask.
Materials prepared by the APG goal teams for the quarterly review are sent to the Acting Commissioner 48 hours in advance of this preparatory meeting. Transportation. An official from the Federal Railroad Administration (FRA) explained that, prior to FRA’s review meetings with the Deputy Secretary of Transportation, staff hold briefings for the FRA Administrator and each Associate Administrator. This official said that these preparatory meetings for FRA leadership are valuable because they allow FRA leadership to ask questions of knowledgeable staff to better understand the data and information they will ultimately present at the review meeting with DOT leadership.

GPRAMA Requirement: Review of Priority Goal Progress and Identification of Improvements
GPRAMA requires that agency leaders review progress on agency priority goals (APGs); assess whether relevant organizations, programs, regulations, and policies are contributing as planned; and identify strategies for performance improvement for those goals at greatest risk of not achieving their planned levels of performance.

OMB Guidance: Review of Priority Goal Progress and Identification of Improvements
OMB guidance reinforces this requirement by directing agency leaders to use in-person review meetings to review progress on APGs; hold goal leaders accountable for knowing whether their performance indicators are trending in the right direction and, if not, having a plan to accelerate progress on the goal; identify APGs or other priorities at risk of not achieving planned levels of performance; and work with goal leaders to identify strategies that support improvement.

Leading Practice for Data-Driven Reviews: Accountability
Agency leaders should use review meetings to hold goal leaders and other responsible managers accountable for knowing the progress being made in achieving goals and, if progress is insufficient, understanding why and having a plan for improvement.
As mentioned throughout this report, a fundamental purpose of data-driven review meetings is to provide a mechanism for agency leaders to assess an agency’s progress on key goals and milestones; analyze and discuss data to identify goals at risk, performance problems, and improvement opportunities; and ensure that goal contributors are held accountable for their performance. Through our survey, we found that most agencies reported they always or often use their review meetings to assess progress and contributions, and identify goals at risk. However, as shown in figure 7 below, there is some variation reported across the 22 agencies on the frequency of specific types of actions taken during review meetings. Assessing progress on APGs. Reviewing progress on APGs on a regular and ongoing basis is a key requirement of GPRAMA, and helps ensure that agency leaders, goal leaders, and other contributors have frequent opportunities to review recent progress and trends. Twenty of 22 agencies reported that their data-driven review meetings are always or often used to review progress on APGs, including recent progress, overall trends, and the status of related milestones. Analyzing data to identify goals at risk and hold goal leaders accountable. GPRAMA also requires that agencies identify and categorize goals at risk of not achieving the planned level of performance. Twenty-one of 22 agency PIOs reported that their review meetings are always or often used to identify goals at risk, and to hold goal leaders accountable for explaining why the goal is at risk, as well as strategies for performance improvement. Discussing contributions of program activities, policies, and regulations and whether they should be changed to improve their impact on priority goals.
Assessing the contributions that organizations, program activities, policies, and regulations are making toward the achievement of goals is another critical part of efforts to use review meetings to ensure accountability for the completion of commitments, and to identify potential problems, effective practices, or strategies for improvement. GPRAMA requires that agencies include these assessments as part of their reviews. Twenty of 22 agencies reported always or often discussing whether specific organizations or program activities were contributing as planned to priority goals, and 18 of 22 reported always or often discussing the contributions of relevant policies toward priority goals. In this way, data-driven review meetings can be used to reinforce the alignment of higher-level agency goals with the milestones and day-to-day activities of program officials contributing to each goal. However, only 13 of 22 agencies reported always or often discussing whether program activities, policies, and regulations should be changed to improve their alignment with priority goals. SSA officials described how they have used their review meetings to discuss the contributions of programs, policies, and regulations, and necessary changes. For example, SSA has an APG to expand the use of video technology to hold benefit determination hearings. According to SSA officials, initial discussions on this goal in quarterly review meetings identified the challenge that those scheduled for video hearings were opting out at the last minute, which led to unpredictable schedules and down time for Administrative Law Judges. To address this issue and help achieve the broader goal to expand the use of video hearings, the agency determined that a regulatory change was needed to require claimants to decline a video hearing within 30 days after the date the claimant receives notice that the agency may schedule the claimant to appear at a hearing by video teleconferencing. 
This change was designed to decrease last-minute hearing cancellations and help them more efficiently schedule video hearings. The milestones that were developed to track progress on the development and implementation of the regulatory change were regularly discussed in the quarterly meetings, which is one of the factors that led to the agency working with OMB to expedite the release of the regulation. SSA believes that this June 2014 regulatory change will have long-term benefits. However, SSA has acknowledged that in the short term they may receive more opt-outs due to the 30-day notice requirement. For this reason, they are tracking the opt-out rate for video hearings to measure the impact of the new regulation, and are reviewing these data in their quarterly meetings. In the quarterly review meeting we observed after the regulation had been implemented, participants discussed several potential consequences of the regulation, including the potential for an increase in the opt-out rate, and possible strategies for addressing them, such as additional regulatory or process changes. Some agencies reported through our survey that they take certain actions during review meetings about half of the time or less frequently. For example, the Department of Labor (Labor) reported that it reviews APG progress about half the time in review meetings. However, a Labor official explained through follow-up communication that it responded this way because each quarter it holds performance review meetings for each of its 16 components and not all components have responsibility for one or more of the APGs. Therefore, APG progress is discussed only during review meetings for components that contribute to APGs. The agency specified, however, that each quarter it reviews progress on each of its APGs. 
Three agencies—the Departments of Energy (Energy) and Health and Human Services, and the National Science Foundation (NSF)—reported that they rarely discuss whether program activities, policies, and regulations should be changed to improve their alignment with priority goals in review meetings. Two other agencies—the Department of Defense (DOD) and the National Aeronautics and Space Administration (NASA)—reported that they never hold these discussions. Through follow-up, officials from Energy, HHS, NASA, and NSF clarified their responses to this question and explained how their review meetings were used to identify and discuss weaknesses or risks that could impact the achievement of their goals, and discuss suggestions for improvement. Energy. While Energy’s quarterly review meetings were used to discuss APGs identified as “off track,” and review future plans and milestones for these goals, an agency official said that other goal- specific meetings were held to drive action on at-risk goals. The quarterly review meetings also served as an opportunity for senior leaders not involved in the other meetings to discuss goal progress and actions being taken to improve efforts in those areas that are off track. Health and Human Services. An HHS official stated that the agency’s response was due mainly to the fact that review meetings are generally not used to discuss regulatory changes, with some exceptions, such as reviews held for the health information technology APG. Instead, the official said that discussions in HHS review meetings are focused primarily on improving progress on APGs through better implementation and execution of program activities and other management initiatives. HHS leaders that attend the review meetings, however, may use the information gained to inform decisions on longer-term policy or regulatory changes. NASA. 
A NASA official stated that the agency’s APGs are closely aligned with specific agency programs and projects, and that monthly data-driven performance review meetings are used to discuss potential cost, schedule, technical, and programmatic risks to meeting their milestones, as well as strategies for improving performance. NASA does not, however, discuss realigning or changing programs or policies to meet those milestones. For example, in their quarterly reviews of the James Webb Space Telescope program, participants have had discussions of actions the program will undertake to meet its milestones, but not of changing the program, or of reassigning this work to a different agency program, as the program is the only one with the capability to implement the work. National Science Foundation. NSF officials stated that they have no relevant regulations to discuss in the agency’s review meetings, but NSF officials do discuss potential changes to program activities and policies at review meetings as the need for program or policy changes become apparent, which is generally about half of the time. For example, NSF has an APG to improve the nation’s capacity in data science by focusing NSF investments in human capital, partnerships, and infrastructure that support data science. Initial plans for this APG were set at the time the goal was established and expressed as a series of quarterly milestones. The timing of achievement of these milestones is occasionally altered, and NSF review meetings have been used to discuss these changes. In one such recent change, NSF officials originally planned to support Big Data Regional Innovation Hubs in fiscal year 2014, but decided to gather more community input to increase the specificity and quality of its proposals. 
A Request for Comments was published in the Federal Register, with a public comment period ending November 1, 2014. According to an NSF official, the submissions to the request were used to refine the solicitation for Big Data Regional Innovation Hubs that was subsequently released in the second quarter of fiscal year 2015. These timing changes were presented at each review meeting and discussed as necessary. Big Data Regional Innovation Hubs are designed to be consortiums of members from academia, industry, and government that would foster collaboration amongst partners, and focus on key Big Data challenges and opportunities in their regions of service.

DOD reported through our survey that participants review progress on APGs only about half the time, rarely identify goals at risk, and never discuss whether program activities, policies, or regulations should be changed to improve their alignment with priority goals. This is consistent with our own review of documentation from DBC meetings, which indicated that a review of APGs was not always included on the agenda. In those instances when APG progress was reviewed, the information on APG progress included in meeting materials was limited. For instance, materials prepared for some meetings had only one slide with an aggregate count of how many APGs were on or off track, and limited information on the status of individual APGs. Without information on the status of individual APGs, DOD’s review meetings are unlikely to foster meaningful discussions about progress and trends. Further, if these review meetings are not regularly used to assess progress on individual APGs, and to identify at-risk goals and potential improvements, it could mean missed opportunities for DOD to address performance problems or accelerate progress.
DOD officials informed us, however, that over the next year, they plan to revise their review process to ensure they conduct regular, quarterly reviews of APG progress that involve discussions on progress achieved in the most recent quarter, performance trends, and status of related milestones; discussions of potential organization, program activity, policy, or regulatory changes to improve alignment with, and impact on, priority goals; and the identification of at-risk goals.

OMB Guidance: Follow-Up
OMB guidance directs agency leaders to agree on follow-up actions at each review meeting and track timely follow-through.

Leading Practice for Data-Driven Reviews: Follow-Up
Rigorous and sustained follow-up on issues identified during meetings, including the identification of the individual or office responsible for each follow-up action, is critical to ensure the success of reviews as a performance improvement tool.

Identifying and agreeing upon actions that need to be taken following a review meeting, and rigorously tracking the status of these actions to completion, is a key element of OMB guidance as well as a leading practice. Rigorous follow-up is also critical to the overall success of reviews as a tool for addressing identified deficiencies and improving performance. According to our survey results, most agencies reported that they are generally taking steps to identify and follow up on action items identified in review meetings. However, this is an area where our survey indicated there is less consistency in how frequently agencies are employing specific practices. Figure 8 shows the frequency with which agencies reported conducting specific follow-up actions. The variation in how systematically agencies identify and follow up on action items from review meetings is also illustrated by the different approaches that our five selected case-study agencies reported using to identify and follow up on action items, which are described in table 4.
The analysis of responses to our survey indicated that there is a statistically significant, positive correlation between the frequency with which an agency identifies and agrees on specific follow-up actions and the perceived impact of review meetings on performance improvement. Specifically, as shown in figure 9, all 13 agencies that reported that their review meetings have had a major impact on performance improvement also always or often identify and agree on follow-up actions during review meetings. Agencies that reported their review meetings have had a minor impact on performance improvement reported identifying and agreeing on follow-up actions during review meetings less frequently. Similarly, our analysis found that a statistically significant, positive correlation exists between the frequency with which an agency uses its review meetings to review the status of follow-up actions from the previous meeting, and the perceived impact those reviews have on performance improvement. These findings are consistent with surveys of agency PIOs administered in the past by the PIC. These surveys found that agencies where reviews have had a major impact on agency performance are more likely to document specific action items with clear owners and due dates, and review follow-up actions from previous meetings. While OMB guidance and leading practices are clear that participants in each review meeting should agree on follow-up actions and track follow-through, four agencies – DOD, Energy, NSF, and the Small Business Administration (SBA) – reported through our survey that they identify and agree on specific follow-up actions about half the time or less frequently. Through follow-up with Energy, SBA, and NSF, officials from those agencies further explained how the actions they are taking, or have taken, to identify, document, and track follow-up items are consistent with OMB guidance. Energy.
Energy reported through our survey that participants identify and agree on specific follow-up actions in quarterly review meetings about half of the time. According to an Energy official, however, in instances where follow-up actions are identified, those items are documented in a “Summary of Actions.” In addition to using quarterly review meetings to identify follow-up actions, the official stated that other topic-specific meetings are used to identify and address follow-up items for specific APGs. For example, the Summary of Actions from Energy’s August 2014 quarterly review meeting indicated that the Deputy Secretary would hold a meeting with a specific agency official to review the off-track status of a priority goal in more detail. Small Business Administration. SBA reported through our survey that participants would rarely identify and agree on specific follow-up actions to be taken after meetings. During the course of our review, however, SBA officials instituted changes to the agency’s review processes as a result of new leadership, and have given the SBA Office of Performance Management responsibility for ensuring that all action items from their review meetings, as well as “key takeaways” for discussion at the next review, are recorded. National Science Foundation. NSF reported through our survey that participants rarely identify and agree on specific follow-up actions. However, NSF officials stated that they chose this response because their goals are based primarily on the achievement of milestones and goal teams have already outlined the specific actions they will be taking in goal documentation. The status of actions to complete each of these milestones is then reviewed in each review meeting. 
NSF officials also said that in the event a follow-up action or course correction is identified in a quarterly meeting, the status of these actions will be discussed in bi-weekly meetings between the PIO and COO, who determine whether the actions have been adequately addressed or whether additional steps are required. In contrast, DOD reported through our survey that participants in review meetings rarely identify and agree on specific follow-up actions. After subsequent follow-up with the agency, we found that DOD practices are not consistent with OMB guidance or leading practices. Through our review of documents from DOD review meetings, we also found there was no information included in materials prepared before, or after, these meetings to indicate that they are used to identify follow-up actions related to APGs. In our follow-up communication with them, DOD officials acknowledged the need to regularly identify follow-up actions, and informed us that over the next year they plan to integrate the identification of specific follow-up actions into their reviews. Clearly identifying and documenting follow-up items, identifying the individual or office responsible, and monitoring their status are important to ensure that agreed upon actions are taken after DOD’s review meetings. This is supported by the results of our analysis, which showed that systematically identifying and following up on action items is associated with review meetings having a greater impact as a performance improvement tool. Furthermore, a failure to clearly identify and document follow-up actions may lead to a situation at DOD in which there is no commonly-held list of specific actions that will be taken after review meetings, and a limited ability to hold accountable those responsible for the completion of action items. 
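The positive correlation described earlier, between how often an agency identifies and agrees on follow-up actions and the perceived impact of its reviews on performance improvement, can be sketched with a simple Pearson calculation. The coded survey responses below are invented for illustration; this is not GAO's actual data or methodology.

```python
# Illustrative sketch, not GAO's actual methodology or data: computing a
# correlation between how often an agency identifies follow-up actions in
# review meetings (coded 1 = never ... 5 = always) and the reported impact
# of those meetings (coded 1 = no impact ... 3 = major impact).
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

# Hypothetical codings for ten agencies
follow_up_frequency = [5, 5, 4, 4, 3, 3, 2, 5, 4, 2]
reported_impact     = [3, 3, 3, 2, 2, 2, 1, 3, 2, 1]

r = pearson(follow_up_frequency, reported_impact)
print(f"r = {r:.2f}")  # a positive r is consistent with the survey finding
```

With real survey data, a rank-based measure such as Spearman's correlation may be more appropriate for ordinal codings like these, and a significance test would be needed before calling the association statistically significant.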
The results of our survey on agency data-driven review practices indicate that review meetings have had positive effects on progress toward agency goals, collaboration between agency officials, the ability to hold agency officials accountable for progress toward goals, and the ability to identify opportunities to improve agency operations. COOs, PIOs, APG goal leaders, and staff that we spoke with at the five selected agencies reinforced these findings, and also shared examples that illustrate the positive effects their data-driven review meetings are having in these areas. Nearly all agencies reported that their data-driven review meetings have had a positive effect on progress toward the achievement of agency goals, and on their ability to identify and mitigate risks to goal achievement. As illustrated in figure 10, all 22 agencies reported that their reviews have had a positive effect on progress toward their APGs, and 21 of 22 reported that their reviews have had a positive effect on their agency’s ability to identify and mitigate risks to achieving priority goals. In our discussions with officials from selected agencies, data-driven review meetings were described as venues for agency leaders and managers to assess progress toward key goals and milestones, the status of ongoing initiatives and planned actions, potential solutions for problems or challenges hindering progress, and additional support or resources needed to improve performance. Agency officials emphasized that discussions in their review meetings tend to focus on those goals or issues most in need of attention, where the achievement of a goal or milestone is at risk. In this way, reviews can serve as early warning systems and facilitate focused discussions on external, technical, or operational obstacles that may be hindering progress, and the specific actions that should be taken to overcome them. 
For example, SSA has an APG to increase the number of registrations for its my Social Security portal by 15 percent per year in fiscal years 2014 and 2015. In 2014, however, through the review of data for SSA’s third quarter review meeting, it became apparent to SSA leadership that the agency was not on track to achieve its target for this goal. According to officials, as part of the quarterly review process agency officials completed a more thorough examination of reasons for this and found that the agency would not be able to complete the development of additional features, such as the ability to request a replacement Social Security card, which were expected to drive higher volumes of traffic to the portal. Understanding these limitations, SSA’s focus shifted to what could be done by offices throughout the agency, using currently available or attainable resources and technology, to support efforts to increase the number of registrations. To achieve this, SSA leadership had different offices within the agency, including Communications, Policy, and Budget, specify the contributions they would make to help increase the number of registrations. For example, the Office of Communications developed a document outlining 26 activities the office was taking, or planned to take, to promote my Social Security to potential users. Since then, the agency’s quarterly review meetings have been used to review and reinforce the commitments each office made. In the quarterly review meeting that we observed, a representative of SSA’s Communications office emphasized that supporting efforts to increase my Social Security registrations is the office’s top priority, and discussed an ongoing national marketing campaign, and marketing activities targeted to advocates in the aging and disability communities and third party tax preparers. 
While SSA was unable to meet the registration goal for fiscal year 2014, according to SSA officials, these efforts recently undertaken as a result of the review process have helped generate an increase in registrations. Data from SSA’s fiscal year 2015 first quarter review show that there was a 46 percent increase in new account registrations in October 2014 compared to the number of new registrations in October 2013, and a 26 percent increase in December 2014 relative to December 2013. Many agencies reported that they are also using their review meetings to review progress on a broader suite of performance goals that go beyond the requirement to review APGs. Nineteen of 22 agencies reported that they always or often discuss progress on agency-wide goals or initiatives beyond the APGs in their review meetings, while 20 of 22 agencies reported that reviews have had a positive effect on their progress toward the achievement of other performance goals. For example, according to a GSA official, a long-standing challenge of the Public Buildings Service (PBS) has been finalizing occupancy agreements in a timely fashion. In fiscal year 2014, agency leadership made improving performance in this area a specific goal for PBS, which was then often discussed during GSA’s review meetings. According to GSA officials, due to the increased attention on the status of goal progress and leadership commitment to improving performance, the agency has surpassed its goals in this area. According to GSA’s performance report, in fiscal year 2014, the agency improved the on-time activation of occupancy agreements in owned space to 98 percent and leased space to 90 percent, exceeding the targets of 90 percent in owned space and 82 percent in leased space. This is also an improvement from the on-time activation rates of 86 percent for owned space and 75 percent for leased space in fiscal year 2013.
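The year-over-year comparisons and target checks behind figures like these reduce to simple arithmetic. In the sketch below, the registration counts are invented; only the percentages quoted in the surrounding text come from the report.

```python
# Minimal sketch of the arithmetic behind the performance figures above.
# The registration counts are hypothetical; only the percentages quoted
# in the text (46 percent, 98 vs. 90 percent) come from the report.

def pct_change(current, prior):
    """Year-over-year percentage change."""
    return (current - prior) / prior * 100

# Hypothetical counts that would yield SSA's reported 46 percent increase
oct_2013, oct_2014 = 100_000, 146_000
print(f"{pct_change(oct_2014, oct_2013):.0f}% increase in registrations")

# Comparing an actual rate to its target, as in GSA's occupancy-agreement
# activation results (98 percent actual against a 90 percent target)
actual, target = 98, 90
print("target exceeded" if actual >= target else "target missed")
```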
Twenty-one of 22 agencies reported that their data-driven reviews have had a positive effect on collaboration between officials from different offices or programs within the agency. Similarly, agency officials with whom we spoke emphasized that review meetings provide opportunities to bring together the people, analytical insights, and resources from across an agency that are needed to improve progress on agency priorities and to address any identified performance problems or challenges. As we heard from agency officials, and summarized in figure 11, bringing leaders and officials from across an agency together regularly to focus on shared goals and milestones can establish a shared sense of purpose, encourage ongoing collaboration, and reduce organizational silos. The review meetings also serve as action-forcing events that provide an opportunity for officials from across an agency to develop and implement collaborative solutions to identified problems. These insights into the positive effects review meetings can have on collaboration within agencies also reinforce their potential value as a tool for promoting increased collaboration across agencies. As noted above, this should encourage those who lead and manage agency reviews to follow OMB guidance on the issue and be mindful of opportunities to leverage reviews to involve relevant stakeholders from external agencies or organizations. Department of Health and Human Services (HHS) officials reported that promoting collaboration between different offices that contribute to individual APGs has been one of the most important effects of their reviews. They also emphasized that reviews have been used to bring APG contributors from across HHS together to discuss what can collectively be done to support progress on the goals. 
For example, HHS has an APG to increase the number of eligible providers who receive incentive payments for the successful adoption or demonstration of meaningful use of certified electronic health record (EHR) technology. According to HHS officials, the goal requires a great deal of coordination between the Office of the National Coordinator for Health IT (ONC) and the Centers for Medicare & Medicaid Services (CMS). An HHS official involved in these efforts explained that the two offices realize that, given their shared ownership of the goal, they are expected to coordinate effectively, and the reviews are used to reinforce this expectation and ensure that ongoing coordination is occurring. One way this has manifested itself is in the improved data sharing arrangement that now exists between ONC and CMS. Under this arrangement, CMS collects data related to the Medicare and Medicaid EHR Incentive Programs, which provide financial incentives for the “meaningful use” of certified EHR technology by health care providers. According to ONC officials, the review process has helped encourage more regular data sharing between CMS and ONC, with ONC receiving monthly updates on EHR Incentive Program registration, attestation, and payment data from CMS. The team supporting the priority goal is now using these data to conduct ongoing evaluations of the characteristics of providers at different stages in the program. According to an ONC official, data on program participation are also shared by ONC and CMS during monthly presentations to the Federal Advisory Committees on Health IT, which illustrates how they are also using these data to facilitate partnerships and public-facing discussions with other stakeholders. 
Similarly, officials from GSA stated that they believe that the most significant effect of the agency’s review meetings has been to enable collaborative problem solving, where ideas for potential solutions can be freely shared, and officials can request assistance from their colleagues in offices throughout GSA. For example, as part of a 2013 reorganization, the Office of Administrative Services (OAS) was given responsibility for the management of GSA-occupied real estate. According to GSA officials, through review meetings agency leaders identified that regional offices were estimating project costs and footprints using different assumptions. OAS was able to partner with PBS, which had experience working with other agencies to estimate project costs and footprints, to leverage their experience in the consideration of internal agency real estate needs. This led to the creation of a new streamlined process for project plans, with the head of OAS reviewing all plans to ensure they conform to consistent standards. Agency leaders said this coordination likely would not have happened if the two offices had remained siloed in their approaches, as they were before they began coordinating in GSA’s regular review meetings.

Having regular review meetings that require goal leaders and other contributors to report out on their progress, and respond to direct questions about the actions that they are taking, provides leaders with important opportunities to clarify and reinforce responsibilities, motivate actions necessary to complete milestones or improve performance, and hold goal leaders and managers accountable. The involvement of agency leaders in review meetings also helps ensure that the meetings are taken seriously and viewed as a priority.
Twenty-one of 22 agencies reported that their data-driven reviews have had a positive effect on their agency’s ability to hold goal leaders and other officials accountable for progress towards goals and milestones. According to officials from selected agencies, the transparency of performance information, and a review process that ensures it receives appropriate scrutiny, produces an increased sense of accountability for results. Several agency officials emphasized that it is important for those leading the meetings to establish a constructive, solution-oriented environment in which officials can be open and honest about their progress, and any problems and challenges they are facing. This outlook is also reinforced by OMB guidance, which directs agency leaders to establish an environment that promotes learning and openly sharing successes and challenges. At the same time, while agency leaders must maintain an environment in which participants feel comfortable raising problems and challenges, officials we spoke to also emphasized that leaders must use the meetings to hold goal leaders and managers accountable for having well-thought-out strategies for how to overcome them or mitigate their impact. Leaders from Commerce said that they are using their regular review meetings with bureau heads and goal leaders to support a cultural change throughout the agency and to reinforce accountability for performance at multiple levels of the organization. This change is one that emphasizes regular and ongoing reviews of performance, accountability for the completion of action items, more frequent and regular follow-up through review meetings, and an increased urgency and pace of implementation.
For example, in 2014, Commerce established an APG designed to increase the percentage of companies assisted by the Global Markets (GM) program that achieve their export objectives. According to Commerce officials, the decision to focus on this new measure, which places the focus on the satisfaction of clients, came out of input from GM clients and staff, as well as discussions in departmental review meetings about the need to clarify the objectives of the program and improve the quality of assistance that trade and commercial specialists provide. The key measure used to track progress on the priority goal was designed to help drive changes in internal processes and behaviors by focusing more clearly on activities and outcomes GM staff are able to directly influence. Now, at the beginning of their interaction with a new client, GM staff ask client businesses to define their needs and what they want to achieve with the support of the program. This shift toward a more consultative approach was designed to encourage staff to work with clients to establish goals and expectations, design solutions responsive to those goals, and identify potential challenges. Commerce has also instituted new data collection procedures for the goal. It now collects data on APG performance on a regular, weekly basis, and in a way that will allow for comparisons across regions. The GM Office of Strategic Planning sends out weekly updates to leaders and managers with data on how each region, and GM as a whole, is performing against weekly and annual targets for a number of key metrics. These data on performance are also being reviewed by GM leaders in monthly review meetings to see how regions are performing in relation to one another and against established targets, and to identify challenges that offices are experiencing and effective practices that could be more widely shared.
According to Commerce officials, progress on the APG is discussed in regular review meetings at the bureau and departmental levels. For example, the Deputy Under Secretary for International Trade, who oversees the daily operations of the International Trade Administration (ITA), meets regularly with the APG goal leader to discuss performance on the goal. Progress on the goal is also discussed in monthly meetings of the ITA Management Council, which consists of the senior leaders of each of ITA’s three business units. Finally, progress toward, and the management of, the APG is also discussed in department-level review meetings. Here, Commerce leaders provide APG goal representatives with specific feedback and guidance, as well as information on other agency-wide initiatives that could impact the ability of GM staff and leadership to appropriately manage the program. According to a Commerce official, the priority goal and associated measures, data collection process, reviews at multiple levels, and a system that holds staff accountable for identifying what clients want to achieve and working to deliver on those expectations have provided an increased sense of accountability and strong incentives for behavioral change amongst staff within the program. Commerce reported that in the first quarter of fiscal year 2015, 73 percent of GM clients reported that they have achieved their export objectives, exceeding the current target goal of 71 percent.

Seventeen of 22 agencies reported that their data-driven reviews have had a positive effect on the efficiency of agency or program operations. Some of our selected agencies are using their review meetings as an opportunity to review the status of management improvement initiatives and to improve the efficiency of business operations. For example, DOT’s review meetings have been used to uncover and correct inefficiencies in its hiring process, which involves multiple offices throughout DOT.
According to a Federal Railroad Administration (FRA) official we spoke with, staff were able to calculate how many days it took to complete each step in its hiring process and identify which offices were responsible for each step. The increased scrutiny this issue received through DOT’s review meetings led to improvements in the average number of days it now takes to hire a new employee. For example, for FRA, time-to-hire decreased from approximately 160 days in fiscal year 2012 to 77 days in the third quarter of fiscal year 2014. Agency officials also emphasized that review meetings that bring together leaders and officials from across an agency can increase the overall efficiency of an agency’s decision making processes. These meetings allow leaders and managers to discuss and respond to ideas, ask questions, voice concerns, and immediately make decisions on how to move forward. If these review meetings did not exist, officials explained they would likely need to hold separate meetings, with questions and answers moving up and down the agency’s lines of communication. Having all key players present in one meeting makes this entire process more efficient and allows for more timely action. As reported through our survey and discussions with agency officials, data-driven review meetings can improve agency performance and results by increasing leadership oversight and management capacity to use performance information, focusing attention on goals and priorities, identifying areas where targeted improvements are needed, and improving communication and collaboration across an agency. Our survey data suggested, however, that sustaining a data-driven review process over time and across leadership transitions can be a challenge for agencies. Ten agencies reported that it has been a challenge to continue holding reviews despite turnover of agency or priority goal leadership, with 2 reporting it has been a great challenge, 2 a moderate challenge, and 6 a small challenge.
Through our discussions with agency officials, we learned that sustaining review meetings and their positive effects requires ongoing leadership commitment and involvement, the institutionalization of review practices, and the development of a broad base of support for the reviews through a shared appreciation for the positive effects that review meetings produce. These factors build on one another, as agency leaders, participants, and those supporting the reviews engage in a cycle of ongoing actions, as shown in figure 12. First, agency officials emphasized that agency leaders’ commitment to lead, support, and remain involved in the reviews is a key element that must be in place for reviews and their positive effects to be sustained over time. The experiences of our selected agencies show that active involvement by agency leaders in review meetings is critical to establish the importance of the meetings and the clear expectation that other agency officials participate in them when called to do so. Furthermore, through their ongoing involvement in the reviews and their communication with other participants, leaders must reinforce that data-driven review meetings remain a priority and are seen as a valuable tool to achieve key agency objectives. Second, agency officials indicated that reviews in which agency leaders critically assess data on performance, and where follow-up actions are identified and tracked, should be institutionalized and made a routine part of the agency’s operations. Through GPRAMA, Congress took a critical step to help ensure agencies do this by requiring that agency leaders conduct reviews frequently and regularly. Agencies have also taken several specific steps to institutionalize their review processes. For instance, agency officials gave existing or newly established offices responsibility for supporting the review process and ensuring that meetings are carried out with regularity and consistency over time.
Staff in these offices generally manage data collection, meeting preparation, and follow-up, offer training for staff, and provide analytical expertise that helps support and inform the discussion in review meetings. Agencies also established clear expectations and procedures for how reviews would be carried out, including processes for preparation and follow-up. Institutionalizing review processes in this way, and providing sufficient institutional and staff support to manage them, can also facilitate the maintenance of review processes across leadership changes, as the process is made less dependent on the management style of a particular leader. As the cycle continues, officials also noted that agencies need to continuously assess their review processes and address any identified weaknesses by incorporating improvements that respond to the needs of leaders and participants. While it is important to have a basic approach that persists over time, the ongoing assessment and improvement of review processes will help ensure reviews are adapted to meet the needs of new leaders and participants and continue to be used in a way that sustains their positive effects. For example, Commerce and SSA have recently changed their review meetings and processes to strengthen the focus on the agencies’ respective strategic goals. Officials with DOT and GSA also indicated that they are considering and instituting changes to their reviews. According to DOT officials, the agency formed a working group that examined ways to more effectively structure DOT’s review meetings. The group has also made recommendations to the Deputy Secretary regarding the format of the meetings, participation, the presentation of data, and the best ways to follow up on identified action items. The recommendations are currently under review by DOT senior leadership. 
At GSA, under a new Acting Administrator and Acting Deputy Administrator, additional monthly and quarterly reviews of priority initiatives and APGs are being instituted, while weekly review meetings have been refocused on a subset of GSA’s key performance measures and initiatives that are most ambitious or most in need of assistance and focus. Third, agency officials emphasized that it is critical for those leading, managing, and participating in the reviews to assess, understand, and communicate the results that review meetings produce to help develop a broad base of support throughout an agency for sustaining review processes over time. According to a number of agency officials we spoke with, for agency leaders and managers—both political appointees and career agency officials—to maintain their commitment to review processes, the reviews must demonstrate that the benefits they provide outweigh the costs in time and resources spent. If the reviews show positive results, add value for participants, and have the support of senior political and career leaders who are able to articulate their merits, this will help sustain organization-wide commitment and increase the likelihood that reviews will be continued following leadership transitions. Most federal agencies reported conducting data-driven reviews frequently and regularly, involving agency leaders and other key personnel, and using the process to assess progress on agency goals and identify strategies to address challenges or improve performance. These practices are consistent with requirements, guidance, and leading practices. Our findings also underscore the value of conducting frequent, in-person, data-driven reviews as a leadership strategy and management practice that can promote the use of performance information by agency officials and produce improved results. 
Our survey results indicated that agency data-driven review meetings enhanced their progress toward the achievement of agency goals, the engagement of agency leaders in the performance management process, the level of collaboration between agency officials, the ability to hold agency officials accountable for progress on goals and milestones, and the efficiency of agency operations. Through this work, however, we found that DHS was not holding in-person, data-driven reviews of its APGs, and that four other agencies were conducting reviews in a manner that was not consistent with requirements, guidance, or leading practices. We also found that data-driven review meetings should be held on a regular, frequent schedule, actively involve senior agency leaders or other key officials, involve in-depth reviews of progress on agency goals, and be supported by rigorous methods for identifying and tracking follow-up actions. Otherwise, there could be missed opportunities for these agencies’ leaders to hold officials accountable for progress toward identified goals and milestones, to take timely and better informed action to address identified challenges, and to encourage continuous improvements in agency performance and operations.

To help ensure that agency review processes provide frequent, regular opportunities to assess progress on agency priority goals (APG), and are conducted in a manner consistent with GPRA Modernization Act of 2010 (GPRAMA) requirements, OMB guidance, and leading practices, we recommend the following actions:

That the Secretary of Agriculture work with the COO and PIO to modify the Department’s review processes to ensure that review meetings: (1) are held at least quarterly; (2) are led by the agency head or COO; (3) involve APG leaders; and (4) involve, as appropriate, agency officials with functional management responsibilities.
That the Secretary of Defense work with the COO and PIO to modify the Department’s review processes to ensure that review meetings: (1) are led by the agency head or COO; (2) are used to review progress on all APGs at least once a quarter, discuss at-risk goals and improvement strategies, and assess whether specific program activities, policies, or other activities are contributing to goals as planned; and (3) are used by participants to identify, agree upon, document and track follow-up actions.

That the Secretary of Health and Human Services work with the COO and PIO to modify the Department’s review process to ensure that progress on each APG is reviewed in an in-person review meeting at least quarterly.

That the Secretary of Homeland Security work with the COO and PIO to reestablish regular, in-person, data-driven review meetings conducted in a manner consistent with the requirements of GPRAMA, OMB guidance, and leading practices outlined in this report.

That the Secretary of State work with the COO and PIO to modify the Department’s review processes to ensure: (1) that progress on each APG is reviewed in an in-person review meeting at least quarterly; (2) that the reviews are led by the agency head or COO; and (3) that the reviews involve, as appropriate, agency officials with functional management responsibilities.
We provided a draft of this report for review and comment to the Secretaries of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, Interior, Labor, State, Transportation, Treasury, and Veterans Affairs; the Attorney General of the United States; the Directors of the Office of Management and Budget, the Office of Personnel Management, and the National Science Foundation (NSF); the Administrators of the Environmental Protection Agency, National Aeronautics and Space Administration (NASA), Small Business Administration (SBA); the Acting Administrators of the General Services Administration (GSA) and the U.S. Agency for International Development (USAID); and the Acting Commissioner of Social Security. We received written comments from the Departments of Defense (DOD), Health and Human Services (HHS), Homeland Security (DHS), and State, and the Social Security Administration (SSA). These responses are reproduced in appendixes IV through VIII. The Department of Agriculture (USDA) provided its response in an email transmitted on June 17, 2015. DHS, HHS, and USDA concurred with our recommendations. DOD and State concurred with all but one recommendation—to ensure the COO leads the reviews—with which they partially concurred. In its response, DOD concurred with our recommendations that agency leaders modify the agency’s review process to ensure that reviews are held to assess progress on APGs at least once a quarter, to identify goals at risk, to assess contributions, and to identify and track follow-up actions. DOD’s response said that the agency plans to comply with the recommendations by November 30, 2015. DOD partially concurred with our recommendation that the agency ensure the reviews are led by at least the COO. DOD, in its response, outlined a more specific role for its COO in future reviews. 
However, we do not believe this role is sufficient to bring DOD’s practices in line with the requirements of GPRAMA or with OMB guidance that the agency head and/or the COO conduct the reviews. In its response, State concurred with our recommendations that reviews should be held on a quarterly basis and that the reviews involve applicable functional management officials. State did not explicitly agree or disagree with our recommendation that the reviews be led by the agency head or COO. However, the agency’s response indicated that the agency would continue to have the PIO lead their review meetings. As outlined in the report, this is not consistent with the requirements of GPRAMA or with OMB guidance that the agency head and/or the COO conduct the reviews. We believe that DOD and State have both interpreted the language of OMB’s Circular A-11 in a way that provides them with the flexibility to delegate responsibility for conducting data-driven performance reviews to the PIO. As DOD notes, A-11 provides agencies with flexibility at key points to design a performance management system that best meets the agency’s needs. However, it is also important to emphasize that OMB’s Circular A-11 guidance clearly and unambiguously states in six separate sections that the COO is responsible for running agency reviews, and that these reviews should be held quarterly. The guidance also specifies that reviews must be held in person. As OMB has also noted, the personal engagement of agency leaders demonstrates commitment to improvement across the organization, ensures coordination across agency silos, and enables rapid decision making. The personal engagement of the COO in the data-driven reviews of progress on APGs is also critical given that, under GPRAMA, APGs are to reflect the agency’s highest priorities, and COOs are responsible for improving the management and performance of the agency through the regular assessment of progress and the use of performance information. 
For these reasons, we believe that these recommendations to follow requirements and guidance remain valid. The following agencies provided technical comments that were incorporated into the draft as appropriate: Department of Energy, NASA, NSF, SBA, SSA, USAID, and USDA. The following agencies had no comments on the draft report: The Departments of Commerce, Labor, Education, Housing and Urban Development, Interior, Justice, Treasury, and Veterans Affairs; the Environmental Protection Agency; the General Services Administration; the Office of Management and Budget; and the Office of Personnel Management. We are sending copies of this report to the Director of OMB as well as appropriate congressional committees and other interested parties. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XI. This report is part of our response to a statutory requirement that we evaluate how the implementation of the GPRA Modernization Act of 2010 (GPRAMA) is affecting performance management in federal agencies, and whether performance management is being used by agencies to improve the efficiency and effectiveness of agency programs. Specifically, this report examines (1) the extent to which agencies are conducting data-driven performance reviews in a manner consistent with GPRAMA requirements, Office of Management and Budget (OMB) guidance, and leading practices; and (2) how agency data-driven performance reviews have affected performance, collaboration, accountability, and efficiency within agencies, and how positive effects can be sustained. 
To address these objectives, we assessed review practices at the 23 executive agencies that fall under the purview of the sections of GPRAMA that relate to agency performance reviews. GPRAMA states that the 24 agencies identified by the amended Chief Financial Officers Act of 1990 (CFO Act), or those agencies otherwise determined by OMB, are required to develop agency priority goals (APG) and to review progress on these by conducting reviews at least quarterly. Because OMB did not require the Nuclear Regulatory Commission to develop APGs for 2014-2015, we confined our study to the other 23 CFO Act agencies. To address both objectives, we surveyed Performance Improvement Officers (PIOs) at the 23 executive agencies. This method allowed us to collect government-wide information about the variety of practices being used by agencies to review progress on goals. We asked PIOs for information about the frequency of review meetings; leadership of, and participation in, review meetings; preparation for, execution of, and follow-up on, review meetings; challenges; and perceived effects of review meetings on agency performance, collaboration, efficiency, and accountability for results. Although agencies may conduct a variety of reviews as part of their performance management processes, we asked respondents to consider only in-person, data-driven review meetings that included discussion of APGs when answering our survey questions. GPRAMA provisions apply directly to these types of reviews. We administered the surveys between October and December 2014, transmitting and receiving the surveys as attachments to e-mails. We received responses from all 23 agency PIOs, signifying a 100 percent response rate.
To minimize errors related to difficulties interpreting questions, we pretested the survey with three agency performance management officials to ensure that our questions were clear, complete, and unbiased, and that answering the survey did not place an undue burden on respondents. One of our methodology specialists assisted in developing our survey to ensure that survey questions captured the intended information. We revised our survey questions based on feedback from the pretesters and our methodologist. We verified our data entry and analysis programs for accuracy. To further address both objectives, we selected five case-study agencies for a more in-depth assessment of their data-driven review processes. To supplement government-wide data collected through our survey, we used these more in-depth reviews to gather information about specific agency practices and about agency officials’ perceptions of any impacts that review meetings have had. This allowed us to collect additional detail and illustrative examples. We selected a sample of agencies that reflected a range of key characteristics, while excluding agencies from our selection that had been used as case studies for related recent or ongoing work, to avoid overburdening those agencies. The key characteristics we identified were agency size, as indicated by number of civilian employees; the extent to which agency leadership uses quarterly performance reviews to drive progress toward goals, as reported by respondents to GAO’s 2013 Federal Managers Survey; and agency compliance with basic GPRAMA requirements to hold quarterly in-person performance review meetings, as identified by a GAO team conducting related work. Using these criteria, we selected the Departments of Commerce (Commerce), Health and Human Services (HHS), and Transportation (DOT); the General Services Administration (GSA); and the Social Security Administration (SSA). 
Our sample of agencies is non-generalizable, and should not be considered representative of all agencies. At each case-study agency selected for in-depth review, we also selected APGs to obtain the perspective of APG leaders and their staff on the agency’s data-driven review process. For those agencies with more than two APG leaders, we developed a selection process to determine which APG leaders we would request to interview. When possible, we excluded APG leaders who had been selected for interviews as part of related recent or ongoing work to avoid overburdening those officials. We selected goal leaders to produce a sample that reflected varied agency progress against APGs, where we excluded APGs for which progress was unclear. In cases when we had to choose between two APGs with the same type of progress, we prioritized the APG with a larger number of indicators. In cases where we selected an official who leads two APGs, we selected both APGs, resulting in three total APGs selected for the agency. To allow us to corroborate information collected through surveys, interviews, and observations, and strengthen our confidence in the reliability of the self-reported survey responses, we requested supporting documents from 12 agencies, representing more than 50 percent of the agencies surveyed, related to review meeting frequency, leadership, participation, content, and follow-up. The 12 agencies included the 5 agencies selected for more in-depth review, several agencies whose survey responses required clarification, and several additional agencies at random. Examples of documents submitted by agencies included review meeting attendance or invitations, agendas, presentation slides, and briefings and summary reports. Based on our findings, we determined that the survey data were sufficiently reliable for the purposes of this report. 
In conducting the survey and subsequent follow-up, we learned that the Department of Homeland Security (DHS) does not hold in-person review meetings, and has not done so since December 2013. For this reason, the summaries of survey responses in this report exclude DHS. We also addressed both objectives by interviewing agency officials who play a central role in the review meetings, including the agency Chief Operating Officer (COO), the PIO, and two APG leaders. These interviews provided us with detailed information from individual officials’ perspectives and helped us to corroborate information collected through other means. We used a consistent set of questions for each type of official, which included the official’s objectives for review meetings; the official’s role in preparing for, participating in, and conducting follow-up after review meetings; and the official’s experience of any effects of review meetings. We also interviewed staff from OMB and the Performance Improvement Council (PIC) to obtain information on previous surveys of agency PIOs administered by the PIC, their perspective on the implementation and effectiveness of reviews, and to learn about the role played by the PIC’s Internal Reviews Working Group, which has served as a forum for agency performance staff to periodically come together to discuss performance review practices in their agencies. We also addressed both objectives by observing agency-level review meetings at HHS, GSA, and SSA, as well as one sub-agency-level review meeting at GSA. Observing review meetings allowed us to gain firsthand knowledge of review meeting processes. This served to provide context, increase our familiarity with the process, and corroborate information gained through other means. While we requested to observe a review meeting at both Commerce and DOT, we were not allowed to do so due to agency concerns that our presence could inhibit open discussion. 
To address the first objective, we compared what we learned about the review processes at all 23 agencies with requirements for review meetings established in GPRAMA, standards set forth in guidance in OMB's Circular A-11, and leading practices for data-driven reviews we previously identified. To address the second objective, we analyzed our survey data to determine how agencies characterized the effects of their review meetings. We also used interviews with officials and documentation from our selected agencies to identify illustrative examples of the effects review meetings have produced, and to identify actions they have taken to sustain the benefits of the reviews. Because the scope of our review was to examine the implementation of data-driven review processes in agencies, we did not evaluate whether APGs were appropriate indicators of performance, sufficiently ambitious, or met other dimensions of quality. Although agency performance information was reported in illustrative examples of data-driven review meeting materials, as well as illustrative examples of the effects of review meetings, we did not independently assess the accuracy of the agency performance information cited in these examples. We conducted our work from July 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In 2014 and 2015, the Administrator of GSA convened weekly review meetings. At least once a month, the Public Buildings Service (PBS) and the Federal Acquisition Service (FAS), which had responsibility for the agency's two agency priority goals (APG), presented on their key performance measures, including APGs.
Other offices within the agency also presented on their goals once a month. The Administrator also held additional weekly meetings with the heads of PBS and FAS to discuss key performance measures, including APGs, in greater detail.

In 2014, the Deputy Secretary of Commerce held biweekly review meetings, with each meeting focusing on one to three of the agency's 27 strategic objectives and including a review of progress on related APGs. In 2015, Commerce changed the frequency of review meetings from biweekly to monthly, with each meeting focusing on progress on one of the agency's five strategic goals, including related APGs. The Deputy Secretary also held monthly check-in meetings with each of the five strategic goal teams to discuss progress on the goals and related APGs.

In 2014, the Acting Commissioner of SSA held quarterly review meetings that covered all four of SSA's APGs, as well as selected additional performance measures. In 2015, in addition to these quarterly reviews, SSA began to convene additional "theme-based" reviews to discuss cross-cutting issues, like human resource management, which, according to SSA officials, are also intended to include a discussion of relevant APGs. SSA plans to hold three of these "theme-based" reviews in 2015.

In 2014 and 2015, the Deputy Secretary of DOT held four to five "management review meetings" a year with representatives of each of DOT's 10 operating administrations. These meetings focused primarily on the status of each operating administration's pending and proposed regulations, but also included a discussion of relevant performance measures, including APGs.

In 2014 and 2015, twice a year the Deputy Secretary and Performance Improvement Officer of HHS held in-person review meetings to review progress on each of the agency's five APGs.
According to HHS officials, agency leaders reviewed written progress updates for each APG during quarters without in-person review meetings.

To address our research questions related to agency data-driven review practices and effects, we distributed a survey to the performance improvement officer (PIO) at each of the 23 agencies with agency priority goals (APGs). We received responses from all 23 PIOs. Through our survey and subsequent follow-up, however, we learned that the Department of Homeland Security (DHS) is not currently holding in-person data-driven review meetings, so its responses are not included in the aggregated results presented below. Therefore, unless otherwise indicated, the number of responses to each question is 22. There were nine questions in the survey, six of which contained multiple subquestions. Tables 1 through 9 below show our survey questions and aggregated responses. For more information about our methodology for designing and administering the survey, see appendix I.

In addition to the contact named above, Elizabeth Curda (Acting Director) and Adam Miles supervised the development of this report. Linda Collins, Shelby Kain, Kathleen Padulchick, Steven Putansu, and A.J. Stephens made significant contributions to this report. Dierdre Duffy and Robert Robinson also made key contributions.
How federal leaders manage the operations and performance of their agencies significantly affects their ability to achieve important outcomes critical to public health and safety. GAO's previous work has identified weaknesses in agencies' use of performance information that can hinder achievement of critical results. This report is part of GAO's response to a statutory requirement to review implementation of the GPRA Modernization Act of 2010 (GPRAMA). It examines (1) the extent to which agencies are conducting data-driven performance reviews consistent with GPRAMA requirements, OMB guidance, and leading practices; and (2) how reviews have affected performance, collaboration, accountability, and efficiency in agencies, and how positive effects can be sustained. GAO surveyed PIOs at 23 agencies, followed up to clarify responses, and interviewed officials involved in reviews at 5 agencies. These agencies were selected based on size and the extent to which leaders use reviews, as reported on a 2013 survey. GAO also reviewed OMB guidance and relevant documentation from agencies.

GPRAMA requires that federal agencies review progress on agency priority goals (APG) at least once a quarter. GPRAMA requires that reviews be conducted by top agency leaders, involve APG goal leaders and other contributors, and be used to identify at-risk goals and strategies to improve performance. While GPRAMA requires that agencies conduct reviews, it also required the Office of Management and Budget (OMB) to prepare guidance on its implementation. Since 2011, OMB has provided guidance on how reviews should be conducted, specifying they should be held in person. Further, GAO previously identified nine leading practices for reviews.

Agencies Reported Review Practices Consistent with Requirements and Guidance. Of the 23 agencies GAO surveyed, most reported conducting data-driven reviews consistent with requirements, guidance, and leading practices.
Specifically, most agencies reported: conducting data-driven review meetings at least once a quarter, with several agencies holding them more frequently (20 agencies); conducting Chief Operating Officer (COO)-led reviews, or reviews led jointly by the COO and Performance Improvement Officer (PIO) (19); always or often involving PIOs (22) and APG goal leaders (21) in reviews; always or often collecting and analyzing relevant data in advance of reviews, and incorporating these data into meeting materials (22); always or often using review meetings to assess APG progress (20); and always or often identifying follow-up actions to be taken after review meetings (18), an action that is positively correlated with the reported impact of reviews on agency performance improvement.

Agency Review Practices Inconsistent with Requirements and Guidance. Some agency practices were inconsistent with requirements or guidance. For instance, the Department of Homeland Security (DHS) reported that it does not hold in-person reviews, and the Departments of Agriculture (USDA) and Health and Human Services (HHS) reported that they do not hold regular, in-person reviews each quarter. The Department of State (State) reported that progress on each APG is only reviewed in an in-person review once a year, rather than each quarter, as required. The Department of Defense (DOD), USDA, and State also reported that their reviews are not led by their agency heads or COO. DOD also reported it rarely identifies follow-up actions to be taken after meetings.

Agencies Reported Positive Effects of Reviews. Most agencies reported their reviews have had positive effects on progress towards agency goals, collaboration between agency officials, the ability to hold officials accountable for progress, and efforts to improve the efficiency of operations.
According to agency officials, reviews can bring together people, analytical insights, and resources to rigorously assess progress on goals or milestones, develop collaborative solutions to problems, enhance individual and collective accountability for performance, and review efforts to improve efficiency. Agencies reported that sustaining these effects requires ongoing leadership commitment, institutionalizing review processes, and demonstrating value to participants. To ensure that agency reviews are consistent with requirements, guidance, and leading practices, GAO is making recommendations to five agencies. DHS, HHS, and USDA concurred with the recommendations. DOD and State concurred with all but one recommendation—to ensure the COO leads the reviews—with which they partially concurred. GAO believes these recommendations are valid, as discussed in the report.
The LDA, as amended by the Honest Leadership and Open Government Act of 2007 (HLOGA), requires lobbyists to register with the Secretary of the Senate and the Clerk of the House and file quarterly reports disclosing their activities. No specific requirements exist for lobbyists to create or maintain documentation in support of the registrations or reports they file. Under the LDA, lobbyists are required to file their registrations and reports electronically with the Secretary of the Senate and the Clerk of the House through a single entry point. The LDA also provides that registrations and reports must be available in downloadable, searchable databases from the Secretary of the Senate and the Clerk of the House. The LDA defines a "lobbyist" as an individual who is employed or retained by a client for compensation, who has made more than one lobbying contact (written or oral communication to a covered executive or legislative branch official made on behalf of a client), and whose lobbying activities represent at least 20 percent of the time that he or she spends on behalf of the client during the quarter. Lobbying firms are persons or entities that have one or more employees who are lobbyists on behalf of a client other than that person or entity. Lobbying firms are required to file a registration with the Secretary of the Senate and the Clerk of the House for each client if the lobbying firm receives over $3,000 in income from that client for lobbying activities. Lobbyists are also required to submit a quarterly report, an LD-2 report, for each registration filed.
The registration and subsequent LD-2 reports must disclose:
- the name of the organization, lobbying firm, or self-employed individual that is lobbying on the client's behalf;
- a list of individuals who acted as lobbyists on behalf of the client during the reporting period;
- whether any lobbyists served as covered executive branch or legislative branch officials in the previous 20 years;
- the name of and further information about the client, including a general description of its business or activities;
- information on the general issue area and specific lobbying issues;
- any foreign entities that have an interest in the client;
- the client's status as a state or local government;
- information on which federal agencies and house(s) of Congress the lobbyist contacted on behalf of the client during the reporting period;
- the amount of income related to lobbying activities received from the client (or expenses for organizations with in-house lobbyists) during the quarter, rounded to the nearest $10,000; and
- a list of constituent organizations that contribute more than $5,000 for lobbying in a quarter and actively participate in planning, supervising, or controlling lobbying activities, if the client is a coalition or association.

The LDA also requires lobbyists to report certain contributions semiannually in the contributions report, or the LD-203 report. These reports must be filed 30 days after the end of a semiannual period by each organization registered to lobby and by each individual listed as a lobbyist on an organization's lobbying reports. The lobbyists or organizations must list the name of each federal candidate or officeholder, leadership political action committee, or political party committee to which they made contributions equal to or exceeding $200 in the aggregate during the semiannual period. The lobbyists or organizations must also report contributions made to presidential library foundations and presidential inaugural committees.
In addition, the lobbyists or organizations must report funds contributed to pay the cost of an event to honor or recognize a covered official, funds paid to an entity named for or controlled by a covered official, and contributions to a person or entity in recognition of an official or to pay the costs of a meeting or other event held by or in the name of a covered official. Finally, the LD-203 report requires lobbyists or organizations to certify that they have read and are familiar with the gift and travel rules of the Senate and House and that they have not provided, requested, or directed a gift or travel to a member, officer, or employee of Congress that would violate those rules. Each individual lobbyist and organization must file an LD-203 report each period and certify compliance with the gift and travel rules, even if there are no contributions to report. The U.S. Attorney's Office for the District of Columbia (the Office) is responsible for the enforcement of the LDA. The Office fulfills its administrative responsibilities by researching and responding to referrals of noncomplying lobbyists submitted by the Secretary of the Senate and Clerk of the House. The Office sends additional noncompliance notices to the lobbyists, requesting that the lobbyists file reports or correct reported information. The Office also has the authority to pursue a civil or criminal case for noncompliance. An electronic system has been developed and implemented in response to a recommendation in our prior report, specifically to address issues we raised regarding the tracking, analysis, and reporting of enforcement activities for lobbyists who were referred to the Office for failure to comply. Officials from the Office said that the new system is designed to provide a more structured approach for assigning resources and to better focus lobbying disclosure compliance enforcement efforts.
The new system is intended to track and record enforcement activities, record the status and disposition of lobbyists' cases, provide automated alerts to ensure timely follow-up and monitoring, provide the ability to track those who continually fail to comply with the LDA, and use data to report statistical trends to track the effectiveness of enforcement activities. Neither the LDA nor guidance requires lobbyists to maintain records or documentation to support information disclosed in their reports. However, similar to our findings in last year's review, most lobbyists reporting $5,000 or more in income or expenses were able to provide written support for certain elements of individual activity reports we examined. For example, most lobbyists were able to provide documentation to support income- or expense-related elements of their reports. We estimate that lobbyists could provide written documentation for income or expenses for an estimated 88 percent of the disclosure reports for the fourth quarter of 2008 and the first three quarters of 2009. For all but 11 of the 112 sampled reports that reported income, and all but 5 of the 22 sampled reports that reported lobbying expenses, lobbyists provided some form of documentation for the dollar amounts reported. The most common form of income documentation provided was invoices (an estimated 68 percent of all reports with income), followed by contracts (an estimated 24 percent of all reports with income). Also, we estimate that lobbying firms were able to provide documentation that all lobbyists listed on the disclosure report were employed as lobbyists at the lobbying firm during the reporting period for an estimated 89 percent of reports that required this information. More than half of lobbyists in our sample were able to provide documentation to support all of the entities they reportedly lobbied during the reporting period.
Lobbyists are required to disclose if they lobbied covered officials at the House of Representatives, the Senate, one or more executive branch agencies, or a combination of these entities. For close to three quarters of reports disclosing House or Senate lobbying activity (an estimated 70 percent), lobbyists had documentation to support the House and Senate lobbying contacts they disclosed. However, lobbyists that reported contacts with agencies were only able to provide documentation for about half of reports (31 of 66 reports we reviewed) to support the agency lobbying contacts they reported in the disclosure reports. Too few reports in our sample disclosed foreign entities, affiliated organizations, and the names of individuals no longer acting as lobbyists to provide reliable estimates of levels of written documentation in support of the reports that required this information. Lobbyists did not disclose covered official positions previously held by individual lobbyists on at least 6 of the 131 applicable reports we reviewed. Based on this information, we estimate that a minimum of 2 percent of all disclosure reports fail to fully disclose whether the individual lobbyists for a specific client held a covered official position. Lobbyists gave several reasons for not including previously held covered official positions, typically indicating that they misunderstood the requirements or did not realize the position held qualified as a covered official position. To correct errors or omissions, 12 lobbyists amended 12 of the 134 disclosure reports in our sample prior to our review. Additionally, 15 lobbyists indicated that they planned to amend their disclosure reports after our review. As of March 18, 2010, 7 of the 15 lobbyists had amended their disclosure reports. Indicating that a lobbyist held a covered official position, changing income or expense amounts, or disclosing a foreign entity were the most commonly cited reasons for filing amendments. 
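The report does not spell out the estimation procedure behind "minimum" figures such as "at least 6 of the 131 applicable reports" translating to "a minimum of 2 percent." As a rough illustration only, a one-sided lower confidence bound on a sample proportion is one common way such a floor can be derived; the sketch below uses a simple normal approximation and assumes simple random sampling (the report's actual methodology may differ, for example by applying finite-population corrections):

```python
import math

def lower_bound(successes: int, n: int, z: float = 1.645) -> float:
    """One-sided lower confidence bound for a proportion using the
    normal approximation; z = 1.645 corresponds to roughly 95 percent
    one-sided confidence."""
    p_hat = successes / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - z * se)

# 6 of 131 applicable sampled reports omitted a covered official position.
lb = lower_bound(6, 131)
print(f"estimated minimum share of reports with omissions: {lb:.1%}")
```

The bound is always below the raw sample proportion (about 4.6 percent here), which is why a "minimum" estimate reads lower than the observed rate in the sample.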
Although the LDA and guidance do not require lobbyists to maintain records or documentation to support information disclosed in their reports, many of the lobbyists we spoke with had systems to track lobbying contacts and the amount of time spent on lobbying activities. In an estimated 79 percent of reports (106 of 134 we reviewed), lobbyists reported having a method or system in place to track lobbying contacts and activities. We estimate that 57 percent of all reports with tracking methods monitored actual time spent lobbying on behalf of the client as a means of tracking lobbying activities. In addition, we estimate that meetings were tracked to support the information in 75 percent of reports where a tracking system was used, e-mails for 53 percent of reports where a tracking system was used, and telephone conversations to identify and document the work the lobbyist performed on behalf of a client for 49 percent of reports where a tracking system was used. As previously noted, all individual lobbyists and organizations reporting specific lobbying activity are required to file LD-203 reports each period, even if they have no contributions to report, because they must certify compliance with the gift and travel rules. As part of our LD-2 report analysis, we checked the House database to ensure that each lobbyist and organization listed on the LD-2 report filed an LD-203 report during the reporting period. For an estimated 84 percent of lobbying reports where this information was required (110 of the 131 applicable reports in our sample), the LD-203 reports were filed as required by both the lobbyists and the lobbying firms during the reporting periods in question. Some lobbyists told us that the guidance on the LD-203 report was confusing. For example, several lobbyists told us that they were confused regarding whether to file an LD-203 report if the lobbyist or lobbying firm did not make any political contributions during the reporting period. 
Individual lobbyists and lobbying organizations are required to file federal campaign and political contributions reports, even if they did not make any contributions during the reporting period. In addition to the brief check of LD-203 compliance listed above, we conducted a detailed analysis of LD-203 reports, sampling 100 reports that list contributions and 100 reports that list no contributions made during the reporting period. Lobbyists or lobbying firms could support all listed contributions with documentation for approximately 82 percent (82 of 100) of the contribution reports listing contributions that we reviewed; documentation included data from the FEC disclosure database, canceled checks, invoices, or letters. Of the 100 contribution reports in our sample listing no contributions, we confirmed that 97 did not have clearly corresponding contributions listed in the FEC database during the reporting period, while 3 (or 3 percent) failed to list at least one donation that should have been disclosed. Table 1 shows the number of LD-203 reports with contributions that were supported in the FEC database and LD-203 reports that were missing contributions. Based on the 18 reports in our sample of reports with contributions that failed to report all contributions, we estimate that at least 12 percent of all reports listing contributions are missing one or more contributions. Of those 18 reports, only 9 were missing more than one contribution. Eleven filers said that they did not report the information we found in the FEC database because of an oversight and plan to amend their reports.
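The cross-check described above, comparing LD-203 reports that list no contributions against FEC records, can be sketched in code. The data structures, names, and the per-recipient application of the $200 aggregation threshold below are simplifications for illustration; real FEC bulk data carries many more fields, and matching a filer's name to FEC contributor records is itself a nontrivial step:

```python
# Hypothetical FEC records keyed by contributor name (lowercased).
fec_contributions = {
    "jane doe": [{"recipient": "Candidate A", "amount": 500}],
    "john roe": [],
}

# Hypothetical LD-203 reports that listed no contributions for the period.
no_contribution_reports = [
    {"filer": "Jane Doe"},
    {"filer": "John Roe"},
]

def flag_omissions(reports, fec_db, threshold=200):
    """Flag filers who reported no contributions but appear in FEC records
    with aggregate contributions to some recipient at or above the
    $200 disclosure threshold."""
    flagged = []
    for report in reports:
        records = fec_db.get(report["filer"].lower(), [])
        # Aggregate per recipient, since the threshold applies in the
        # aggregate to each candidate, officeholder, or committee.
        totals = {}
        for rec in records:
            totals[rec["recipient"]] = totals.get(rec["recipient"], 0) + rec["amount"]
        if any(total >= threshold for total in totals.values()):
            flagged.append(report["filer"])
    return flagged

print(flag_omissions(no_contribution_reports, fec_contributions))
```

Here "Jane Doe" would be flagged as a possible omission while "John Roe" would be confirmed as having no corresponding FEC contributions, mirroring the 97-versus-3 split described above.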
Overall, we estimate that a minimum of 5 percent of all LD-203 reports—whether they listed contributions or not—omitted one or more donations that were required to have been disclosed. To determine whether new registrants were meeting the requirement to file, we matched newly filed registrations in the fourth quarter of 2008 and the first, second, and third quarters of 2009 from the House and Senate Lobbyist Disclosure Databases to their corresponding quarter disclosure reports using an electronic matching algorithm that allowed for misspelling and other minor inconsistencies between the registrations and reports. Our analysis showed that of the 6,184 new registrations we identified in fiscal year 2009, the majority (5,489 or 89 percent) had clearly corresponding disclosure reports on file, indicating that the requirement for these lobbyists to file reports for specific clients was generally met. We could not readily identify corresponding reports of lobbying activity for 695 (approximately 11 percent) of the 6,184 new registrations, likely because either a report was not filed or reports that were filed contained information, such as client names, that did not match. The Clerk of the House and Secretary of the Senate routinely review the completeness of registrations and reports and follow up with lobbyists. Similar to our findings in prior reviews of lobbying disclosure, some lobbyists may not fully understand the law and therefore did not properly disclose information. Some lobbyists said that they thought the reporting requirements were clear and the Secretary of the Senate and Clerk of the House staff were helpful in providing clarifications when needed. However, our review of lobbyists’ documentation and some lobbyists’ statements highlights areas of inconsistency in reporting information on the LD-2 report and the LD-203 report. 
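The report does not describe the internals of the electronic matching algorithm that tolerated misspellings and minor inconsistencies between registrations and reports. A minimal sketch of that kind of tolerant matching, using Python's standard-library difflib similarity ratio on normalized registrant and client names, might look as follows (all records here are hypothetical, and the 0.85 similarity threshold is an arbitrary illustrative choice):

```python
import difflib

def normalize(name: str) -> str:
    """Lowercase and strip punctuation and extra whitespace so minor
    formatting differences do not block a match."""
    cleaned = "".join(c for c in name.lower() if c.isalnum() or c.isspace())
    return " ".join(cleaned.split())

def find_report(registration, reports, threshold=0.85):
    """Return the first report whose (registrant, client) pair is
    sufficiently similar to the registration's, or None."""
    reg_key = normalize(registration["registrant"] + " " + registration["client"])
    for report in reports:
        rep_key = normalize(report["registrant"] + " " + report["client"])
        if difflib.SequenceMatcher(None, reg_key, rep_key).ratio() >= threshold:
            return report
    return None

# Hypothetical records: the match tolerates punctuation and case differences.
registrations = [{"registrant": "Acme Lobbying LLC", "client": "Widget Makers Assn"}]
reports = [{"registrant": "ACME Lobbying, LLC", "client": "Widget Makers Assn."}]

matched = [r for r in registrations if find_report(r, reports) is not None]
unmatched = [r for r in registrations if find_report(r, reports) is None]
```

Registrations left in `unmatched` correspond to the roughly 11 percent of new registrations for which no clearly corresponding report could be identified, whether because no report was filed or because the filed report's details, such as the client name, diverged too far to match.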
For example, our review identified that lobbyists in our sample inconsistently reported "covered official positions" previously held by individuals. As stated earlier, covered official positions include elected members of either house of Congress, employees of a member or a committee, and certain high-level positions in the executive branch. Guidance published by the Clerk of the House and Secretary of the Senate advises registrants to disclose new lobbyists who are not listed on a client registration on the quarterly disclosure report and include any covered executive or legislative branch official positions the new lobbyists held within 20 years of that filing. Lobbyists in our sample disclosed their covered official positions in a variety of ways. While guidance only directs filers to list the covered official positions on forms denoting the lobbyists as new lobbyists, in several reports in our sample lobbyists reported covered official positions on more than one LD-2 report, even if a lobbyist was not listed as "new." A few other lobbying firms amended the client registration (LD-1) to include new lobbyists in addition to new or previously undisclosed covered official positions and therefore left subsequent quarterly disclosure reports blank. Lobbyists told us that they were unclear about the frequency with which they had to disclose their covered official position, specifically, whether they had to disclose the covered official position on the LD-1, the LD-2 report, or both. The LDA and lobbying guidance direct lobbyists to disclose covered official positions on either the initial client registration (LD-1) or on subsequent LD-2 quarterly reports as lobbyists are added. In addition, some lobbyists cited difficulty determining whether the previous positions held within the executive or legislative branches were covered positions. In addition, several lobbyists told us that they were unsure about when and how to terminate lobbyists from LD-2 reports.
House and Senate guidance directs registrants to list terminated lobbyists, or lobbyists no longer expected to act as lobbyists for a given client, in line 23 of the LD-2 report. Several lobbyists indicated that they were not sure if they needed to terminate a lobbyist who did not actively lobby on behalf of a client for a given reporting period, or if they only needed to terminate lobbyists when they were certain the lobbyists would not lobby for the client at all in the future. The guidance states that a lobbyist can be left off an LD-2 report (without being terminated) if the lobbyist did not meet the LDA's definition of lobbyist for that client in the current or next quarter. The guidance advises that lobbyists should be terminated if they are no longer expected to lobby on behalf of that client in the future, as a lobbyist's job duties, assignments, or employment status change. Lobbyists also told us that they found meeting the deadline for filing disclosure reports difficult because of administrative constraints. The deadline for filing disclosure reports is 20 days after each reporting period, or the first business day after the 20th day if the 20th day is not a business day. Prior to enactment of the HLOGA, the deadline for filing disclosure reports was 45 days after the end of each reporting period. The lobbyists cited limitations of their own record-keeping systems and, in some cases, the large volume of disclosure reports that needed to be filed as the specific reasons why meeting the deadline was challenging. The LDA requires the Secretary of the Senate and Clerk of the House to provide guidance and assistance on registration and reporting requirements and to develop common standards, rules, and procedures for compliance. The guidance is revised every 6 months based on comments the Secretary of the Senate and Clerk of the House receive.
The guidance may also be revised when issues arise as a result of statutory and administrative responsibilities. The Office fulfills its responsibility for enforcing compliance with the LDA by researching and responding to referrals of noncomplying lobbyists forwarded from the Secretary of the Senate and the Clerk of the House. The Office reviews these referrals and sends additional noncompliance notices to the lobbyists, when warranted, requesting that they file reports or correct reported information. Continued failure to comply may lead the Office to prosecute. Officials from the Office have made progress in developing an electronic system to address issues we raised in our prior report regarding the tracking, analysis, and reporting of enforcement activities. Our prior report recommended that the Office complete efforts to develop a structured approach that would require it to track referrals when they are made, record reasons for referrals, record the actions taken to resolve them, and assess the results of actions taken. The new tracking system became operational in April 2009, and officials from the Office stated that the system has enhanced their ability to enforce lobbyists' compliance with the LDA. The system allows officials from the Office to track referral and enforcement actions and to monitor lobbyists who continually fail to file the required disclosure reports. The Office has completed entering referral data from prior years and is continuing to update the system by inputting referral data received from the Secretary of the Senate and the Clerk of the House. The information is used to produce referral actions, referral summaries, and summary reports of chronic offenders who are found repeatedly out of compliance with the LDA. Officials from the Office stated that the system has provided easy access to reporting data since it became operational in 2009 and has the potential to target enforcement actions.
However, the system and summary information are still being refined, and the Office has not instituted procedures to ensure data are accurate and reliable. One such procedure may be to establish reliability checks to ensure that data added to the system are accurate and to run system tests to ensure that there are no programming errors. Officials from the Office stated that they recognize the importance of establishing reliability checks and plan to institute such assessments in the next few months. The number of lobbyists referred to the Office has increased, as expected, because of the LDA's new requirement to disclose federal campaign and other political contributions by filing LD-203 reports in addition to disclosing lobbying activity on LD-2 reports. In 2009 the Office received referrals from both the Secretary of the Senate and the Clerk of the House for noncompliance with reports filed for the 2007 and 2008 reporting periods. In addition, the Office received referrals for the first three quarters of 2009 from the Secretary of the Senate. In January 2010, the Office received referrals for quarters one and two of the 2009 reporting period, but the data have not been entered into the system, so the numbers are not yet known. As of March 4, 2010, the Office had not received referrals from the Clerk of the House for the third quarter of the 2009 reporting period. The Clerk of the House takes longer to send referrals because this office uses different referral procedures, such as reviewing the data for duplicate referrals before they are sent to the Office. Referrals are not made immediately after the filing period.
There is a minimum of 120 days between the end of the filing period and the date referrals are sent because the Secretary of the Senate and Clerk of the House send referrals only after they have reviewed their respective databases for missing or erroneous reports, twice contacted lobbyists by letter to inform them of the need to remedy errors or file missing reports, and allowed 60 days for lobbyists to respond to each letter. The Office received a total of 368 referrals for noncompliance with disclosure requirements from the Secretary of the Senate and Clerk of the House for the 2007 calendar year. The 2007 referrals were for LD-2 reports that were disclosed before the enactment of the HLOGA and therefore were submitted semiannually instead of quarterly and did not include referrals for LD-203 reports. Table 2 shows the number of LD-2 referrals received from the Secretary of the Senate and Clerk of the House as well as the number of noncompliance letters the Office sent to lobbyists as a result of these referrals. According to the Office, the number of referrals from the Secretary of the Senate is larger than the number from the Clerk of the House because of differences in their referral procedures. The Office received a total of 1,099 referrals from the Secretary of the Senate and the Clerk of the House for noncompliance with the quarterly LD-2 reporting requirements for periods after the enactment of the HLOGA. As of March 26, 2010, the Office had received 730 LD-2 referrals for calendar year 2008 from the Secretary of the Senate and the Clerk of the House. Additionally, the Office has received 369 LD-2 referrals for the first three quarters of 2009 from the Secretary of the Senate. As previously stated, the Office received referrals from the Clerk of the House for the first two quarters of 2009 in January 2010, but the data have not been entered into the system and the numbers are not yet known. 
The Office has not received referrals from the Clerk of the House for the third quarter of the 2009 reporting period. The Clerk of the House takes longer to send referrals because the office uses different referral procedures, such as reviewing the data for duplicate referrals before they are sent to the Office. Table 3 shows the number of referrals received from the Secretary of the Senate and Clerk of the House as well as the number of noncompliance letters the Office sent to lobbyists as a result of these referrals. The Office has also received referrals for noncompliance with the HLOGA requirement to file LD-203 reports. To date, the Office has received 2,486 LD-203 noncompliance referrals for the 2008 calendar year and 194 LD-203 noncompliance referrals for the first half of the 2009 calendar year. Officials from the Office stated that, similar to the LD-2 referrals, the number of LD-203 referrals from the Secretary of the Senate is larger than the number from the Clerk of the House because of differences in their referral procedures. The Office has not yet sent letters of noncompliance with the LD-203 requirement; officials from the Office stated that they plan to send these letters in May 2010. Table 4 shows the number of LD-203 noncompliance referrals received from the Secretary of the Senate and the Clerk of the House. To enforce LDA compliance, the Office has primarily focused on sending letters to lobbyists who have potentially violated the LDA by not filing disclosure reports as required. The letters request that the lobbyists comply with the law and promptly file the appropriate disclosure documents. Resolution typically involves the lobbyists coming into compliance. In response to the 653 letters sent by the Office in 2007, 2008, and 2009, 163 lobbyists have come into compliance. 
Because there is a time lapse between when the Secretary of the Senate and the Clerk of the House send the first contact letters and when they make referrals to the Office, lobbyists may have responded to the contact letters from the Secretary of the Senate and Clerk of the House after referrals have been received by the Office. As a result, the Office reviews the Secretary of the Senate and Clerk of the House databases to determine whether a lobbyist has already resolved the compliance issue before sending out its own letters. In addition, the Office attempts to verify the lobbyist’s address when letters are returned or no response is received. Table 5 shows the status of enforcement actions as a result of noncompliance letters the Office sent to lobbyists. In our 2008 lobbying disclosure report, we noted that the Office had settled with three lobbyists and collected civil penalties totaling about $47,000 in 2005. All of the settled cases involved a failure to file. Since then, no additional settlements or civil actions have been pursued, although the Office is following up on hundreds of referrals each year. In our 2009 lobbying disclosure report, we also reported that the Office had identified six lobbyists whose names appeared frequently in the referrals and sent them letters more targeted toward repeat nonfilers. Four of these lobbyists have resolved their noncompliance issues, and the Office continues to consider further enforcement actions for the other two. Officials from the Office stated that they plan to use information from the Chronic Offenders Report generated by their tracking and monitoring system to begin targeting additional repeat nonfilers in the summer. We provided a draft statement of the facts contained in this report to the Department of Justice (DOJ) for review and comment. We met with the Assistant U.S. 
Attorney for the District of Columbia, who on behalf of DOJ provided us with technical comments, which we incorporated as appropriate, but did not otherwise comment on the report. We are sending copies of this report to the Attorney General, Secretary of the Senate, Clerk of the House of Representatives, and interested congressional committees and members. This report also is available at no charge on the GAO Web site at http://www.gao.gov. Please contact Laurie Ekstrand at (202) 512-6845 or [email protected] if you or your staffs have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. The Honorable Edolphus Towns, Chairman, and the Honorable Darrell Issa, Ranking Member, Committee on Oversight and Government Reform, House of Representatives; the Honorable John Conyers, Jr., Chairman, and the Honorable Lamar Smith, Ranking Member, Committee on the Judiciary, House of Representatives. Consistent with the requirements of the Honest Leadership and Open Government Act of 2007, our objectives were to determine the extent to which lobbyists can demonstrate compliance by providing support for information on registrations and reports filed in response to requirements of the amended Lobbying Disclosure Act of 1995 (LDA); identify the challenges and potential improvements to compliance by lobbyists, lobbying firms, and registrants; and describe the efforts the U.S. Attorney’s Office for the District of Columbia (the Office) has made to improve its enforcement of the LDA, including identifying trends in past lobbying disclosure compliance. To respond to our mandate, we used information in the lobbying disclosure databases maintained by the Secretary of the Senate and the Clerk of the House of Representatives. 
To assess whether these disclosure data were sufficiently reliable for the purposes of this report, we reviewed relevant documentation and spoke to officials responsible for maintaining the data. Although registrations and reports are filed through a single Web portal, each chamber subsequently receives copies of the data and follows different data cleaning, processing, and editing procedures before storing the data in either individual files (in the House) or databases (in the Senate). Currently, there is no means of reconciling discrepancies between the two databases that result from chamber differences in data processing. For example, Senate staff told us that they set aside a greater proportion of registration and report submissions than the House for manual review before entering the information into the database, and as a result, the Senate database would be slightly less current than the House database on any given day pending review and clearance. House staff told us that they rely heavily on automated processing, and that while they manually review reports that do not perfectly match information on file for a given registrant or client, they will approve and upload such reports as originally filed by each lobbyist even if the reports contain errors or discrepancies (such as a variant on how a name is spelled). Nevertheless, we do not have reason to believe that the content of the House and Senate systems would vary substantially. While we determined that both the House and Senate disclosure data were sufficiently reliable for identifying a sample of quarterly disclosure reports (LD-2 reports) and for assessing whether newly filed registrants also filed required reports, we chose to use data from the Clerk of the House for sampling LD-2 reports from the last quarter of 2008 and the first three quarters of 2009, for sampling year-end 2008 and midyear 2009 contribution reports (LD-203 reports), and for matching quarterly registrations with filed reports. 
We did not evaluate the Offices of the Secretary of the Senate or the Clerk of the House—both of which have key roles in the lobbying disclosure process—although we met with officials from each office, and they provided us with general background information at our request and detailed information on data processing procedures. To assess the extent to which lobbyists could provide evidence of their compliance with reporting requirements, we examined a systematic random sample of 134 LD-2 reports. We excluded reports with no income or with income and expenses less than $5,000 from our sampling frame and drew our sample from 53,756 activity reports filed for the last quarter of 2008 and the first three quarters of 2009 available in the public House database, as of our final download date for each quarter. There are 3 LD-2 reports in the total sample that indicated “no lobbying activity” but listed lobbying income for the quarter. We conducted reviews of these reports because the income was disclosed in accordance with LDA reporting requirements, but since “no lobbying activity” was indicated, lobbyists were not required to provide information for all reporting elements on the LD-2 report. Therefore, in certain calculations these 3 reports are excluded from the sample. Our sample is based on a systematic random selection, and it is only one of a large number of samples that we might have drawn. We sorted firms by the number of LD-2 reports they filed and then drew a systematic sample of LD-2 reports to ensure that our sample contained reports from firms of all sizes. Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples that we could have drawn. 
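The interval arithmetic behind these estimates can be sketched in Python. This is a simplified illustration using the normal approximation for a sample proportion; the 119-of-134 figure is hypothetical, and the sketch ignores the finite-population and design adjustments a survey statistician would apply to a systematic sample.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Two-sided 95% normal-approximation confidence interval
    for a sample proportion (z = 1.96 for 95% coverage)."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

def one_sided_lower_bound(successes, n, z=1.645):
    """One-sided 95% lower bound -- the kind of conservative
    'minimum percentage' estimate the methodology describes."""
    p = successes / n
    return max(0.0, p - z * math.sqrt(p * (1 - p) / n))

# Hypothetical example: 119 of 134 sampled reports had documentation.
p, lo, hi = proportion_ci(119, 134)
```

With these illustrative numbers the point estimate is about 89 percent, with a two-sided interval of roughly 83 to 94 percent; the one-sided bound yields a higher minimum estimate than the two-sided interval's lower endpoint, which is why it is used for conservative minimum or maximum figures.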
All percentage estimates in this report have 95 percent confidence intervals of within plus or minus 9.7 percentage points of the estimate itself, unless otherwise noted. When estimating compliance with certain of the elements we examined, we base our estimate on a one-sided 95 percent confidence interval to generate a conservative estimate of either the minimum or maximum percentage of reports in the population exhibiting the characteristic. We contacted all the lobbyists and lobbying firms in our sample and asked them to provide support for key elements in their reports, including the amount of income reported for lobbying activities, the amount of expenses reported on lobbying activities, the names of lobbyists who had held covered official positions, the houses of Congress and federal agencies that they lobbied, the names of foreign entities with interest in the client, the names of individuals no longer acting as lobbyists for the client, and the names of any member organizations of a coalition or association that actively participated in lobbying activities on behalf of the client. In addition, we determined whether each individual lobbyist listed on the LD-2 report had filed a semiannual LD-203 report. Prior to interviewing lobbyists about each LD-2 report in our sample, we conducted an open-source search to determine whether each lobbyist listed on the report appeared to have held a covered official position required to be disclosed. For lobbyists registered prior to January 1, 2008, covered official positions held within 2 years of the date of the report must be disclosed; this period was extended to 20 years for lobbyists who registered on or after January 1, 2008. 
Lobbyists are required to disclose covered official positions on either the client registration (LD-1) or on the first LD-2 report for a specific client, and consequently those who had held covered official positions may have disclosed the information on an LD-2 report filed prior to the report we examined as part of our random sample. To identify likely covered official positions, we examined lobbying firms’ Web sites and conducted an extensive open-source search of Leadership Directories, Who’s Who in American Politics, Carroll’s, and U.S. Newspapers through Nexis and Google for lobbyists’ names and variations on their names. We then asked lobbying firms and organizations about each lobbyist listed on the LD-2 report that we had identified as having a previous covered official position to determine whether the LD-2 report appropriately disclosed covered official positions or whether there was some other acceptable reason for the omission (such as its having been disclosed on an earlier registration or LD-2 report). Despite our rigorous search protocol, it is possible that our search failed to identify omitted reports of covered official positions. Thus, our estimate of the proportion of reports with lobbyists who failed to appropriately disclose covered official positions is a conservative lower bound on the proportion of reports that failed to disclose such positions. In addition to examining the content of LD-2 reports, we confirmed whether midyear LD-203 reports had been filed for each firm and lobbyist listed on the LD-2 reports in our random sample. Although this review represents a random selection of lobbyists and firms, it is not a direct probability sample of firms filing LD-2 reports or lobbyists listed on LD-2 reports. As such, we did not estimate the likelihood that LD-203 reports were appropriately filed for the population of firms or lobbyists listed on LD-2 reports. 
To determine if the LDA’s requirement for registrants to file a report in the quarter of registration was met for the fourth quarter of 2008 and the first, second, and third quarters of 2009, we used data filed with the Clerk of the House to match newly filed registrations with corresponding disclosure reports. Using direct matching and text and pattern matching procedures, we were able to identify matching disclosure reports for 5,489 of the 6,184, or 89 percent, of the newly filed registrations. We first matched reports and registrations using both the registrant and client identification numbers. For reports we could not match by identification number, we also attempted to match reports and registrations by client and registrant name, allowing for variations in the names to accommodate minor misspellings or typos. We could not readily identify matches in the report database for the remaining registrations using electronic means. To assess the accuracy of the LD-203 reports, we analyzed two random samples of LD-203 reports from the 33,500 total LD-203 reports. The first sample contains 100 reports of the 10,928 reports with political contributions and the second contains 100 reports from the 22,572 reports listing no contributions. Each sample contains 50 reports from the year-end 2008 filing period and 50 reports from the midyear 2009 filing period. The samples allow us to generalize estimates in this report to either the population of LD-203 reports with contributions or the reports without contributions to within a 95 percent confidence interval of plus or minus 7.7 percentage points or less, and to within 2.9 percentage points of the estimate when analyzing both samples together. We analyzed the contents of the LD-203 reports and compared them with contribution data in the Federal Election Commission’s (FEC) publicly available political contribution database. 
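The two-pass matching described above, exact identification-number matching followed by name matching that tolerates minor spelling variations, can be sketched as follows. The record layout and field names (registrant_id, client, and so on) are illustrative assumptions rather than the actual structure of the House database, and real name matching would likely use a more forgiving edit-distance comparison than this simple normalization.

```python
import re

def normalize(name):
    """Lowercase a name and strip punctuation and extra spaces,
    so 'Acme Lobbying, L.L.C.' and 'ACME Lobbying LLC' compare equal."""
    return re.sub(r"\s+", " ", re.sub(r"[^a-z0-9 ]", "", name.lower())).strip()

def match_registrations(registrations, reports):
    """First pass: exact match on (registrant_id, client_id).
    Second pass: match on normalized registrant and client names.
    Field names are hypothetical."""
    id_pairs = {(r["registrant_id"], r["client_id"]) for r in reports}
    name_pairs = {(normalize(r["registrant"]), normalize(r["client"]))
                  for r in reports}
    matched, unmatched = [], []
    for reg in registrations:
        if (reg["registrant_id"], reg["client_id"]) in id_pairs:
            matched.append(reg)
        elif (normalize(reg["registrant"]), normalize(reg["client"])) in name_pairs:
            matched.append(reg)
        else:
            unmatched.append(reg)
    return matched, unmatched
```

Registrations that fall through both passes correspond to the roughly 11 percent for which no matching report could be identified electronically.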
In our prior report, we interviewed staff at the FEC responsible for administering the database and determined that the data were sufficiently reliable for the purpose of confirming whether an FEC-reportable disclosure listed on an LD-203 report had, in fact, been reported to the FEC. We compared several factors of contributions reported on both the FEC database and the LD-203 reports, including the number of contributions, the dollar amount of contributions, the date contributions were made, and to whom contributions were made. We were able to readily verify the majority of listed contributions using the FEC database. The verification process required text and pattern matching procedures, and we used professional judgment when assessing whether an individual listed is the same individual filing an LD-203. Given the lag time between when a lobbyist or organization might make a contribution and when a political action committee (PAC) or campaign might cash or report the contribution, some flexibility had to be built into the analysis when examining the dates of entries. As with covered positions on LD-2 disclosure reports, we cannot be certain that our review identified all cases of FEC-reportable contributions that were inappropriately omitted from a lobbyist’s LD-203 report. For FEC-reportable contributions that could not be readily matched in the FEC database (perhaps as a result of delays in a PAC’s or campaign’s filing of the contribution or discrepancies between the name on the LD-203 report and the name on the FEC filing), we contacted each lobbyist to ask for documentation of the contribution. In several cases, the contribution reported had not been processed by the campaign or had been refunded to the donor and therefore did not appear in a campaign’s FEC filing. We also asked lobbyists to document reports of honorary and meeting expenses that were not reported to the FEC. 
Lobbyists were able to provide supplementary documentation for most honorary and meeting expenses, as well as the majority of other contributions we asked about, in the form of invoices, acknowledgment letters, and canceled checks or other financial records. We obtained views from lobbyists included in our sample of reports on any challenges to compliance and how the challenges might be addressed. To describe the process used in referring cases to the Office and to provide information on the resources and authorities used by the Office in its role in enforcing compliance with the LDA, we interviewed officials from the Office; obtained information from those involved in the referral process; and obtained data on the number of cases referred, pending, and resolved. Our objectives did not include identifying lobbyists who failed to register and report in accordance with LDA requirements, or determining whether those lobbyists that did register and report disclosed all lobbying activity or contributions. We conducted this performance audit from April 2009 through March 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We used each report’s filing identification number to select our random sample of lobbying disclosure reports (see table 6). Each identification number is linked to a unique pair of registrant and client names. See table 7 for a list of lobbyists and lobbying firms from our random sample of lobbying contribution reports with contributions. See table 8 for a list of lobbyists and lobbying firms from our random sample of lobbying contribution reports without contributions. 
In addition to the contacts named above, Robert Cramer, Associate General Counsel; Bill Reinsberg, Assistant Director; Shirley Jones, Assistant General Counsel; Crystal Bernard; Amy Bowser; Anna Maria Ortiz; Melanie Papasian; Katrina Taylor; Greg Wilmoth; and Michelle Loutoo Wilson made key contributions to this report. Assisting with lobbyist file reviews and interviews were Vida Awumey, Colleen Candrl, John Dell’Osso, Scott Doubleday, Jessica Drucker, Justin Dunleavy, Robin Ghertner, Ellen Grady, Lois Hanshaw, Jeff Heit, Alison Hoenk, Lina Khan, Alicia Louks, Jeff McDermott, Adam Miles, James Murphy, AJ Stephens, Sabrina Streagle, and Esther Toledo.
The Honest Leadership and Open Government Act of 2007 amended the Lobbying Disclosure Act of 1995 (LDA). This is GAO's third report in response to the LDA's requirement for GAO to annually (1) determine the extent to which lobbyists can demonstrate compliance with the LDA by providing support for information on their registrations and reports, (2) identify challenges and potential improvements to compliance for registered lobbyists, and (3) describe the efforts the U.S. Attorney's Office for the District of Columbia (the Office) has made to improve its enforcement of the LDA. GAO reviewed a random sample of 134 lobbying disclosure reports filed from the fourth quarter of calendar year 2008 through the third quarter of calendar year 2009. GAO also selected two random samples of federal political campaign contributions reports from year-end 2008 through midyear 2009. GAO sampled 100 reports listing contributions and 100 reports listing no contributions. This methodology allowed GAO to generalize to the population of 53,756 disclosure reports, 10,928 contributions reports, and 22,572 reports with no contributions. GAO also met with officials from the Office regarding efforts to focus resources on lobbyists who fail to comply with the LDA. While there are no specific requirements for lobbyists to create or maintain documentation related to disclosure reports they file under the LDA, GAO's review showed that lobbyists were generally able to provide documentation, although in varying degrees, to support items in their disclosure reports. This finding is similar to GAO's results from last year's review. For income and expenses, two key elements of the reports, GAO estimates that lobbyists could provide written documentation for approximately 89 percent of the disclosure reports. After GAO's review, 15 lobbyists stated that they planned to amend their disclosure reports to make corrections on one or more data elements. 
As of March 18, 2010, 7 of the 15 had amended their disclosure reports to make these corrections. For political contribution reports, GAO estimates that 82 percent of the reports listing contributions could be supported by Federal Election Commission (FEC) data or documentation provided by lobbyists. Among reports with no contributions listed, an estimated minimum of 3 percent of reports omitted one or more contributions that should have been reported. All of the lobbyists said that they did not report the information listed in the FEC database because of an oversight and plan to amend their reports. The majority of lobbyists who newly registered with the Secretary of the Senate and Clerk of the House of Representatives in the last quarter of 2008 and first three quarters of 2009 filed required disclosure reports for the period. GAO could not identify corresponding reports on file for lobbying activity for about 11 percent of the registrants, likely because either reports were not filed or the reports that were filed contained information, such as client names, that did not match the registrations. The Secretary of the Senate and Clerk of the House routinely review the completeness of registrations and reports and follow up with lobbyists. Most lobbyists felt that existing guidance for filing required registrations and reports was sufficient. However, GAO's review of documentation and lobbyists' statements indicates some opportunities to strengthen lobbyists' understanding of the requirements. The Secretary of the Senate and Clerk of the House update guidance periodically to respond to issues and comments as they arise. In response to an earlier GAO recommendation, the Office developed a system to help monitor and track enforcement efforts. The Office continues to refine the system to meet the requirements conveyed in GAO's recommendation. 
To enforce compliance, the Office primarily focuses on sending letters to lobbyists who potentially violated the LDA by not filing disclosure reports. No civil actions or settlements with lobbyists have been pursued by the Office since 2005, although it is following up on hundreds of referrals each year.
Medicare’s home health care benefit enables beneficiaries with post-acute-care needs and chronic conditions to receive certain skilled nursing, therapy, and aide services in their homes rather than in other settings. To qualify for Medicare’s home health benefit, a beneficiary must be confined to his or her residence (“homebound”), must be under a physician’s care, and must require physical therapy, speech therapy, continued occupational therapy, or skilled nursing on an intermittent basis. Beneficiaries are not liable for any coinsurance or deductibles for these services and may receive care as long as they meet the eligibility criteria. Until recently, Medicare reimbursed HHAs for their costs, subject to limits, for services they provided to the program’s beneficiaries. Between 1990 and 1997, Medicare expenditures for home health services went up three times faster than spending for the program as a whole. This rapid rise has been attributed to many factors, including a liberalization of home health benefit criteria and a lack of sufficient controls to protect the program from potential billing practice abuse. In combination, these factors created conditions where providers could deliver more services than necessary to beneficiaries in order to increase their revenues. In response to these problems, the Balanced Budget Act of 1997 required, by October 1, 1999, the implementation of a new home health PPS, and, until then, the implementation of an interim payment system (IPS) to slow spending growth. The IPS incorporated tighter per-visit cost limits than previously in place and subjected each agency to an annual Medicare revenue cap (based on a per-beneficiary amount and the number of patients it served). The home health PPS, which replaced the IPS on October 1, 2000, is designed to align payments with anticipated service needs. HHAs now receive a single payment for each 60-day episode of care for a Medicare beneficiary. 
The base payment is adjusted to reflect patient characteristics that have been shown to affect service use. For fiscal year 2001, the base amount per episode has been set at $2,115, but payment rates range from about $1,100 to nearly $6,000, depending on the functional and clinical severity of each beneficiary. Each episode payment is adjusted for differences in labor costs across geographic areas, and certain extremely high-cost episodes receive outlier payments. Once the payment is determined, the amount of service provided to that beneficiary does not change the amount of reimbursement. In order to qualify as providers eligible to bill Medicare for home health services, HHAs have to comply with the program’s conditions of participation. These standards seek to ensure that HHAs have the appropriate staff, policies, procedures, medical records, and operational practices to deliver acceptable quality care. HCFA contracts with state survey and certification agencies to oversee the adherence of HHAs in their states to these standards. However, our previous work has shown that state agencies’ reviews of HHAs seeking certification to provide Medicare services did little to ensure quality care and that there was almost no oversight of the actual care provided to home health patients. In the Omnibus Budget Reconciliation Act of 1987, Congress mandated that HCFA develop a standardized patient assessment instrument to assist in monitoring HHAs. HCFA used information from years of research and demonstrations in the development of OASIS, which contains 79 demographic, clinical, and functional data items for assessing patients and measuring outcomes. (The process of developing and testing OASIS is described in app. II.) In January 1999, HCFA issued final rules requiring HHAs to conduct comprehensive patient assessments incorporating the OASIS data elements and to electronically report the OASIS data collected. 
The requirement covers most private pay as well as Medicare and Medicaid patients. Collection of the information relies on both observation of patient function by a nurse or therapist and patient responses. For each patient receiving skilled care, the data are generally collected at the initial visit, every 60 days thereafter for the duration of treatment, and at discharge. HHAs report the data to their state survey and certification agencies, which then report the data to a central repository maintained by HCFA. Concerns regarding the privacy of OASIS information were expressed shortly after HCFA issued its rules on OASIS data collection and reporting in January 1999. Some privacy advocates expressed concerns that some questions were irrelevant or delved too deeply into the personal lives of patients. They cited the mental status questions, including one that asks about depressive feelings reported or observed in the patient, as well as a question regarding financial factors that could limit the patient’s ability to meet his or her own basic health needs. HHAs, advocacy groups, and others suggested that patient identifiers be removed from OASIS data before transmission to HCFA or that HCFA not require OASIS data to be reported on non-Medicare/Medicaid patients. In the spring of 1999, these concerns led HCFA to postpone the effective date of OASIS reporting until it reviewed the privacy issues involved. The outcome of this review was HCFA’s decision to leave the OASIS assessment instrument intact. HHAs would continue to be required to collect all OASIS information on all patients, because HCFA believes it is valuable to HHAs in patient assessments and care planning. However, HCFA put limits on the transmission of certain OASIS data elements, and it has postponed data reporting, but not collecting, for non-Medicare/Medicaid patients. 
Under the new conditions of participation effective July 1999, HHAs participating in Medicare must (1) incorporate OASIS data items into the assessment process for Medicare, Medicaid, and private pay patients, (2) electronically transmit accurate OASIS data to the state survey agency or HCFA OASIS contractor, and (3) maintain the privacy of their OASIS data. The OASIS data instrument serves both to monitor home health care quality and to adjust payments to account for differences in patient characteristics. To enhance quality of care, HCFA plans to use the OASIS data to guide its oversight of HHA activities, to provide each HHA with information about its patients’ outcomes compared to those of other HHAs, and to guide the selection of HHAs by patients and physicians. OASIS data affect payments to HHAs both in determining the payment made for current patients and in providing data to analyze possible modifications to the current payment system. HCFA proposes to use OASIS data to promote higher-quality home health care by (1) guiding the oversight of HHAs performed by state survey and certification agencies, (2) giving HHAs comparative information that they can use to improve their own practices, and (3) providing information to patients and referring physicians that will help them to choose HHAs that achieve better outcomes. Although none of these approaches has been implemented, planning for the first two is under way, and the third is to be developed in the future. HCFA intends to use OASIS data to strengthen its oversight of state survey agency monitoring of HHA outcomes. It requires the state survey agencies to examine the OASIS data in preparation for surveys of individual HHAs. Survey agencies have begun checking the OASIS data submitted by HHAs in their states to ensure HHA compliance with OASIS reporting requirements. 
HCFA expects the survey staff to review OASIS-based reports to identify indicators of potential concern (such as high rates of infection) that would warrant further investigation and ongoing monitoring. When HCFA mandated that HHAs begin collecting OASIS data, it emphasized that this requirement was intended to set in motion a process of continuous quality improvement within each HHA. Based on the OASIS data collected, each HHA will be granted electronic access to customized reports displaying its own patients’ outcomes in relation to those of home health patients nationally, with statistical adjustments to take account of the clinical characteristics of the patients served by that agency. The HHA will be able to examine outcomes for specific types of care (such as wound care and pain management) and types of patients (such as those with diabetes or those recovering from surgery). This way, each agency will be able to assess its performance over time and compare it to national benchmarks. These reports will enable HHAs to identify areas where their performance was suboptimal and thus provide a basis for planning initiatives to improve patient health status. The first reports, based on the OASIS data that have been collected nationwide since July 1999, show individual HHAs the demographic and clinical profiles of their patient population and adverse events. These reports are expected to be available by late January 2001, followed by detailed risk-adjusted outcome reports in 2002. Before the reports are made electronically accessible to the HHAs, OASIS education coordinators in each state will provide training and technical assistance for HHAs on how to analyze and act on the information. In addition, HCFA has funded a 2-year pilot project in five states to explore the feasibility of using peer review organizations to help HHAs in interpreting their reports and developing from them effective quality improvement initiatives. 
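The customized reports described above compare each agency’s outcomes to national figures after statistical adjustment for the clinical characteristics of its patients. The report does not specify HCFA’s actual risk-adjustment model; as a minimal sketch, one common approach is indirect standardization, in which an agency’s observed outcome rate is scaled by the ratio of the national benchmark to the rate its case mix would predict. All patient-level figures below are hypothetical.

```python
# Hedged sketch of indirect standardization for risk-adjusted outcome
# comparison. The predicted probabilities and the national benchmark
# rate are hypothetical; HCFA's actual adjustment method may differ.

def risk_adjusted_rate(observed, predicted, national_rate):
    """Indirectly standardized outcome rate for one agency.

    observed      -- list of 0/1 outcomes (e.g., improved in ambulation)
    predicted     -- model-predicted probability of the outcome, per patient
    national_rate -- national average outcome rate (the benchmark)
    """
    observed_rate = sum(observed) / len(observed)
    expected_rate = sum(predicted) / len(predicted)  # case-mix expectation
    return observed_rate / expected_rate * national_rate

# Hypothetical agency: 3 of 5 patients improved, while its case mix
# predicted a 0.5 average chance of improvement.
rate = risk_adjusted_rate([1, 1, 1, 0, 0],
                          [0.4, 0.5, 0.6, 0.5, 0.5],
                          national_rate=0.55)
print(round(rate, 3))
```

An agency that outperforms what its case mix predicts ends up with a risk-adjusted rate above the national benchmark, which is the comparison the quality improvement reports are meant to support.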
Another way HCFA plans to use OASIS data to promote quality is by providing information to assist physicians and patients in selecting HHAs. HCFA expects that making such comparative information on outcomes publicly available could encourage HHAs to compete for patients on the basis of the quality of care they provide. HCFA has recently initiated planning on how to release this information. The first step will be to evaluate alternative approaches for presenting and distributing these data to the public. One current example of HCFA’s efforts to share comparative information on Medicare providers is its “Nursing Home Compare” Web site. This site has information on facility and resident characteristics of nursing facilities as well as deficiencies reported in past survey inspections, though not on patient outcomes. A second major use of OASIS data collection is payment-related. Under the home health PPS, HHAs receive a specified payment per beneficiary for each 60-day episode of care. HCFA uses OASIS data to assign patients to one of 80 relative payment levels, called home health resource groups. This assignment is based on 23 patient descriptors from the OASIS assessment that measure clinical condition, functional status, and service utilization. Each payment group is assigned a relative weight that reflects the cost of the average beneficiary in that category relative to all home health care users. In addition to providing information necessary to implement the home health PPS in its current form, OASIS data will assist HCFA in (1) monitoring the effects of prospective payment on quality of care and (2) developing potential refinements in the formulas used to determine payments. Because of the change from cost-based reimbursement to prospectively determined payments for each episode of care, PPS creates a financial incentive to limit services per episode and increase the number of episodes billed. 
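The payment arithmetic described above (assignment to one of 80 home health resource groups, each carrying a relative weight applied to a base episode amount) can be sketched as follows. The base rate, the group codes, and the weights here are illustrative placeholders, not HCFA’s actual figures; only the structure follows the report.

```python
# Hedged sketch of episode payment under the home health PPS.
# All dollar amounts, group codes, and weights are hypothetical.

HYPOTHETICAL_BASE_RATE = 2_000.00  # dollars per 60-day episode (illustrative)

# Illustrative relative weights for a few of the 80 home health
# resource groups; real weights come from HCFA's published tables.
HYPOTHETICAL_WEIGHTS = {
    "C0F0S0": 0.53,  # low clinical, functional, and service severity
    "C1F2S1": 1.00,  # mid-range case, priced at the base rate
    "C3F4S3": 2.28,  # high severity on all three dimensions
}

def episode_payment(group, base_rate=HYPOTHETICAL_BASE_RATE):
    """Payment for one 60-day episode: base rate times the group's weight."""
    return round(base_rate * HYPOTHETICAL_WEIGHTS[group], 2)

print(episode_payment("C3F4S3"))
```

Because the weight is fixed once the patient is assigned to a group, the agency’s payment for the episode does not rise with additional visits, which is the source of the incentive, noted above, to limit services per episode.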
HCFA has pledged to undertake monitoring of OASIS data, along with data from other monitoring systems, as part of a surveillance system designed to assess the short- and long-term effects of PPS. For example, OASIS data should enable HCFA to detect unfavorable trends in outcomes for home health care patients, such as delayed or diminished recovery from a stroke. Questions have been raised about the potential vulnerability of the OASIS data to manipulation intended to maximize provider payments. HHAs could benefit financially from making their patients appear as sick and functionally impaired as possible when initially assessed, in order to be assigned a higher payment group. HCFA was aware of the risk of “gaming” and sought to minimize this risk when it selected the specific OASIS data elements used to assign patients to different PPS payment groups. The Medicare Payment Advisory Commission has nonetheless expressed concern that the OASIS assessments submitted to HCFA will reflect these financial incentives to exaggerate patient severity at admission. To address concerns about data quality, HCFA has undertaken an accuracy demonstration program. This program will evaluate alternative methods to ensure the accuracy of the OASIS data submitted by HHAs. In addition, state surveyors will check a sample of patient assessments against medical records. Medicare fiscal intermediaries—contractors to HCFA that process HHA claims for payment—are also expected to use OASIS data. The information will help them decide which HHAs to include in focused medical reviews that determine the appropriateness of payment of individual claims for home health services provided to beneficiaries. One aspect of this review strategy involves determining whether OASIS information is supported by documentation in the medical record. If the intermediaries determine that the OASIS data are not appropriate, they will adjust the payment grouping accordingly. 
HCFA has sought to limit the amount of OASIS data collected to that needed for monitoring quality and payment purposes. The research group that developed OASIS under contract to HCFA—the University of Colorado Center for Health Services and Policy Research (CHSPR)—explicitly set out to identify the key data elements that would enable HHAs to measure their outcomes while minimizing the data collection burdens. CHSPR identified a set of 73 core data items needed both to compute quality indicators and to risk-adjust the outcomes reported. (See app. II for more details on this process.) An advisory group appointed by HCFA reviewed CHSPR’s core data set. This Standard Assessment work group was made up of 13 members, including HHA administrators, practicing clinicians, a clinical assessment expert, a state official, and representatives of industry and professional organizations. It recommended that HCFA adopt the core data set, with the addition of several more elements. The feasibility of collecting and using OASIS data was subsequently tested in two demonstration studies that documented improved outcomes for the participating HHAs. Nearly all the OASIS data elements that emerged from this process will be used to generate the specific outcome measures presented in the HHA customized quality improvement reports. Six of the 79 items currently have no intended use. Four of these, described as “potential risk adjustment factors,” assess environmental and safety issues in the patient’s home, and another item relates to the patient’s financial ability to meet treatment needs. All five were among those added to the data set at the behest of HCFA’s advisory work group. Concerns were subsequently raised by some privacy advocates about the sensitivity of some of these data elements. The financial question in particular was so sensitive that HCFA decided to exclude it from the data transmitted by the HHAs to the states. 
However, HCFA maintained the obligation of the HHAs to obtain this information for all home health patients. HCFA also required the HHAs to collect, but not initially transmit, OASIS information on patients receiving skilled care who were not covered by Medicare or Medicaid. HCFA has stated that it is important to collect OASIS data on patients served by HHAs from all payor sources in order to evaluate the quality of care provided. In addition, HHS must ensure that the conditions of participation are adequate to protect all individuals under the care of the HHA. Although HCFA has developed techniques for masking the identity of non-Medicare/Medicaid patients, it has postponed having these data transmitted to the state repositories. HCFA officials told us that the notice to begin transmission of these data could be published in the spring of 2001. HCFA will not, however, require retroactive transmission of the OASIS data collected from non-Medicare/Medicaid patients. Instead, HCFA will notify HHAs to transmit only current assessments on non-Medicare/Medicaid patients. Incorporating the OASIS data instrument into comprehensive patient assessments has increased the consistency of patient data collected by the HHAs. In contrast to HCFA’s expectation that HHAs would take no more time to conduct start-of-care visits using OASIS, nearly all respondents in our survey of HHAs estimated that start-of-care visits take longer than they did before. These HHAs also reported that additional time is needed to check and edit collected OASIS data, enter and transmit the information electronically, and train new staff. The initiation of home health care requires two separate but related steps: performing a comprehensive assessment of the patient’s condition and, based on that assessment, devising the patient’s plan of care. 
Before the OASIS mandate took effect, Medicare rules required HHAs to perform both of these steps, but called for specific documentation for the plan of care only. Now they require the collection and reporting of the OASIS assessment data for each patient as well as plan-of-care documentation. Thus, what constitutes a comprehensive assessment under the long-standing requirement is now more clearly defined for HHAs. According to HHA and state officials, the assessments that HHAs performed in the past varied in both scope and format. They told us that while some agencies may have conducted thorough evaluations of their patients, others performed more cursory or narrowly focused assessments. Likewise, HHA documentation practices could vary substantially. For example, some agencies wrote narrative descriptions of the patient’s condition, and others may have developed more structured instruments with short answers or checklists. The effect of the OASIS mandate on each HHA depended on how different its previous practices in conducting and documenting patient assessments were from the current OASIS data collection and reporting requirements. HCFA has cited data from selected HHAs in an OASIS demonstration project to support its expectation that OASIS’ standardized, multiple-choice format would take no more time to complete than prior documentation of assessments, which typically involved individual narratives. However, data we collected through interviews and a survey of HHAs suggest that OASIS did result in an increase in time spent in initial care visits and additional time for new tasks associated with transmission of data. To provide a basis for cost estimates, as required by regulation, HCFA asked CHSPR to assess the OASIS data collection costs on HHAs, in particular the additional staff time required. 
Of special concern was the start-of-care comprehensive assessment, when clinicians would have to obtain answers to all the OASIS questions from a new patient for the first time. CHSPR gathered data from 10 agencies participating in a HCFA-sponsored study. Overall, CHSPR found that the median total time taken by these HHAs for start-of-care visits using OASIS was 150 minutes, a few minutes less than start-of-care visits without OASIS. In a second study, Abt Associates measured the time taken for start-of-care visits with a longer version of OASIS, but recorded only the time spent in the patient’s home and not time spent on associated paperwork performed elsewhere. This study of more than 20,000 visits found that start-of-care visits using OASIS required a median of 90 minutes. However, there were no comparable data from start-of-care visits without OASIS. In contrast to the CHSPR study, officials of the 32 agencies responding to our survey of a representative sample of Medicare HHAs estimated that start-of-care visits incorporating OASIS assessments did take more time than those conducted prior to OASIS. The median total time estimated to complete start-of-care visits with OASIS was 150 minutes, matching the figure obtained in the CHSPR study. However, HHAs reported that this amount represented a median increase of 40 minutes relative to time for start-of-care visits prior to OASIS. In each of these studies, data from individual HHAs on the amount of time required for start-of-care visits with OASIS varied widely. This variation may reflect differences in how responding HHAs have integrated the new assessment instrument and how it is administered in the patient’s home. Many of the HHAs we interviewed told us that they had followed HCFA’s instructions to replace items requesting similar information on their patient assessment forms with OASIS items. 
However, one agency had not yet completed this task, requiring the nurse conducting an initial visit to complete the OASIS form separately. To varying degrees, clinicians administer the OASIS assessment through a combination of questioning, examining the patient, and observing patient behavior and home environment. HHAs also have to perform new tasks related to the submission of OASIS data to the state repositories in electronic form. Both the mandate for HHAs to collect and report OASIS data and the transition to prospective payment based on OASIS information have heightened the concern of HHAs with the validity and completeness of these data. To help ensure that patient assessments are correctly recorded in a form that HCFA’s data repositories will accept, HHAs need to review the data as they proceed from initial recording by the clinician to electronic transmission to the state repository. The steps in this process include the following:
- Heightened supervisory review of the assessment forms completed by the clinician performing the assessment.
- Entering, rechecking, and correcting OASIS data from paper records into the computerized records.
- Batching and then electronically transmitting the data to a centralized state data repository. (The transmission protocol established by HCFA rejects data that do not pass tests for consistency and validity. Any data rejected have to be analyzed, corrected, and resubmitted.)
The HHAs we surveyed estimated that these steps require approximately 50 minutes per OASIS assessment. HHAs must also commit resources to training newly hired clinicians on OASIS protocols. Eighty-four percent of our survey respondents said they provide training for newly hired staff, with modules focused specifically on OASIS data collection and documentation. Those HHAs offering OASIS-related training reported providing a median of 8 hours to new staff. 
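The batch, validate, and resubmit cycle for transmitting assessments can be sketched as below. The field names and the single required-field check are hypothetical illustrations; the real protocol is defined by HCFA’s state-repository specifications and applies many more consistency and validity tests.

```python
# Hedged sketch of the batch-validate-resubmit cycle for OASIS
# transmissions. Field names and validation rules are hypothetical.

REQUIRED_FIELDS = {"patient_id", "assessment_date", "m0230_diagnosis"}  # illustrative

def validate(record):
    """Return a sorted list of missing fields; an empty list means the record passes."""
    return sorted(REQUIRED_FIELDS - record.keys())

def transmit_batch(records):
    """Split a batch into accepted records and rejects needing correction."""
    accepted, rejected = [], []
    for rec in records:
        problems = validate(rec)
        if problems:
            rejected.append((rec, problems))  # goes back for analysis and resubmission
        else:
            accepted.append(rec)
    return accepted, rejected

batch = [
    {"patient_id": "A1", "assessment_date": "1999-07-19", "m0230_diagnosis": "428.0"},
    {"patient_id": "A2", "assessment_date": "1999-07-20"},  # missing a field
]
accepted, rejected = transmit_batch(batch)
print(len(accepted), len(rejected))
```

The point of the sketch is the loop structure: rejected records are not dropped but returned with their reasons, mirroring the analyze-correct-resubmit step the report describes.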
However, how much additional time is due specifically to the OASIS requirement is not clear, because OASIS-related training could substitute for some prior assessment-related training as well as add new elements. Many HHAs may find that their additional OASIS-related costs are offset by payments they receive under the new payment system. We recently reported that PPS payment rates are based on 1998 rates of home health care utilization, which have since declined. Therefore, they are likely to be generous in comparison with current use patterns. In our view, the episode payments could provide an ample cushion for many agencies, which can be used to offset the costs associated with the OASIS mandate. In addition, Congress and HCFA have taken several actions to assist HHAs in complying with OASIS mandates. For example, for each Medicare beneficiary served from October 1, 1999, to September 30, 2000, Congress provided HHAs with $10 to help defray OASIS costs. Also, the prospective payment base was increased by $4.32 per 60-day episode as an ongoing adjustment for OASIS reporting costs. HCFA has taken other steps to reduce the costs imposed on HHAs by the OASIS mandate. These include developing and distributing to HHAs, free of charge, a software program (called HAVEN) for transmitting the OASIS data to state agencies. HCFA has also provided toll-free telephone lines to the HHAs for this data transmission. HCFA has instituted several policies and procedures to protect OASIS data from unauthorized access, conceal the identity of patients, and ensure that recipients of OASIS information protect confidentiality. HCFA officials believe that these actions provide reasonable assurance that the privacy of OASIS information will not be compromised. As we previously reported, ensuring that users of confidential health data, including OASIS data, comply with required privacy procedures is also a necessary safeguard. 
As with all patient medical data, HHAs must ensure the privacy of OASIS information. Even before OASIS was mandated, HHAs participating in Medicare had to develop policies and procedures to maintain the confidentiality of patient information. Several state surveyors we interviewed said that, as part of their inspections, surveyors examine how all patient records, including the OASIS forms, are maintained in the HHA’s administrative offices. The new privacy requirements under Medicare conditions of participation call for the HHA (and any agent acting on its behalf, such as a software vendor) to ensure the confidentiality of all patient information contained in the clinical record, including OASIS data. This requirement also prohibits the HHA and its agents from releasing patient-identifiable OASIS information to the public. In addition, HHAs are required to provide beneficiaries and other patients with an OASIS statement of privacy rights upon admission to the HHA. These OASIS privacy notices inform patients about their rights relating to their personal health information, in language that is intended to be clear and easy to understand. HCFA reported that consumer testing of Medicare beneficiaries indicated that they understood that the notice was informing them about their rights relating to their personal health care information. HCFA has also implemented data transmission and storage policies to protect the information while it is in transit to, and being stored at, state agencies and HCFA. These mechanisms include required use of a secure communications network to protect the data while in transit, as well as technology designed to make information unintelligible should unauthorized persons access it. Further, HCFA requires that certain patient identifiers associated with non-Medicare/Medicaid patients be “masked” so that state agencies and HCFA will be unable to determine the identity of these individuals. 
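The report does not describe the masking technique HCFA uses for non-Medicare/Medicaid identifiers. As a hedged illustration only, one standard way to make an identifier unintelligible to downstream recipients is a keyed one-way hash, where the key never leaves the originating agency:

```python
# Hedged sketch of one way identifiers could be masked before
# transmission: a keyed one-way hash (HMAC). This is an illustrative
# assumption, not HCFA's actual masking method.

import hashlib
import hmac

AGENCY_SECRET = b"hypothetical-agency-key"  # retained locally, never transmitted

def mask_identifier(patient_id):
    """Replace an identifier with a keyed digest that recipients
    cannot reverse without the agency's secret key."""
    digest = hmac.new(AGENCY_SECRET, patient_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

masked = mask_identifier("SSN-123-45-6789")
print(masked)
```

A keyed digest is deterministic, so the same patient maps to the same masked value across submissions (allowing records to be linked over time) while the state agency and HCFA cannot recover the original identifier.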
Although HCFA has developed techniques for masking the identity of patients, it has postponed having these data transmitted to the state repositories. Similarly, HHAs are not to transmit the response to the question as to whether the patient has sufficient financial resources to pay for medicine, food, and other essentials. (Details about HCFA’s data transmission and storage protections are discussed in app. III.) Once the OASIS information is transmitted to HCFA, it is maintained in a national repository, where specific disclosure policies apply. HCFA is bound by the requirements of the federal Privacy Act (P.L. 93-579) in protecting the confidentiality of all health information on beneficiaries, including OASIS information. The Privacy Act allows the disclosure of information without an individual’s consent for “routine uses” that are consistent with the purposes for which the information was originally collected. The routine uses of OASIS information include aiding in the administration of the HHA survey and certification process. Persons who request OASIS data, such as researchers and members of peer review organizations, must agree to protect the confidentiality of the information as part of a written “data use agreement.” Data use agreements must also be in place between HCFA regional offices and the state’s Medicaid agency before the state’s OASIS agency can release the information to the Medicaid agency. In addition, HCFA officials told us that it is departmental policy to release only the “minimum necessary” data to meet the requester’s purpose. HCFA believes that the policies and procedures it currently has in place provide it with reasonable assurance that the confidentiality of any OASIS information released to approved entities will be maintained. However, in a July 1999 report we identified several weaknesses in HCFA’s privacy practices that could potentially compromise the confidentiality of health information on Medicare beneficiaries. 
Although we found that HCFA’s policies and procedures regarding disclosure of personally identifiable information were generally consistent with the provisions of the Privacy Act, weaknesses in the implementation of these policies raised concerns. For example, we found that HCFA was not always clearly informing beneficiaries of the purposes for which their information may be disclosed, as required by the Privacy Act. We also found that HCFA did not routinely monitor contractors and others, such as researchers, who use personally identifiable Medicare information. We recommended that HCFA take steps to address these weaknesses. HCFA has taken steps regarding protection of its OASIS data. As stated above, HCFA has required HHAs to provide both Medicare/Medicaid and private pay patients with OASIS privacy notices. The beneficiary notice lists the patient’s primary rights and gives the patient information as required by the Privacy Act, such as HCFA’s authority for collecting OASIS data and the principal purposes for which the information would be routinely used. However, based on our discussions with HCFA and state officials, there appears to be little or no oversight of how effectively the state agencies and third parties are maintaining the privacy of OASIS information. Even though HCFA requires state agencies to ensure that access to OASIS data is restricted and that recipients of OASIS information protect its confidentiality, HCFA officials told us that they do not inspect the privacy safeguards in place at the state agency. These officials also indicated that HCFA still has no system in place to monitor whether parties subject to data use agreements are complying with their requirements. Without an adequate monitoring system in place, HCFA could be hampered in its attempts to prevent the occurrence of problems and provide timely information and corrective action for any that might occur. 
With the implementation of a prospective payment system, efforts to protect patients from potential underprovision of care and to hold HHAs accountable are essential. Instituting the collection and reporting of OASIS data is an important step in that direction. The use of OASIS data enhances consistency in the performance and documentation of patient assessments for home health services. As a result, information on patient outcomes will become available for the first time. Collecting such data is not without its costs. To varying degrees, the requirement to collect OASIS data on all home health patients increases the amount of staff time devoted to collecting and reporting patient assessment information. HHAs have been compensated for some of these costs through adjustments to their payment rates. Moreover, because PPS episode payment rates are based on historically high utilization levels, which have since declined, these rates should allow the completion of OASIS assessments. Protecting the privacy of home health care patients is also important. HCFA has made progress in this area by enhancing protections in the collection and transmission of the OASIS data. The effectiveness of these policies and procedures will depend on how well they are implemented. We provided a draft of this report to HCFA for review. In its comments, HCFA agreed with our findings and conclusions and elaborated on several points addressed in the report. HCFA continues to believe that, once HHA staff learn how to implement OASIS, the amount of time it takes to conduct a thorough patient assessment will decline. The agency contends that, as experience with OASIS is gained, HHAs will be better able to integrate use of the instrument into their ongoing administrative and clinical activities. 
In addressing the use of OASIS for payment purposes, HCFA considers the OASIS data elements to be crucial to refining payment rates, and if data collection were limited to those elements currently needed for payment, its ability to refine PPS in the future would be constrained. Regarding our discussion of data confidentiality protections, HCFA highlighted several specific steps it has taken to ensure patient privacy. HCFA’s comments appear in appendix IV. The agency made technical comments that we incorporated where applicable. We are sending copies of this report to the Honorable Robert A. Berenson, Acting Deputy Administrator of HCFA, and others who are interested. We will also make copies available to others on request. Rosamond Katz, Eric Peterson, and Victoria M. Smith developed the information contained in this report. Please contact me at (202) 512-7119 if you or your staffs have any questions. To gain the perspective of a representative segment of HHAs with respect to the cost and privacy implications of the OASIS mandate, we surveyed a random sample of HHAs. This appendix describes how the survey was conducted and discusses the strengths and limitations of the information provided. Determining how much it has cost HHAs to implement HCFA’s mandate to collect and report OASIS data on individual patients is complex, for three main reasons. First, the OASIS mandate could lead to additional costs in many different areas, including additional staff time to perform a variety of tasks. HCFA required that the OASIS items be integrated with other assessment forms, which could involve the development of both new forms and new procedures to complete them. The process of encoding and transmitting the OASIS data electronically led many HHAs to expand their use of computers, which could have called in turn for capital investments and the recruitment of new staff. Second, the home health care industry was undergoing radical change. 
The ongoing transition from cost-based reimbursement to prospective payment fundamentally altered the financial circumstances and incentives of many agencies. The characteristics of patients seeking and receiving home health care may also have changed as a result. Staff recruitment, training, computerization, and revamped procedures were all affected by these market and payment-related changes as well, making it very difficult to isolate an independent effect from the OASIS mandate. Third, no cost data specifically linked to patient assessment activities were systematically and consistently maintained either before or after the implementation of the OASIS mandate. Instead, such activities are integrated into the clinical and administrative functions of HHAs. Thus any attempt to estimate the specific effect of OASIS on costs necessarily would involve some reconstruction of such data after the fact. Our survey of HHAs was designed with these factors in mind. Rather than attempt to obtain a comprehensive accounting of all possible OASIS-related costs, we focused on the additional time spent on four major activities that appeared from our preliminary interviews with HHA officials to have had a substantial effect on total costs:
- Clinicians’ total time for the start-of-care visit,
- Supervisors’ time reviewing and monitoring patient assessment data,
- Time for training new hires on OASIS, and
- Time entering and transmitting OASIS data electronically.
We asked the executive directors of the HHAs we surveyed to provide both current and pre-OASIS time figures from more than a year ago. It is common for agencies to maintain logs with time spent at different types of visits. While most HHAs said they were able to draw on relevant and specific recorded data, others provided rough estimates. However variable in quality, these data recorded by the HHAs for their own purposes represent the best available data we found for estimating the cost implications of the OASIS mandate. 
To select our sample, we used 1999 data extracted from HCFA’s Provider of Service File and associated claims data. We started with a list of each HHA that had been paid for at least one Medicare home health visit in 1999. We excluded those agencies that had not begun providing home care under Medicare prior to January 1, 1999, and those that served fewer than 15 Medicare patients in 1999. We then selected a simple random sample from the remaining agencies. Thus, the sample represents the universe of HHAs, not patients. Although our sample was not stratified, we did take precautions to ensure that the agencies in the sample did not have a highly skewed distribution along several major dimensions. Specifically, we observed the distribution among all the HHAs in our sampling universe for five characteristics— caseload size (number of Medicare patients treated annually), urban/rural, geographic region, organizational affiliation (Visiting Nurse Association (VNA), facility-based, freestanding), and tax status (nonprofit, for-profit, government). (See table 1.) We then took a series of independent random samples of 50 agencies each. (Every agency had an equal chance of being selected for each of these samples.) We used the sample that best matched the distribution found in the sampling universe. We received usable responses from 32 HHAs. Three of the 50 surveyed had ceased to operate as separate agencies, either by going out of business or by merging with another entity. That gave us an effective response rate of 68 percent (32 out of 47). As shown above, the respondent group generally matched the characteristics of the sampling universe and the sample. The main exception was an underrepresentation of the Northeast region and overrepresentation of the West. Facility-based agencies were also somewhat overrepresented among the respondents compared to freestanding HHAs. 
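The selection approach described above (drawing several independent simple random samples and keeping the one whose distribution best matches the universe) can be sketched as follows. The universe data, the single matching characteristic, and the distance measure are illustrative assumptions; the actual selection compared five characteristics.

```python
# Hedged sketch of selecting the random sample that best matches the
# universe distribution on a characteristic. All data are hypothetical.

import random
from collections import Counter

def proportions(values):
    """Category proportions for a list of category labels."""
    counts = Counter(values)
    total = len(values)
    return {k: counts[k] / total for k in counts}

def distance(sample_props, universe_props):
    """Sum of absolute differences in category proportions (illustrative metric)."""
    cats = set(sample_props) | set(universe_props)
    return sum(abs(sample_props.get(c, 0) - universe_props.get(c, 0)) for c in cats)

def best_matching_sample(universe, key, n=50, draws=10, seed=0):
    """Draw several independent simple random samples of size n and
    return the one closest to the universe distribution on key."""
    rng = random.Random(seed)
    target = proportions([key(a) for a in universe])
    samples = [rng.sample(universe, n) for _ in range(draws)]
    return min(samples,
               key=lambda s: distance(proportions([key(a) for a in s]), target))

# Hypothetical universe of 1,000 agencies tagged by census region.
universe = [{"id": i,
             "region": random.Random(i).choice(
                 ["Northeast", "South", "Midwest", "West"])}
            for i in range(1000)]
sample = best_matching_sample(universe, key=lambda a: a["region"])
print(len(sample))
```

Because every draw is an independent simple random sample, each agency still has an equal chance of selection in each draw; picking the best-matching draw only guards against a highly skewed realized sample, as the report describes.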
Because our sample was randomly selected, it provides unbiased estimates of the results we would have received had we been able to survey the entire universe. Still, a sample of 50 (with 32 respondents) is likely to have considerable sampling error compared to that of a larger sample. The standard errors and 95 percent confidence interval for the main survey items presented in the report are provided below. These confidence intervals indicate the range within which there is a 95 percent chance the mean would fall if the full universe had been surveyed. They therefore show that there is imprecision in the estimates of the means due to the relatively small size of our sample. For example, the estimate for the mean time required for start-of-care visits using OASIS was 143 minutes, but the 95-percent confidence interval for that estimate ranged from 125 to 160 minutes. In the text of the report we chose to present medians rather than means, since they are less sensitive to outliers. Table 2 below shows both means and medians. Apart from the imprecision introduced by sampling considerations, numerous factors are likely to have influenced the estimates provided to us by the HHAs we surveyed: The surveyed HHAs varied in the extent to which they relied on written records to calculate the amount of time taken for start-of-care visits pre- and post-OASIS. We asked them to draw on such records if possible, but available records varied from one agency to another. To the extent that the respondents believed that higher estimates of time spent on post-OASIS visits might promote more generous payments for home health care under Medicare, there could be an upward bias in the figures provided. 
The comparison of current visit times with those preceding the OASIS mandate also incorporates the effects of all the changes that affected home health care over that period, such as shifts in payment methods and amounts by Medicare and other payers, fluctuations in market demand for nursing and therapist staff, and the sharp decline in the number of agencies providing care (following an earlier period of rapid growth), as well as OASIS itself. OASIS was developed by the University of Colorado Center for Health Services and Policy Research (CHSPR) for the purpose of measuring home health care outcomes. This effort involved first a review of the existing approaches for assessing the quality of home health care, including both a literature review and consultations with clinical experts. A series of studies examined the data that could be obtained from clinical records as well as secondary data sources such as Medicare claims and plan-of-treatment forms. The subsequent empirical testing of candidate measures collected data from 3,427 Medicare and non-Medicare patients treated in 49 HHAs. The data elements collected were tested for their statistical reliability. The measures based on those data elements were assessed on a range of criteria, including their clinical meaningfulness (as judged by clinical review panels), coverage across multiple dimensions of health status, minimization of redundancy, and ability to detect differences among HHAs. At the end of this process, CHSPR arrived at a set of 73 core data items needed both to compute core quality indices and to adjust the outcomes reported for different agencies to take account of relevant differences in the circumstances of the patients that they treat (that is, risk adjustment). In late 1994, HCFA convened a 13-member Standard Assessment work group made up of HHA administrators, practicing clinicians, a clinical assessment expert, a state official, and representatives of industry and professional organizations. 
Its charge was to assess CHSPR’s core data items for inclusion in a patient assessment instrument to be mandated under revised conditions of participation for HHAs under the Medicare program. This group recommended that HCFA adopt the core data items with modifications. Specifically, they suggested that HCFA expand three of the data elements, convert one item to three more detailed items, and add eight new items, including cognitive functions, financial ability to meet treatment needs, and hearing, speech, and vision capabilities. This expanded core data set, now named OASIS, was then tested in several demonstration studies conducted by CHSPR. Beginning in 1995, a group of 50 HHAs nationwide, plus another 67 in New York State, were selected to see whether HHAs could in fact use OASIS assessments to identify dimensions of care with suboptimal outcomes and then take measures to improve those outcomes. Empirical testing of the OASIS data and measures continued, including a second round of reliability assessments. In addition, a demonstration conducted by Abt Associates also collected OASIS data elements for the purpose of identifying appropriate criteria for setting rates in a home health PPS. A separate set of reliability assessments took place as part of this study. The interim results published by Abt and CHSPR indicate that the OASIS data set is generally reliable, although a few data items had poor reliability according to the standards adopted in these studies. Both CHSPR and Abt are planning to report additional reliability results based on larger numbers of patients, but these findings are not yet available. The Medicare Quality Assurance Demonstration and the New York State Outcome Based Quality Improvement Demonstration showed that HHAs could use OASIS data to improve home health care outcomes. 
Based on their initial OASIS results, the HHAs examined their processes of care in order to develop plans of action designed to enhance two specific outcomes: reducing the hospitalization rate of their patients, and another outcome selected by each participating HHA. Overall, the rate of hospitalization among patients treated by the Medicare Quality Assurance Demonstration HHAs declined in one year from 31.4 percent to 28.3 percent, a decrease of 10 percent. The corresponding decline in hospitalization rates among patients in the New York State Demonstration HHAs was 9 percent. However, no similar decrease in hospitalizations was observed for home health patients nationally during this period. Thus HCFA concluded that outcome-based quality improvement initiatives adopted by the demonstration HHAs were effective in achieving their stated objective. In protecting the confidentiality of health information of its beneficiaries, HCFA’s activities, like those of other federal agencies, are governed by the Privacy Act of 1974 (5 U.S.C. 552a, P.L. 93-579). The Privacy Act requires that agencies limit their maintenance of individually identifiable records to those that are relevant and necessary to accomplish an agency’s purpose. Federal agencies store personally identifiable information in systems of records—a group of records, under the control of a federal agency, from which information can be retrieved by the name of an individual or an identifier such as a number assigned to the individual. As of November 2000, HCFA had 47 systems of records related directly to Medicare beneficiaries containing information stored in both electronic and paper form. HCFA stores personally identifiable data on a Medicare beneficiary’s enrollment and entitlement to benefits; demographic information such as age, race, ethnicity, and language preference; and diagnoses and utilization of medical services. 
The Privacy Act generally prohibits the disclosure of individuals’ records without their consent. However, it allows the disclosure of information without an individual’s consent under 12 circumstances called conditions of disclosure, such as disclosure by a federal agency to its employees based on their need for records to perform their duties. Another condition of disclosure allows an agency to establish “routine uses” that the agency has determined to be compatible with the purposes for which the information was collected. In accordance with the requirements of the Privacy Act, HCFA issued a notice in June 1999 that it was establishing a new system of records to contain OASIS data. In this notice, HCFA outlined several precautionary measures it was taking to minimize risks of unauthorized disclosure. For example, HCFA stated that it will collect only that information necessary to perform the functions for which it plans to use the OASIS data, such as creating patient outcome reports for HHAs. Similarly, HCFA said it will disclose only the minimum amount of OASIS data necessary to achieve purposes compatible with these functions. All patient-specific information is to be kept confidential, with access limited to ensure that privacy remains protected. 
Also included in the notice are the details regarding the scope of the data collected and HCFA’s policies and procedures regarding disclosures for the following routine uses of OASIS data:
- Aid in the administration of the survey and certification of HHAs,
- Enable regulators to provide HHAs with data for their internal quality improvement,
- Support agencies of the state government to determine the overall effectiveness and quality of HHA services provided in the state,
- Aid in the administration of federal and state HHA programs within the state,
- Monitor the continuity of care for patients who reside temporarily outside the state,
- Support regulatory, reimbursement, and policy functions,
- Support constituent requests made to a congressional representative,
- Support litigation involving the agency, and
- Support research projects related to disease prevention or health maintenance.
In its notice, HCFA listed seven entities who may receive disclosures of OASIS data under HCFA’s routine use exception: (1) the Department of Justice, a court, or an adjudicatory body; (2) agency contractors or consultants who have been engaged by the agency to assist in the performance of a service related to the OASIS system of records and who need to have access to the records in order to perform the activity; (3) an agency of a state government, or one established by state law; (4) another federal or state agency (including state survey agencies and state Medicaid agencies) for contributing to the accuracy of HCFA’s health insurance operations and/or for supporting state agencies in the evaluation and monitoring of care provided by HHAs; (5) a peer review organization; (6) an individual or organization for research purposes; and (7) a member of Congress or congressional staff member in response to an inquiry of the congressional office made at the written request of the constituent about whom the record is maintained. 
In addition to Privacy Act protections, beneficiaries are afforded confidentiality protections under Medicare’s conditions of participation for HHAs. For example, HHAs and their agents cannot release OASIS information that identifies particular patients to the public. Additionally, patients have the following rights: (1) the right to know why the HHA is asking the OASIS questions, (2) the right to have their personal health care information kept confidential, (3) the right to refuse to answer questions, (4) the right to look at, and request changes to, their personal assessments, and (5) the right to be informed that OASIS information will not be disclosed except for legitimate purposes allowed by the Privacy Act. HCFA has established additional methods to ensure the security of OASIS information while in transit and in storage. First, HCFA will retain information on individuals who have non-Medicare/Medicaid payment sources in a format that does not identify particular patients. For these patients, the HHA will submit OASIS information with certain patient identifiers “masked.” According to HCFA officials, masking involves obscuring items such as the patient’s name, Social Security number, and HHA patient identification number, while still allowing data for individual patients to be linked. These officials told us that they cannot decode masked identifiers or re-identify the information based on nonmasked identifiers, and therefore neither the state nor HCFA will know the identity of the non-Medicare/Medicaid patients who are the subjects of transmitted OASIS information. Second, HHAs are currently required to submit OASIS data through a private telephone line. HCFA officials told us that, as of October 1, 2000, they require HHAs to transmit OASIS data via the Medicare Data Communications Network (MDCN). The MDCN system includes an encryption standard for increased protection from unauthorized access while the data are in transit. 
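The masking described above has two properties: identities cannot be recovered from the masked values, yet records for the same patient remain linkable. HCFA's actual algorithm is not described in the report; one generic way to obtain both properties is a keyed one-way hash, sketched below. The key, record layout, and identifier format are invented for illustration, not HCFA's actual scheme.

```python
import hashlib
import hmac

# Hypothetical agency-held secret; never transmitted with the data, so
# the receiver cannot invert the pseudonyms.
AGENCY_KEY = b"hha-local-secret"

def mask(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-reversible pseudonym."""
    return hmac.new(AGENCY_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record1 = {"patient": mask("JANE DOE|123-45-6789"), "assessment": "start of care"}
record2 = {"patient": mask("JANE DOE|123-45-6789"), "assessment": "follow-up"}

# The same patient yields the same pseudonym, so records stay linkable...
assert record1["patient"] == record2["patient"]
# ...while a different patient yields a different pseudonym.
assert mask("JOHN ROE|987-65-4321") != record1["patient"]
```

Keeping the key at the submitting agency mirrors the property HCFA officials described: neither the state nor HCFA can decode the masked identifiers, but per-patient linkage survives.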
According to HCFA officials, the MDCN’s 128-bit encryption standard will guard against unauthorized access to OASIS data, such as by computer hackers, while in transit. HCFA and state OASIS automation coordinators also told us that the use of the MDCN network is subject to numerous password protections. In order to access the MDCN, a user needs to know three different items of information, all of which are subject to confidentiality policies: (1) the phone number of the MDCN network, (2) the individual user identification number and password for the MDCN, and (3) the HHA-specific user identification code and password for the applicable state system. In addition, the MDCN passwords must be changed on a periodic basis. Third, according to HCFA officials, the agency has implemented physical safeguards and record retention policies to reduce the risk of unauthorized access over time. For instance, the HCFA OASIS data computer server is kept in a secure room, and only personnel with designated access may enter. HCFA officials told us that although they do not inspect the privacy safeguards in place at the state level, guidelines issued to the state agencies require server safeguards. The state OASIS coordinators we spoke with said their server rooms are locked and access restricted. HCFA officials told us that for now, OASIS data will not be maintained online for more than 3 years. HCFA officials stated that they would also not maintain identifiable OASIS data any longer than 15 years.
With the Health Care Financing Administration's (HCFA) implementation of a prospective payment system, efforts to protect patients from potential underprovision of care and to hold home health agencies (HHA) accountable are essential. Instituting the collection and reporting of Outcome and Assessment Information Set (OASIS) data is an important step in that direction. The use of OASIS data enhances consistency in the performance and documentation of patient assessments for home health services. As a result, information on patient outcomes will become available for the first time. Collecting such data is not without its costs. To varying degrees, the requirement to collect OASIS data on all home health patients increases the amount of staff time devoted to collecting and reporting patient assessment information. HHAs have been compensated for some of these costs through adjustments made to their payment rates. Moreover, because prospective payment system episode payment rates are based on historically high utilization levels, which have since declined, these rates should allow the completion of OASIS assessments. Protecting the privacy of home health care patients is also important. HCFA has made progress in this area by enhancing protections in the collection and transmission of the OASIS data. The effectiveness of these policies and procedures will depend on how well they are implemented.
For about 30 years, TCMP has been IRS’ primary program for gathering comprehensive and reliable taxpayer compliance data. It is IRS’ only program to measure noncompliance on a random basis, allowing IRS to make statistically reliable estimates of compliance nationwide. IRS uses the data for measuring compliance levels, estimating the tax gap, identifying compliance issues, developing formulas for objectively selecting returns for audit, and allocating audit resources. Congress and federal and state agencies have used TCMP data for policy analysis, revenue estimating, and research. The 1994 TCMP survey is planned to be the most comprehensive TCMP effort ever undertaken. That is because IRS is undertaking four surveys at once to collect comparable information on businesses organized in different ways. Currently planned to include about 153,000 tax returns, this TCMP was designed to obtain compliance information for individuals (including sole proprietors); small corporations (i.e., those with assets of $10 million or less); S corporations; and partnerships. This TCMP was also designed to obtain information at the national level as well as for smaller geographical areas across the country. About 120,000 of the sample returns are to cover businesses; about 33,000 are to cover individuals. This is to be the first time that IRS will conduct a TCMP audit for all four types of taxpayers at the same time. The 1994 TCMP sample is stratified by market segments, as opposed to type of return, income amount, and asset size, which were the characteristics used to stratify samples in prior surveys. A market segment represents a group of taxpayers with similar characteristics, such as taxpayers engaged in manufacturing. IRS plans to stratify taxpayers into 23 business and 4 nonbusiness (individual) market segments. IRS will also have one market segment for foreign-controlled corporations. 
IRS believes that stratifying in this manner would allow it to more effectively use TCMP data for identifying noncompliance trends and selecting cases for audit. To assure comparability with previous TCMP surveys, the sample can also be stratified into the traditional groupings (i.e., type of return). As planned, in the 1994 TCMP, IRS would audit about 40,000 more returns than the aggregate for all entity types from the latest TCMP surveys conducted on these entities. IRS’ primary reasons for this increase are the use of market segments and ensuring statistical validity for IRS’ 31 District Office Research and Analysis sites, which are located throughout the country. IRS considers the 1994 TCMP effort to be particularly important because it would be the first comprehensive effort to validate its current market segment compliance strategy for identifying and correcting noncompliance, and also because existing compliance data are getting old. IRS expects to have completed audits on about 30 percent of the sample returns by October 1996 and to have final TCMP data in late 1998. IRS plans to collect data on the reasons for noncompliance and the specific tax issues associated with the noncompliance. IRS also plans to place greater emphasis on quality audits to ensure that accurate data are collected. Finally, in its TCMP training for auditors, IRS plans to emphasize the need to make effective use of internal data to reduce the amount of information requested from taxpayers, thus reducing the burden imposed on those taxpayers. Our objectives were to (1) determine how IRS addressed the problems discussed in our 1994 TCMP status report and, if the problems persist, how they will affect final TCMP results; (2) identify informational resources other than TCMP that IRS could use to target its audits more effectively; and (3) assess the value of TCMP data for alternative tax system proposals. 
To determine the actions IRS took on the concerns we raised in our 1994 report, we reviewed TCMP documents and discussed the actions taken with IRS officials responsible for designing and implementing the program. To determine whether other information sources could be used to replace TCMP, we relied on work we had done on TCMP and we discussed with IRS officials how IRS could use other potential data sources, including state and nongovernment sources. To determine the relevancy of TCMP data for new tax system proposals, we reviewed various published documents on these systems and compared them to the current income tax system. Our observations in this report are based in large part on the work we have done over the years on IRS’ compliance programs in general as well as specific work on TCMP. We issued a report in May 1994 on all such work. We did our work in September 1995 in accordance with generally accepted government auditing standards. On September 29, 1995, we obtained oral comments on a draft of this report from officials responsible for planning and implementing TCMP in IRS’ Compliance Research Division, including the National Director of Compliance Research. We have incorporated their comments where appropriate. Our 1994 TCMP report discussed concerns dealing with various aspects of IRS’ plans for the upcoming TCMP. Basically, these concerns centered on IRS being able to (1) meet major milestones for starting audits, (2) collect audit adjustment data on partners and S corporation shareholders, (3) collect data on potentially misclassified workers, (4) develop data collection systems, (5) make it easier for researchers to access TCMP audit workpapers, and (6) develop a TCMP research plan. In our 1994 report, we raised a concern about IRS’ ability to meet its October 1, 1995, milestone for starting the TCMP audits. 
Our concern was based on the amount of work that had to be done to design and test the TCMP data collection system, develop training material, train auditors, and produce case file information. In early September 1995, IRS postponed the start of its TCMP audits from October 1, 1995, to December 1, 1995. IRS attributed the delay to the uncertainties about its fiscal year 1996 budget. IRS does not expect the delay in starting the audits to affect the March 31, 1998, date for completing all 153,000 TCMP audits. The delay in the start of the audits could allow IRS to complete various TCMP database testing, which has not been completed as originally scheduled. For example, IRS has not completed all its tests of the consistency of reported business return data, which were scheduled to be completed by August 31, 1995. The tests are designed to identify and eliminate inconsistencies in the data and need to be completed before audit cases can be sent to the field. According to IRS officials, the tests associated with reported return data on individual taxpayers (i.e., Form 1040 information) have been completed and returns are ready to be sent to field offices for audit. IRS expects to complete all tests of the business portion (i.e., corporations and partnerships) of the database by November 30, 1995. We are concerned that if major modifications have to be made to the data, the December 1, 1995, date to start audits of business taxpayers could be delayed. In our 1994 report, we raised concern about whether the amount of information IRS would be collecting on partnerships and S corporations would be adequate to measure the compliance levels for these two entities. In response to the report, IRS officials said they would collect more data on partnerships and S corporations but would not collect data on partners and shareholders. IRS has since decided that it would capture data on partners and shareholders. 
This additional data could increase the value of TCMP data for determining tax impacts of partnership and S corporation audits and measuring the tax gap associated with these entities. In our 1994 report, we were concerned that IRS would not be collecting sufficient information on businesses that misclassified their workers as independent contractors instead of as employees. We were also concerned that IRS would not be gathering data on taxpayers who file returns as sole proprietors, but who potentially may be employees and not independent contractors. According to IRS officials, IRS will be capturing tax data on referrals made to employment tax specialists on classification cases. Also, IRS has developed a detailed employment tax data collection instrument to gather in-depth data on the results of those employment tax issues that are identified in the TCMP audits. In our 1994 report, we commented on IRS’ concurrent development of two data collection systems for use by auditors to directly enter their audit results onto computers. We were concerned that IRS had not made a decision on which of these two systems to use. Our concern related to the time IRS would need to test the selected system, develop training materials, and train auditors on how to use it. IRS stated that it needed a back-up system to the primary system, which is the Totally Integrated Examination System (TIES), as an insurance plan in case TIES proves less than satisfactory. TIES was being developed for use in IRS’ regular audit program and is being modified to meet TCMP specifications. According to IRS officials, complete system acceptability tests will be done on both data collection systems. IRS officials said that they expect the tests to be done by November 22, 1995, and that TIES will be available for use by the time audits are scheduled to start. 
If major modifications need to be made to the systems as a result of the tests, we are concerned that IRS may not meet its December 1, 1995, revised milestone for starting audits. In our 1994 report, we suggested that IRS find ways to make TCMP audit workpapers available through electronic media so that the workpapers would be readily available for compliance research. In commenting on our report, IRS agreed to explore the feasibility of retaining the computer disks for those cases where the workpapers are generated by computer. IRS officials subsequently informed us that it is not technically feasible to automate all audit workpapers. However, IRS has included a 100-line comment section in the TCMP data collection systems and in the TCMP database to capture clarifying information on complicated cases, which could provide researchers with some of the additional information found in the audit workpapers. Adding the comment section to the TCMP database could enhance the overall value of the TCMP data and may be a good substitute, in some cases, for the audit workpapers. Therefore, it is important that auditors be instructed in the types of information to include in the comment section. The automated comment feature also provides IRS with an opportunity to collect information on issues that cannot be analyzed using the data elements currently planned to be on the TCMP database. For example, one criticism of TCMP audits has been that the audits are burdensome or overly intrusive for taxpayers. IRS could use the automated comment section to gather information on taxpayer burden, such as the time taxpayers estimate they spent preparing for the audit and the types of documents auditors had to get from taxpayers in order to verify tax return data. In our 1994 report, we pointed out that IRS did not have a research plan that defines the research questions and the data to be collected that would answer the questions. IRS still does not have a research plan. 
In response to our 1994 report, IRS officials stated that from past TCMP surveys they know what elements are needed to do compliance estimation, measure the tax gap, and develop return selection formulas. They said that since virtually all the data from sampled returns are collected, IRS will have appropriate and comprehensive information to meet its research needs. While the lack of a research plan may not directly lessen the value of final TCMP results, such a plan could put IRS in a better position to quickly analyze final TCMP data. One criticism of prior TCMP surveys has been that usable TCMP data were not produced in a timely fashion. To help formulate research questions, IRS could analyze preliminary TCMP results. For example, IRS expects to complete about 46,000 TCMP audits by the end of fiscal year 1996, which could be enough cases to formulate research questions. IRS is reluctant to use preliminary unweighted or partially weighted TCMP data because the data are not statistically valid. Even though preliminary data may not be statistically valid, these data could provide early information on possible noncompliance trends and other problems, such as complexity issues, which could be useful to both IRS and Congress when they are examining potential modifications to the tax system. IRS uses TCMP data to develop objective, mathematical formulas, which it uses to score returns for audit selection. As a result, IRS can make more efficient use of its audit resources and avoid unnecessarily burdening compliant taxpayers. For example, in 1969, the year before IRS started using this scoring system, about 46 percent of IRS’ audits resulted in no change to an individual’s tax liability. By using TCMP-based formulas, IRS has been able to more accurately select tax returns requiring changes, thus reducing the no-change rate to less than 15 percent in 1994. 
We are not aware of any other available data that can be used to develop return selection formulas that would allow IRS to target its audits as effectively as TCMP data. IRS is attempting to develop an Automated Issue Identification System that has the potential of selecting returns that should be audited. The system is being tested on individual tax returns in two IRS locations, and, according to IRS officials, the preliminary results are promising. However, this system would be dependent in part on the TCMP-developed return selection formula to identify the returns that should be audited. Also, this system would require that almost all tax return data be transcribed onto computers similar to the amount of data transcribed from returns that are selected for TCMP audits. IRS does not expect to have the technological capability to have all return information on computer until after the turn of the century. There are third-party databases that potentially could be used to supplement the compliance data that IRS obtains from its TCMP surveys. However, these databases cannot be used to develop return selection formulas because they either contain just aggregate data on businesses and individuals or have information just on specific tax issues. For example, Bureau of Labor Statistics and U.S. Census data can be used to make aggregate profiles of the population based on various income characteristics, such as average household earnings. Some states have databases that IRS could use to supplement its audit and other compliance activities, such as state sales tax data. Commercial sources for information on industry norms are also available to supplement IRS compliance activities. IRS currently uses these data sources in some of its compliance research projects. There are a number of proposals to change the current tax system. The proposals are as follows: A Flat Tax would levy a single-rate wage tax on individuals and a single-rate cash-flow tax on businesses. 
An Unlimited Savings Allowance (USA) Tax would provide for a three-bracket individual income tax, with a full deduction for income saved rather than consumed. On the business side, a single rate would apply to income from both corporate and noncorporate businesses, with an immediate deduction for capital investment and purchases of inventory. A Simplified Income Tax would broaden the tax base, lower the tax rate, and eliminate most current deductions and credits. A Value Added Tax (VAT), a consumption tax, would be collected at each stage of the production process. A Retail Sales Tax, a consumption tax, would be collected at the retail level in the form of a sales tax. To determine the relevancy of TCMP data to these alternative tax systems, we analyzed the tax return elements IRS plans to examine in its tax year 1994 TCMP and published documents on the systems. (See app. I for the results of this analysis.) In doing our analysis, we did not consider the TCMP costs and benefits or taxpayer burden for each of the proposals. Generally, we found that TCMP data could have some relevancy for each alternative tax system. The degree of relevancy depended on the number of current tax elements that would be retained under an alternative tax system—the more elements that are retained, the more relevant the TCMP data would be. Potentially, data obtained from TCMP audits could be used to guide both the final design and administration of a new tax system. While complete 1994 TCMP data would not be available until late in 1998, data on about 46,000 sample cases should be available by the end of fiscal year 1996. As questions arise during the process of drafting new tax laws, data from some of the 46,000 cases, while not statistically valid at the district level, may indicate obvious trends in nationwide data that could be used in making decisions on changes to tax law. 
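The last two proposals above differ mainly in where the tax is collected rather than, at a uniform rate, in how much: a VAT taxes the value added at each production stage, and those amounts sum to the final retail price. A minimal sketch, with an invented supply chain and a hypothetical 10 percent rate:

```python
RATE = 0.10  # hypothetical 10 percent rate

# Value added at each stage of a hypothetical supply chain (dollars).
stages = {"farmer": 40, "miller": 25, "baker": 20, "retailer": 15}
final_price = sum(stages.values())  # stage values sum to the retail price

# VAT: tax collected at every stage, on that stage's value added.
vat_collected = sum(value * RATE for value in stages.values())

# Retail sales tax: tax collected once, at the retail level.
retail_tax = final_price * RATE

# At a uniform rate the two mechanisms raise the same total revenue.
assert abs(vat_collected - retail_tax) < 1e-9
```

The administrative difference is that the VAT spreads collection (and compliance risk) across every business in the chain, while a retail sales tax concentrates both at the final seller, which is one reason compliance data on retailers' gross receipts matters for the latter.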
With respect to the administration of tax laws, each of the current proposals would require that tax administrators implement some form of compliance strategy. Any such strategy would likely be dependent on compliance data. The 1994 TCMP should be able to provide much of the information necessary for implementing such a compliance strategy. For example, any new tax system would likely continue to rely on audits to ensure compliance. Accordingly, auditors would continue to need compliance information on business income and expenses and, for some of the proposals, compliance information on individual income and deductions. For the most part, this data could be provided by the 1994 TCMP. Some TCMP data would be useful in the design of all the proposed tax systems. For example, gross receipts, a key area of noncompliance in past TCMP audits, would be important in each of the new tax proposals. For this potential problem area, TCMP should show the compliance levels, provide specific tax issues associated with the identified noncompliance, and provide reasons for the noncompliance. The compliance data should help Congress to determine the potential extent of noncompliance that could be expected under the new tax system proposals. This would be important in setting tax rates. Similarly, data on the reasons why the noncompliance occurred and the specific tax issue involved could provide clues to legislative actions that may be needed to help prevent noncompliance under the new system or to help tax administrators identify noncompliant taxpayers more readily. Knowing these weak spots would be useful so that Congress could attempt to overcome these problems as it considers designs for new tax systems. To the extent that the proposals for tax reform retain elements of the current system, such as properly determining business receipts and expenses, TCMP data could play a prominent role in helping to evaluate and design those parts of the proposed new tax system. 
To the extent that a new tax system is adopted that differs radically from the existing system, TCMP data would still be useful. For example, TCMP information on gross receipts of retail businesses would be useful for designing and administering a retail sales tax system. Under this system, information on all business income and expenses could be relevant for profiling those retailers who would be more likely to underreport their gross receipts. Also, if a federal retail sales tax included consumer services not now covered by state sales taxes, TCMP could be the only source of information on underreporting of gross receipts by the sellers of these services. It must be recognized that the results from TCMP would reflect noncompliance under the income tax law and the administrative practices in place today. Incentives or opportunities to evade tax on certain transactions may increase or decrease under a new system. For example, if a business taxpayer fails to report a sale of an asset under the current income tax, the business might avoid paying tax on a capital gain, a fraction of the selling price. Under many consumption tax proposals, all the proceeds of an asset sale are taxable, but at a lower rate. The incentive to not report the sale may increase or decrease relative to the current system, depending on the circumstances. In addition, opportunities to not comply in some areas may change significantly depending on whether administrative tools such as withholding and information reporting were included as part of the system. In order to effectively select returns for audit under a new tax system, tax administrators may be able to use 1994 TCMP results in combination with information on the relative incentives and opportunities to avoid tax under that system, until direct measures of noncompliance under the new system became available. 
The preceding discussion dealt only with the usefulness of TCMP results for administering each of the proposed tax systems once the new system had been fully implemented. The results of the planned TCMP would also be of use in administering the current income taxes in the interim period before a new system would be completely phased in and the old system completely phased out. The 1994 TCMP data would become increasingly important if it proved impossible to fully implement a new tax system until after the turn of the century. This is because IRS would need to continue to audit returns under the current tax law, and existing return selection criteria are based on past TCMP survey data, which are growing older every year. The usefulness of the forthcoming TCMP during the interim would depend on the effective date of the replacement system and on the extent to which the enacting legislation would include transition provisions. The 1994 TCMP data needed to develop audit selection formulas for the current tax system are not scheduled to be available before late 1998. However, if a new tax system became effective before that time and had few transition provisions, IRS could still use interim TCMP data on noncompliance issues to direct audits of tax returns filed under current rules. If tax reform legislation were to take longer to pass, if the legislation provided for a significant period of time between the date of enactment and the effective date of the new system, or if the legislation contained numerous transition provisions, then the value of the planned TCMP would be greater. Items such as unused tax credits and deductions for depreciation, depletion, and net operating losses might be subject to transition rules. For example, it has been suggested that if a flat tax were enacted, businesses might be allowed to claim depreciation deductions during a transition period of several years for assets they purchased under the old system. 
Others have suggested that taxpayers could be subject to both the current income tax and a new consumption tax for a period of years, with the income tax rate declining as the consumption tax rate increases. If the planned TCMP were canceled and the current income taxes were not completely phased out before the next century, then IRS would be compelled to select income tax returns for audit on the basis of compliance information that was over 10 years old. Administrators of the new tax system also would have only this same dated compliance information to guide their enforcement efforts for several years before data from any future TCMP became available. IRS has taken action on most of the concerns we raised in our December 1994 report. The delay in starting the TCMP audits because of budgetary concerns is fortuitous because IRS had not completed testing the tax return database and data collection systems for the TCMP. These tests have to be completed before audits can start. If the tests show that major modifications have to be made to the database or data collection systems, then IRS may not meet its December 1, 1995, revised date for starting audits. There is still time for IRS to develop a research plan so that it could analyze final TCMP results more quickly. IRS could begin now to formulate research questions and could also use preliminary TCMP data as they become available to develop other questions. It is important that there are no further delays because the existing TCMP data are old, and, to our knowledge, there are no other data sources that IRS could use to develop formulas for selecting returns for audit. IRS is attempting to develop a system that could be used for selecting returns, but this system would not be operational until after the turn of the century. TCMP data could also be of value for helping with the design and administration of alternative tax systems. 
The value of the data would depend on how much of the current tax system would be retained under the new system. On September 29, 1995, we discussed a draft of this report with IRS Compliance Research Division representatives, including the National Director of Compliance Research. They generally agreed with our assessment of the actions taken on the concerns we raised in our 1994 report, the availability of other information sources to replace TCMP, and the relevancy of TCMP data for new tax system proposals. Copies of this report are being sent to various interested congressional committees, the Director of the Office of Management and Budget, the Secretary of the Treasury, the Commissioner of Internal Revenue, and other interested parties. It will also be made available to others upon request. Major contributors to this report are listed in appendix II. Please contact me on (202) 512-9044 if you or your staff have any questions about the report. This appendix discusses some ways the individual and business segments of the 1994 TCMP could be used to evaluate the design and administration of the five alternative tax system proposals. The proposals are described below. A Flat Tax would levy a single-rate wage tax on individuals and a single-rate cash-flow tax on businesses. An Unlimited Savings Allowance (USA) Tax would provide for a three-bracket individual income tax, with a full deduction for income saved rather than consumed. On the business side, a single rate would apply to income from both corporate and non-corporate businesses, with an immediate deduction for capital investment and purchases of inventory. A Simplified Income Tax would broaden the tax base, lower the tax rate, and eliminate most current deductions and credits. A Value Added Tax (VAT), a consumption tax, would be collected at each stage of the production process. A Retail Sales Tax, a consumption tax, would be collected at the retail level in the form of a sales tax. 
Under the VAT and Retail Sales Tax proposals, individuals would bear taxes as they consume goods and services, but, unless they were sole proprietors, they would not file tax returns. Therefore, the nonbusiness (e.g., individuals who are not sole proprietors) portion of the TCMP (about 18 percent of the sample) has no relevancy for these two types of taxes. The flat tax, the USA tax, and the simplified income tax would require returns for individual taxpayers who do not own businesses. Thus, TCMP data should have some relevancy in evaluating these alternative tax systems. Table I.1 shows the TCMP data elements for individuals that could be relevant to policymakers and tax administrators in developing and administering the flat tax, USA tax, and simplified tax systems. The TCMP data elements are essentially the same as the line items found on the Form 1040, Individual Income Tax Return. The "yes" in the columns in table I.1 indicates that the TCMP element would be relevant for evaluating the proposals. Those columns in table I.1 that do not contain "yes" indicate that TCMP data collected for these elements would not be relevant for that particular tax system. The sample sizes shown in table I.1 are the number of individual returns IRS plans to audit in the TCMP. The data elements listed in table I.1 include wages, salaries, and tips; taxable refunds or credits of state and local income taxes; capital gain or (loss); income from rental real estate, partnerships, and S corporations; IRA deductions for self and spouse; one-half of self-employment tax; Keogh retirement plan and SEP deduction; penalty on early withdrawal of savings; state and local income taxes; child and dependent care expenses; credit for the elderly or disabled; Social Security tax on tip income; and tax on qualified retirement plans. As indicated in table I.1, some of the TCMP individual tax elements should be relevant for evaluating the flat tax proposal. 
However, there would be no need to evaluate the compliance associated with investment-type income or deductions, such as charitable contributions or state and local taxes, because these elements are not part of the proposal. The relevant individual tax elements are filing status, exemptions, wages and salaries, pension income, and unemployment compensation. TCMP data could be used to determine how accurately taxpayers have reported these elements and the reasons why taxpayers have failed to comply under the current tax system. For example, the TCMP data may indicate that even under a flat tax, current requirements are too complex for many taxpayers to determine their proper filing status. The TCMP data could provide information on ways current law could be simplified to reduce complexity and improve compliance. As indicated in table I.1, almost all TCMP income elements for individuals would be included in the USA tax system and, thus, should be useful for evaluating this system. Under the USA system, taxpayers would be allowed unlimited deductions for net increases to savings; however, except for IRA deductions, taxpayers are not currently required to report these data. Therefore, TCMP data would not be useful for determining whether taxpayers would accurately report all investment deposits. On the other hand, TCMP data should be useful for determining the reporting accuracy of investment proceeds. Under the USA system, all deductions under the current income tax, except for mortgage interest and charitable contributions, would be eliminated. TCMP data should be useful in developing compliance statistics and programs for these two items. However, TCMP could not be used to evaluate the postsecondary education deduction allowed under the USA proposal. 
Similarly, TCMP data could not be used to evaluate the fringe benefits that would be taxable under the USA proposal because these benefits, such as employer-paid medical insurance, are currently not taxable and would not be studied in the TCMP survey of individuals. However, data on fringe benefits would be gathered on the business portion of the TCMP. As indicated in table I.1, almost all of the TCMP elements should be relevant for evaluating compliance with income reporting requirements. On the deduction side, only TCMP data on mortgage interest would be relevant for evaluating this system. As under the USA tax system, fringe benefits would be taxed; thus, the individual portion of the TCMP would not be useful for evaluating this type of income. All five alternative tax systems cover businesses, which include sole proprietorships, corporations, S corporations, and partnerships. About 82 percent of the TCMP sample covers businesses. Table I.2 indicates the TCMP data elements that should be useful for developing and administering the flat tax, USA, VAT, and retail sales tax systems. Table I.2 does not contain information on the simplified income tax system because we were not able to obtain any information on the business portion of this tax system. However, on the basis of information available on the individual portion, it would appear that almost all business income and deduction items in the current system would be relevant under the simplified income tax system. The data elements listed in table I.2 include income and cost of goods sold and net gain or (loss) from sale of business property; the businesses covered include small corporations, S corporations, partnerships, and sole proprietorships. Under the flat tax proposal, businesses would be assessed a tax on gross receipts less the costs of providing the goods or services. 
Therefore, as indicated in table I.2, almost all of the TCMP tax elements dealing with business gross receipts and deductions should be relevant for administering a flat tax, such as designing compliance strategies, identifying returns for audit, and estimating the tax gap. If this proposal were implemented, TCMP data on business investment income and interest expenses would not be relevant. As indicated in table I.2, many of the income and deduction items currently reported on business returns would still be reported on returns under the USA tax proposal. Thus, the TCMP data would be relevant for developing compliance programs, selecting returns for audit, and estimating the tax gap. Items that would not be relevant include investment-type income (e.g., interest and dividends) and deductions for wages and salaries, interest payments, and contributions to employee pension programs. As indicated in table I.2, if a VAT were adopted as a replacement for the existing income tax, TCMP data on business gross receipts and purchases would be relevant for looking at potential compliance problems with VAT reporting. Thus, TCMP information would continue to be useful in developing compliance programs, selecting returns for audit, and estimating the tax gap. As indicated in table I.2, return information on gross receipts should be relevant for evaluating compliance problems under a retail sales tax system. A retail sales tax would generally apply only to businesses in the retail trade market segments. This group comprises about 24 percent of the planned 1994 TCMP sample. Lou Roberts, Evaluator-in-Charge
Pursuant to a congressional request, GAO provided information on the Internal Revenue Service's (IRS) Taxpayer Compliance Measurement Program (TCMP) for tax year 1994, focusing on: (1) how IRS addressed the problems identified in a previous GAO report; (2) how persistent problems affect final TCMP results; (3) other informational sources that IRS could use to target its audits more effectively; and (4) the relevancy of TCMP data for alternative tax system proposals. GAO found that: (1) IRS has taken appropriate actions to correct the previously identified problems in the implementation of TCMP; (2) due to uncertainties about its fiscal year 1996 budget, IRS has delayed TCMP audits until December 1, 1995; (3) the audit delay will allow IRS to complete testing of TCMP database components and data collection systems; (4) the audits could be further delayed if the tests reveal additional problems; (5) IRS plans to collect data on partners, shareholders, and misclassified workers, which should allow it to better measure compliance levels and TCMP audit results; (6) computerized auditor comments should make it easier for researchers to analyze TCMP results and allow IRS to collect data on other tax issues that are not a part of TCMP; (7) IRS still needs to develop a research plan that would allow it to analyze TCMP data in a more timely manner; (8) no alternative information sources exist that could help IRS better target its audits; (9) IRS is developing a new identification system for tax return audits, but it will not be available until after year 2000; and (10) TCMP could be useful in designing and administering a new tax system and identifying compliance trends.
The federal government uses grants to achieve national priorities through nonfederal parties, including state and local governments, educational institutions, and nonprofit organizations. Grant programs are established through legislation and vary in numerous ways including type, size, nature of recipients, and types of programs they fund. While there is significant variation among different grant program goals and grant types, most federal grants follow a common life cycle comprising four stages for administering the grants: (1) pre-award stage; (2) award stage; (3) implementation; and (4) closeout (see figure 1). During the award stage, the federal awarding agency enters into an agreement with the grantee stipulating the terms and conditions for the use of grant funds including the period funds are available for the grantee’s use. The awarding agency also opens accounts in a federal payment management system through which the grantee receives payments. During the implementation stage, the grantee carries out the requirements of the agreement and requests payments, while the awarding agency approves payments and oversees the grantee. The grantee and the awarding agency close the grant once the grantee has completed all the work associated with a grant agreement, the grant period of performance end date (or grant expiration date) has arrived, or both. The closeout stage includes preparation of final reports, financial reconciliation, and any required accounting for property. Closeout procedures ensure that the grantee has met all financial requirements, provided all final reports, and returned any unspent balances. All stages of the grant life cycle, including grant closeout, are subject to a wide range of requirements derived from a combination of OMB guidance, agency regulations, agency policy, and program-specific statutes. 
OMB is responsible for developing government-wide policies to ensure that grants are managed properly and the grant funds are spent in accordance with applicable laws and regulations. For decades, OMB has published circulars to aid grant-making agencies on various subjects, including administration, audit, record keeping, and allowability of costs. In December 2013, OMB consolidated its grants management circulars into a single document, Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards (Uniform Guidance), to streamline its guidance, promote consistency among grantees, and reduce administrative burden on nonfederal entities. The Uniform Guidance includes revised rules that set the standard requirements for financial management of federal awards across the federal government. In December 2014, OMB, along with federal grant-making agencies, issued a joint interim final rule implementing OMB's Uniform Guidance for new grant awards made on or after December 26, 2014. Federal agencies that award and administer grants and agreements are responsible for issuing regulations—with which grantees must comply—that are consistent with the Uniform Guidance, unless different provisions are required by federal statute or are approved by OMB. Agency regulations issued under the Uniform Guidance typically impose closeout procedures upon both the grantee and the awarding agency. To close an awarded grant, the grantee and the awarding agency must complete certain actions within the time frames specified by agency regulations and the Uniform Guidance (see figure 2). Generally, within 90 calendar days from the period of performance end date, the grantee must submit all financial, performance, and other reports as required by the terms and conditions of the award and liquidate all permissible expenses incurred under the award. 
Grantees then are to promptly refund any remaining balances to the awarding agency and account for any real and personal property acquired with federal funds or received from the federal government. The awarding agency must make prompt payments to the grantee for allowable reimbursable costs under the award being closed out. If required by the terms and conditions of the award, the awarding agency must also make a settlement for any upward or downward adjustments to the federal share of costs after the closeout reports are received. Upon the receipt and acceptance of all required final reports from the grantee, the awarding agency should complete all closeout actions within 1 year. Some federal agencies' grant policies require less than the 1-year closeout period allowed by the Uniform Guidance. For example, HHS's grant policies specify that all closeout actions must be completed within 180 days of the receipt and acceptance of all required grantee final reports, allowing for a maximum of 270 days for the total closeout process. Closing out grants could allow an agency to redirect resources toward other projects and activities or return unspent funds to the Treasury (see figure 3). Generally, if the undisbursed balances that are deobligated from closed grant accounts are still available for incurring new obligations, the agency may use the funds to enter into new grant agreements. This may allow the federal agencies to use existing resources to fund new grant projects. If the undisbursed balances are returned to expired appropriation accounts, the agency may not use the deobligated funds to make new grants. However, the agency may use the deobligated funds to make adjustments to, or liquidate, existing obligations that were incurred before the appropriations account expired. Expired appropriations accounts remain available for 5 years to make adjustments, after which the undisbursed balances are canceled and returned to the Treasury. 
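The closeout clock described above composes simply: grantee final reports are due 90 calendar days after the period of performance ends, and the awarding agency then has up to 1 year (180 days under HHS policy, for a 270-day total) to complete closeout. The sketch below illustrates that arithmetic; the function name is hypothetical, day counts are simplified (1 year treated as 365 days), and in practice the agency's clock runs from receipt and acceptance of the final reports rather than from the report due date.

```python
from datetime import date, timedelta

def closeout_deadlines(period_end: date, hhs: bool = False) -> dict:
    """Illustrative closeout milestones under the Uniform Guidance.

    Grantee final reports: due within 90 calendar days of the period of
    performance end date. Agency closeout: within 1 year of the reports
    (HHS policy: 180 days, for a 270-day total). Simplified sketch only;
    the real agency clock starts at receipt and acceptance of reports.
    """
    reports_due = period_end + timedelta(days=90)
    agency_days = 180 if hhs else 365
    return {
        "reports_due": reports_due,
        "closeout_due": reports_due + timedelta(days=agency_days),
    }

# A grant expiring September 30, 2015: reports due December 29, 2015;
# under HHS's 270-day total, closeout due June 26, 2016.
d = closeout_deadlines(date(2015, 9, 30), hhs=True)
print(d["reports_due"], d["closeout_due"])
```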
At this point, these funds are no longer available for agency use. This helps ensure that federal agency resources are properly spent and helps agencies maintain accurate accounting of their budgetary resources. It may also reduce future federal outlays relative to the federal government’s original estimated amount of spending for these programs. One way agencies can track whether grants are closed out and undisbursed grant balances are deobligated in a timely manner is through federal payment systems. Some agencies make grant payments directly to grantees using their own payment systems, while others enter into arrangements with payment systems that serve multiple agencies to make payments on their behalf. Payment systems represent an important control point for managing federal grant funds. After a grant agreement reaches the period of performance end date, an agency may need to close out a grant in multiple systems including in its payment system, the agency’s general ledger, or in a separate grant management system. Failure to close out a grant in the payment system could prevent the timely deobligation of unspent funds and limit agencies’ ability to regain budget authority that could be used for other purposes. The largest civilian federal payment system is the Payment Management System (PMS). Operated by HHS’s Program Support Center (PSC), PMS allows grantees and awarding agencies to manage all payment-related activities, including grant payment requests, drawing down federal funds from preauthorized grant accounts, and disbursement reporting. Grantees request payments from PMS, which then transmits authorized payments to either the Federal Reserve Bank or Treasury for deposit into the grantee’s bank account. Each PMS account represents a different grant agreement with a specified period of performance end date. To help federal grant-making agencies identify and close out grant accounts in a timely manner, PSC makes available a quarterly “closeout” report. 
The closeout report lists expired grant accounts that meet the following conditions: (1) they remain open more than 3 months past the grant period of performance end date and (2) they have had no disbursements in the preceding 9 months. Appropriations acts for selected agencies have called attention to the issue of undisbursed balances in expired grant accounts. For example, Section 530 of the Commerce, Justice, Science, and Related Agencies Appropriations Act of 2016 required the Director of OMB to instruct any affected agencies receiving funds under that act to track undisbursed balances in expired grant accounts and report on their efforts to address the issue in their annual performance plans and performance and accountability reports. Further, Section 530 requires OMB to instruct the affected agencies to report on their methods to track undisbursed amounts in expired grants and future actions to resolve the issue, the amount of the undisbursed balances that may be returned to the U.S. Treasury, the total number of expired accounts with undisbursed balances in the preceding 3 fiscal years, and the total finances remaining in expired accounts that have not been obligated to a specific project. On January 28, 2016, the Grants Oversight and New Efficiency Act (GONE Act) was signed into law. This law requires that not later than 180 days after enactment the Director of OMB instruct the head of each agency, in coordination with the Secretary of HHS, to provide a report to Congress and the Secretary on grants for which the grant's period of performance has been expired for more than 2 years, including those with undisbursed balances and with zero dollar balances remaining in the accounts. The report, to be delivered by December 31, 2017, is also to describe the challenges leading to delays in grant closeout and explain why each of the 30 oldest grant awards has not been closed out. 
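The two conditions that drive the quarterly closeout report amount to a simple filter over expired accounts. The sketch below is illustrative only: the account records, field layout, and function name are hypothetical (PMS's actual data model is not public in this report), and the 3- and 9-month windows are approximated as 90 and 270 days.

```python
from datetime import date, timedelta

# Hypothetical account records: (account ID, period-of-performance end
# date, date of last disbursement, undisbursed balance in dollars).
accounts = [
    ("G-001", date(2015, 1, 31), date(2014, 11, 15), 12_500.00),
    ("G-002", date(2015, 8, 31), date(2015, 9, 10), 0.00),
    ("G-003", date(2005, 6, 30), date(2005, 5, 1), 1_250_000.00),
]

def closeout_report(accounts, as_of: date) -> list:
    """Flag accounts meeting the report's two conditions: open more than
    3 months past the period of performance end date, and no
    disbursements in the preceding 9 months (months approximated as
    90/270 days)."""
    flagged = []
    for acct_id, end_date, last_disb, balance in accounts:
        past_three_months = as_of > end_date + timedelta(days=90)
        no_recent_disb = last_disb < as_of - timedelta(days=270)
        if past_three_months and no_recent_disb:
            flagged.append(acct_id)
    return flagged

# G-002 is excluded: its period of performance ended less than 3 months
# before the report date.
print(closeout_report(accounts, as_of=date(2015, 9, 30)))
```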
It is expected that this identification will help lead to a reduction in the number of expired grants that have not been properly closed out from the financial payment systems, improve financial accountability over grant programs, and save taxpayer money on costs associated with maintaining the grants in open status. As of September 30, 2015, $993.5 million in undisbursed balances remained in 8,832 expired grant accounts in PMS (see figure 4). The total undisbursed balances comprised about 0.2 percent of the $479 billion disbursed through PMS in fiscal year 2015, unchanged from the percentage of the $415 billion disbursed through PMS in fiscal year 2011. The undisbursed balances comprised about 3 percent of the amount authorized for these expired grant accounts, similar to the percentage we found in 2011. Among the departments and agencies that are customers of PMS, the undisbursed balances as a percentage of the authorized amounts available for these expired grant accounts ranged from less than 1 percent to 31 percent. These expired grant accounts should be considered for closeout based on OMB’s Uniform Guidance, which specifies that within 90 calendar days from the period of performance end date, grantees must submit all financial, performance, and other reports as required by the terms and conditions of the award. Grantees must also liquidate all permissible expenses incurred under the award. Moreover, within 1 year of receiving the final reports, awarding agencies should close out grant accounts that are past their period of performance end date (expired grant accounts). As figure 4 shows, in 2015, the undisbursed balances increased to $993.5 million while the number of expired grant accounts decreased to 8,832, in comparison with 2011. The distribution of expired grant accounts and their associated undisbursed balances categorized by the number of years the accounts exceed their grant expiration dates were roughly similar for 2015 and 2011. 
More than half of the accounts exceeded their expiration date by 1 to 3 years. However, as of September 30, 2015, we found that the number of expired grant accounts exceeding their grant expiration date by at least 10 years had almost doubled to 223, with undisbursed balances increasing fourfold to $39.1 million. Given the general 3-year federal record retention period, there is an increased risk that these accounts, expired for 10 years or more, may not have the necessary financial documents and other information available for account reconciliation. As of September 30, 2015, 151 expired grant accounts with undisbursed balances of $1 million or more comprised approximately 2 percent of the 8,832 expired grant accounts (see figure 5). However, grants in this category had a total of $514.7 million in undisbursed balances that made up 52 percent of the $993.5 million in PMS undisbursed balances, an increase from $316 million or about 40 percent in 2011. This indicates that targeting efforts to close out a small number of expired grant accounts with high undisbursed balances could reduce the total undisbursed balance significantly. It is worth noting that many of the expired grant accounts did not have large undisbursed balances—about half of the expired grant accounts had undisbursed balances of less than $10,000. As of September 30, 2015, $651.3 million in undisbursed balances in expired HHS grant accounts made up 66 percent of the total $993.5 million in undisbursed balances in PMS (see figure 6). This is down from HHS's 75 percent of the total $794 million in undisbursed balances in 2011. HHS had 7,158 expired grant accounts (81 percent of all expired PMS grant accounts in the data we analyzed), a decrease from 8,262 accounts in 2011. These accounts comprised the bulk of the expired grant accounts among the 11 federal agencies and departments that are PMS customers. 
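The headline shares reported for fiscal year 2015 follow directly from the dollar figures in the text, as a quick back-of-the-envelope check shows (all figures are taken from this report; the variable names are ours):

```python
# Figures cited in the text, in dollars.
total_undisbursed = 993_500_000        # undisbursed balances in expired accounts, FY2015
pms_disbursed_fy15 = 479_000_000_000   # total PMS disbursements, FY2015
top_accounts = 514_700_000             # balances in the 151 accounts of $1 million or more

share_of_disbursements = total_undisbursed / pms_disbursed_fy15
share_in_top_accounts = top_accounts / total_undisbursed

print(f"{share_of_disbursements:.1%}")  # prints "0.2%": balances vs. FY2015 disbursements
print(f"{share_in_top_accounts:.0%}")   # prints "52%": held by about 2% of the accounts
```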
Similar to all PMS expired grant accounts and undisbursed balances, about half of expired HHS grant accounts and undisbursed balances exceeded their expiration dates by 1 to 3 years. In 2015, expired HHS grant accounts exceeding their expiration dates by at least 10 years increased to $16.3 million in undisbursed balances and 129 accounts. HHS's undisbursed balance as a percentage of the authorized amount available for these expired grant accounts increased to 3.3 percent, up from 2.7 percent in 2011. These expired grant accounts far exceeded HHS's 270-day window for closing out grants after the grant expiration date as outlined in HHS's Grant Policy Administration Manual. Of the 11 HHS sub-organizations with grant accounts in PMS, the Administration for Children and Families (ACF) continued to make up a large proportion of the expired HHS grant accounts and undisbursed balances in 2015 (see appendix II for HHS sub-organizations). However, in comparison to 2011, ACF's proportion of the total expired HHS grant accounts decreased. ACF's $324.4 million in undisbursed balances, up slightly from $321.7 million in 2011, made up about half of HHS's undisbursed balances in 2015, compared to 54 percent in 2011 (see figure 7). In 2015, ACF's 2,395 expired grant accounts decreased to about one-third of HHS's total, as compared to the 53 percent in 2011. Despite the drop in the number of expired grant accounts, a significant number of expired accounts remain open. HHS and ACF grant managers have failed to appropriately monitor expired grant accounts and adhere to HHS closeout policies. As of September 30, 2015, PMS data identified 5,906 expired grant accounts that had no undisbursed balance remaining as ready for closeout processing, pending instructions from the awarding agency to finalize the process. The percentage of expired grant accounts ready for immediate closeout varied across all PMS customers. 
According to PMS officials, the 5,906 accounts incurred about $29,000 in fees for the month of September 2015, a significant reduction from the 28,000 accounts with fees totaling about $173,000 that we reported for 2011. In 2015, 3,922 expired HHS grant accounts made up 66 percent of the 5,906 grants PMS flagged for closeout, down from 79 percent (more than 21,000) in 2011. Promptly closing out expired grants in the PMS system would minimize the monthly service fees charged to the agencies. As HHS’s Program Support Center (PSC) does not close out grant accounts until instructed to do so by the awarding agency, expired accounts flagged by PMS for closeout continue to incur fees until they are closed. PMS fees are set to recover operating costs. PSC uses two billing rates for federal grant-making agencies—one rate generally applies to grants awarded to state, local, and tribal governments and the second rate generally applies to grants awarded to nonprofit agencies, hospitals, and universities. Agencies are billed monthly for all open accounts, making it difficult to disaggregate the fees attributable to different types of grantees and those attributable to expired accounts with (1) undisbursed balances, (2) no remaining undisbursed balance (not flagged for closeout), and (3) no remaining undisbursed balances that have been flagged for closeout. The September 2015 fee of $29,000 for the 5,906 grant accounts PMS has flagged for closeout is small relative to the grant award amounts. But it is important to note that it represents only a portion of the total fees customers pay to keep open expired grant accounts that should have been closed out by the awarding agency. In particular, the estimate does not include expired accounts which should have been closed and have undisbursed balances, nor does it include those with zero balances that have not been flagged for closeout by PMS.
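The scale of these fees can be put in per-account terms with a rough calculation. The averages below assume a single flat rate, whereas PMS actually bills at two rates depending on grantee type, so this is only an illustrative approximation:

```python
# Rough per-account view of the September 2015 service fees for expired
# accounts already flagged for closeout (totals from the report; a flat
# average rate is an assumption, not PMS's actual billing structure).
flagged_accounts = 5_906
monthly_fees = 29_000              # dollars, September 2015

avg_fee = monthly_fees / flagged_accounts
print(f"average fee per flagged account: ${avg_fee:.2f}/month")

# If the same backlog persisted unchanged for a full year:
annual_if_unchanged = monthly_fees * 12
print(f"fees over a year at this rate: ${annual_if_unchanged:,}")
```

A few dollars per account per month seems trivial, but as the report notes, some accounts have stayed flagged for years, and the total covers only the flagged subset of expired accounts.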
However, without examining each expired grant account individually, we cannot determine whether expired grant accounts that have exceeded their expiration date should be closed or not. For example, the awarding agency may be renewing the grant or there could be pending action as a result of an audit. The monthly charges for expired grant accounts which have been flagged by PMS for closeout can accumulate over time and can be considerable. For example, of the 5,906 accounts PMS flagged for closeout in September 2015, 359 of these accounts had also been flagged for closeout in September 2011. Of these 359 expired grant accounts, 260 belong to HHS. If the grant has otherwise been administratively and financially closed out, then agencies paying fees for expired accounts with a zero dollar balance are paying for services that are not needed. The presence of expired grant accounts with no undisbursed funds remaining also raises concerns that administrative and financial closeout—the final point of accountability for these grants, which includes important tasks such as the submission of financial and performance reports—may not have been completed. Grant closeout is a necessary step for all types of grant programs that serve a variety of missions and provide different services. As the final stage of the grant life cycle, grant closeout requires the submission and approval of all final grantee reports. Agency officials told us that grant closeout delays can occur for a number of reasons that may be placed into several larger categories, and sometimes these reasons can be attributed to more than one category: grantee failure to submit final reporting in a timely manner; agency failure to review, process, and reconcile final reporting in a timely manner; and external processes or factors. 
Officials we spoke with at Commerce, Justice, NASA, and NSF—agencies that reported on undisbursed balances in expired grant accounts in their 2010 and 2011 annual performance reports—as well as ACF, which accounted for the largest share of undisbursed expired grant balances at HHS as of September 30, 2015, told us that the grantee and awarding agency face the following challenges that may delay grant closeout (see figure 8). Federal grant awards may fund projects that generate research products or technical deliverables. Untimely submission of these deliverables on the part of the grantee can delay grant closeout. For example, certain grant awards at NASA require specialized reporting if the work funded by the grant is for developmental research or leads to new research accomplishments. According to NASA officials, grant award closeout can be delayed if the grantee has yet to submit required new technology reports, summary of research reports, or property reports. Justice officials also told us missing technical deliverables would prevent the National Institute of Justice (NIJ) from closing out grants in a timely manner. Justice officials said NIJ grants typically require a technical deliverable and they cannot close the grant until it has received the deliverable and deemed it acceptable. There are instances in which a recipient of a federal award may cease to exist. Grantees that cease operations after receiving a grant award may not be able to submit their final financial or performance reports to the federal awarding agency. For example, Commerce officials said if a grantee is late in submitting reports, further investigation may find that the grantee is no longer operational or is bankrupt. Commerce officials noted that, while grant awards involving grantee bankruptcies are uncommon, they could present challenges for grant closeout.
Some grant programs may have cost-sharing or matching funds requirements in the award that are designed to supplement the resources available to the project from grant funds and to foster the dedication of state, local, and community resources to the purposes of the project. In addition, some federally funded grants support programs that earn program income or income generated from activities funded by the grant. Grant awards affected by these types of arrangements may require additional layers of review by the grantee and agency and delay grant closeout. For example, Commerce officials said grant programs with cost-sharing can experience closeout challenges if the grantee inaccurately documents the agreed upon shared costs and fails to meet the matching costs. Commerce officials said in cases where the grantee is short on the matching costs, Commerce’s bureaus would be required to investigate the issue and in some cases require that grantees return excess funds, which can delay grant closeout. Grant-making agencies may use grants management systems to track their grants and payment systems, sometimes operated by a different federal agency, to make payments to their grantees. As we have previously reported, the separation of grant management and payment functions in different systems could make it possible for an agency to close a grant in a grants management system but not close the grant in a separate payment system. In addition, it is possible for a grant expiration date to be extended in the grant management system and not in the payment system. ACF officials said the separation between grants management systems and payment systems can present challenges when awarding agencies attempt to reconcile final reports and close out grants in the two separate systems. According to ACF officials, grant closeout can become a manual process that presents challenges for agencies that may have a limited number of grants specialists. 
Specifically, officials explained that this process requires ACF grant managers to compare their internal grant reports with the PMS closeout report, identify grant awards to be closed, and request to close all eligible awards in PMS. ACF officials told us that, after completing this step, they have noted instances where grants they previously requested be closed continue to appear on subsequent PMS closeout reports—meaning that the grants for which they requested closure were not closed in the payment system. According to these officials, it could be possible for ACF to close a grant award in its grants management system while the grant remains open in the payment system due to a difference in the payments reported to the awarding agency and PMS. As a result, the disconnection between these systems can lead to undisbursed funds remaining in the payment system when the grant is closed in the agencies’ grants management system. In these cases, ACF grant managers may need to deobligate funds from the grant and this requires taking additional steps to reopen the grant in the grant management system. The disconnection between these two systems and, in some cases, the manual steps needed to reconcile the differences between systems can lead to additional administrative burden and resource use to close out grants. Recipients of federal grant awards may be subject to certain federal oversight requirements including having audits conducted. Under a federal audit, the grantee must prepare appropriate financial statements, follow up and take corrective action on audit findings, and provide the auditor access to records, as needed. Agency officials told us grant audits, and the related work, can delay grant closeout. For example, Justice officials noted that open inspector general (IG) audits and investigations, agency financial and programmatic monitoring activities, or open legal or compliance issues can delay grant closeout. 
Commerce officials also told us grants undergoing audit present closeout challenges. They also said that grants under audit resolution remain open until the audit is complete and a final decision has been issued. Certain types of grants may have audits that take years to complete. For example, NASA officials told us they have faced challenges in closing out grants to for-profit entities that require incurred cost audits by the Defense Contract Audit Agency (DCAA). NASA officials said these audits for commercial grants may take up to 3 years to complete as the federal audit agency faces an ongoing audit backlog. Some grant programs fund inherently complex projects. These projects may involve multiple levels of review to comply with state statutes, which can delay grant closeout. For example, Commerce officials said grant closeout delays can occur if grant reports are required to be reviewed by various jurisdictions—including universities and state and local governments—before the federal awarding agency. These officials noted that construction grant programs such as the Broadband Technology Opportunities Program (BTOP), a federal grant program to promote the expansion of broadband infrastructure, have a complex review process and have presented challenges for grantees in submitting reports in a timely manner. The officials said BTOP construction grant projects involve many equipment purchases that grantees must have reviewed through state and local jurisdictions. The officials explained that grant projects, such as the BTOP projects, that have a complex multi-jurisdictional review process require extra filings and additional grants management, which can cause grant closeout delays. Indirect costs represent a grantee’s general support expenses that cannot be specifically identified with an individual grant project. Indirect costs include, for example, building utilities and administrative staff salaries.
To determine the share of indirect costs that may be charged to federally funded awards, grantees use a mechanism called the indirect cost rate, which may be applied to a portion of direct costs and is available to the grantee for a given period. A grantee may continue using a provisional rate until agreement is reached on a new final rate. The procedure for requesting an indirect cost rate varies based on the type of grantee organization, but it entails submitting an indirect cost rate proposal and negotiating with the grantee’s cognizant rate-setting agency. Agency officials told us negotiating an approved indirect cost rate can delay grant closeout. For example, NASA officials said grant closeout challenges can occur if the final approved rate differs from the provisional indirect cost rate used when the grant was awarded. The officials said that in those cases the grantee must make reporting adjustments based on the final approved rate for the period of performance. In some cases, NASA officials said negotiating a final indirect cost rate agreement can take years. OMB staff told us that one of the main reasons for grant closeout delays involves the complexity of finalizing indirect cost rate agreements. OMB staff explained that while a majority of grantees have negotiated indirect cost rate agreements, the status of the agreement, whether it is a provisional or a final negotiated indirect cost rate, can affect when a grantee can close out. OMB staff said if a rate is not finalized and the award has reached the end date, closing out a grant using a provisional rate may involve subsequent complexities. Indirect cost rate agreements are also an issue for closing grants at the National Institute of Standards and Technology (NIST). 
According to NIST officials, grantees have to negotiate a final rate in order for NIST to properly close an award, and grantees have to wait for their next audit cycle to be able to negotiate a final rate, which could be over a year after the award has expired. Consequently, it may not be ideal to close out grants without a final rate. For example, if the final indirect cost rate after closeout is lower, the grantee owes money to the federal government, and the agency would have to recover the funds. However, other agencies we spoke with did not view indirect cost rate agreements as a major challenge that delays grant closeout. We and the federal inspectors general (IG) have continued to report on grant closeout issues and actions taken by federal agencies. These reports, in addition to our 2008 and 2012 reports, identified grant closeout issues at federal award-making agencies and reported on agency progress addressing concerns with grant closeout activities. For example, in 2014 we reported that the Department of State (State) generally did not adhere to its policies and procedures relating to documenting internal-control activities such as grant closeout activities. Through a grants file review, we found four grant awards that State grant officers had closed without evidence that they had reviewed the recipient’s final reports. We recommended that grant officials complete all required documentation for all grants. State concurred with the recommendation and said it would increase the emphasis on the file documentation and will expand the extent of file reviews. In another example, in 2014 we issued a follow-up report on the status of recommendations we made in 2012 to Justice for its Bureau of Justice Assistance to improve grantee accountability in the use of federal funds. We found that Justice took actions to address our 2012 recommendations, including implementing an annual process to review and deobligate all undisbursed grant funds.
Issues with final grantee reporting and awarding agency oversight of closeout procedures have also been cited in federal IG reports. Since 2012, IGs at the Departments of Agriculture, Commerce, Energy, Housing and Urban Development, Labor, and State, the United States Agency for International Development, and NASA have issued reports identifying grant closeout challenges experienced in their respective agencies. For example, the IGs at Agriculture and Energy found agency officials were waiting for final grantee reporting for expired grants but also noted that the agency failed to send a closeout letter as required by regulation and its standard operating procedures. Federal IGs have also reported on closeout issues related to oversight and internal controls. For example, an IG report from State cited significant deficiencies in the agency’s grant-management process including insufficient oversight caused by too few staff managing too many grants, insufficient training of grant officials, and inadequate documentation and closeout of grant activities. Two separate Agriculture IG reports highlight issues related to grant closeout oversight. A 2013 Agriculture IG report found one of its grant-making agencies had weaknesses in controls for deobligating grant funds remaining after projects were completed. In a 2014 report, the Agriculture IG found that another of its grant-making agencies lacked effective controls, including the lack of established time frames and milestones for completing closeout reviews. The Commerce IG found that one of its grant-making agencies had incomplete grant closeout procedures and another had incomplete standard operating procedures. The Housing and Urban Development IG found that one of its grant-making agencies lacked adequate controls over the closeout process.
This included written policies and procedures for management’s oversight to ensure that closeout data were consistently and accurately tracked and grants were closed in a timely manner. The NASA IG found NASA lacked a uniform closeout process and that the agency had not deobligated grant funds in a timely manner. In addition to untimely deobligation, the NASA IG found that the agency incurred unnecessary service fees associated with expired grants. The United States Agency for International Development IG found an agency grant program was not performing grant closeout procedures on schedule and that millions of dollars of grant funds sat idle while awaiting deobligation. The Labor IG reported that one of its grant-making agencies had delays in closing out expired grants because of agency resource constraints developed over several years. Agency officials told us grants management and payment systems with automated system closeout features provide agencies with useful methods to monitor, track, and close out expired grants. For example, HHS officials told us about an automated feature in PMS that had a positive impact in identifying late requests for payments for expired grants. HHS officials said the feature allows the system to flag payment requests for expired grants more than 90 days past the period of performance end date and require the awarding agency to approve the request. Justice officials told us its grant-awarding components use a combination of the grants management system (GMS) and financial systems to track undisbursed balances in expired grants. Justice officials told us that their systems, particularly GMS and the Community Oriented Policing Services (COPS) Office’s Enterprise Content Management System, helped Justice’s components reduce their closeout backlog. Justice officials cited a useful feature in both systems that includes an automatic notification to the grantee as the award approaches its period of performance end date.
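The PMS payment-flagging rule HHS officials described can be sketched as a simple date check. This is an illustration of the rule as reported, not PMS's actual implementation; the function and field names are hypothetical:

```python
# Sketch of the PMS safeguard: a payment request on an expired grant more
# than 90 days past its period of performance end date is flagged so the
# awarding agency must approve it. Names here are illustrative only.
from datetime import date, timedelta

def needs_agency_approval(request_date: date, performance_end: date) -> bool:
    """Flag payment requests made more than 90 days after the grant ended."""
    return request_date > performance_end + timedelta(days=90)

end = date(2015, 6, 30)
print(needs_agency_approval(date(2015, 9, 1), end))    # within 90 days: False
print(needs_agency_approval(date(2015, 10, 15), end))  # past 90 days: True
```

Automated checks like this are what the agencies credit with surfacing late drawdowns that would otherwise go unnoticed in a backlog of thousands of expired accounts.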
Commerce officials told us that the National Oceanic and Atmospheric Administration (NOAA) grants management system, Grants Online (GOL), offers automated features that assist agencies in managing grant closeout. For example, Commerce officials said GOL notifies the grantee of award expiration; due dates for final progress and financial reports; report overdue notices; and issues a copy of any enforcement actions. Commerce officials said GOL provides NOAA the ability to electronically review and complete deobligation memorandums, complete a closeout checklist, and view a workflow history that includes dates, times, and electronic signatures of progress and financial report submissions and approvals by grants managers. Commerce officials also noted that GOL provides the option to generate a report that tracks expired awards. NSF officials told us NSF implemented its new award payment management system, the Award Cash Management Service (ACMS), in 2013. NSF officials explained an advantage of using the new system is that ACMS provides award payment and expenditure detail upon each grantee drawdown of funds. NSF uses the payment and expenditure detail from ACMS to notify grantees that have grants with large undisbursed balances 3 months prior to the grants’ period of performance end date. NSF officials said this approach has improved the timeliness of no-cost extension requests. NASA officials said the contractor responsible for the agency’s award closeout process uses a database tracking tool to facilitate grant closeout and track undisbursed balances. NASA officials said the contractor prepares monthly reports for NASA including reports that may highlight reasons why expired grants remain open. This can help inform NASA’s analysis of undisbursed balances and efforts to close out expired grants. Selected agencies have developed and implemented various policies to manage the grant closeout process.
OMB’s Uniform Guidance clarified language on award closeout to help standardize federal agencies’ policies for the award closeout process. In their implementing regulations, some federal award-making agencies have provided additional language beyond the Uniform Guidance. This additional language provided more detail with respect to how these agencies intend to implement their award policies with regard to OMB’s new guidance. In addition to the additional guidance on federal award policies used by some agencies to complement the Uniform Guidance, agency officials told us about various internal policies and practices they implemented to manage the grant closeout process. Specifically, selected agencies incorporated expired grant review analyses and established goals to reduce the number of expired grants in their portfolios. For example, Justice officials told us that Office of Justice Programs (OJP) set an internal grant closeout goal to have no more than 10 percent or 250, whichever is greater, of all expired grants in a fiscal year extend 180 days past the period of performance end date. OJP officials told us they produce monthly and annual grant closeout analyses that track the number of expired grant awards and potential deobligation amounts. These reports compare awards past their period of performance end date by 120-179 days and 180 days or more and present the total number of expired grants that fall into these two categories. According to officials in Justice’s Community Oriented Policing Services (COPS) Office, they use a similar internal grant closeout goal to process at least 90 percent of all expired grants within 180 days of the grant performance end date. The office prepares an annual report to identify the total number of grants to be closed within 180 days of the grant end date. 
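OJP's internal goal described above is itself a simple quantitative rule, which can be expressed directly. The function name and inputs below are illustrative, not OJP's actual tooling:

```python
# Illustrative check of OJP's internal closeout goal: no more than
# 10 percent of all expired grants in a fiscal year, or 250 grants,
# whichever is greater, may extend 180 days past the period of
# performance end date. Names are hypothetical.
def ojp_goal_met(total_expired: int, past_180_days: int) -> bool:
    threshold = max(0.10 * total_expired, 250)
    return past_180_days <= threshold

print(ojp_goal_met(3_000, 290))  # 290 <= max(300, 250): goal met
print(ojp_goal_met(2_000, 260))  # 260 >  max(200, 250): goal missed
```

The "whichever is greater" floor of 250 matters for small portfolios: below 2,500 expired grants, the absolute cap, not the 10 percent share, is the binding limit.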
A quarterly analysis is completed to measure the number of expired grant awards that have completed the closeout process and the total grant funding that has been deobligated. Similarly, Commerce has established a Grants Council that serves as an informal working group that meets every 2 months to discuss intergovernmental grant policies and ways to improve Commerce’s grant business processes. Commerce officials told us that their bureaus provide the Grants Council information on their expired grants that are more than 180 days past the period of performance end date. At each Grants Council meeting, the council reviews closeout trends and any undisbursed balances in expired grants that are more than 180 days past the period of performance end date. Commerce officials said this process helps the council categorize the reasons why grants have not been closed. NSF officials also told us that they conduct an analysis at the end of each fiscal year to determine the number of awards which have expired and the amount of undisbursed balances remaining for each award. Compared to other agencies we contacted, NASA took a different approach to managing its grant closeout backlog. It uses a contractor to facilitate its award-closeout process and track undisbursed balances across the entire agency. NASA officials said the contractor coordinates with the NASA grants management office on all aspects of closeout. The contractor provides a monthly report to NASA’s Shared Service Center, which performs various transactional, administrative, and business functions including financial management for NASA grants, highlighting reasons grants remain open. According to NASA officials, using a contractor to facilitate the closeout process ensures consistency with award closeout across the agency. However, similar to other agencies, NASA conducts an analysis of expired grants. In addition, NASA’s Shared Service Center’s finance office conducts an analysis of undisbursed balances. 
NASA officials explained that since a 2014 IG review, the agency has evaluated and refined its award closeout process, reduced closeout duplication, and placed more emphasis on tracking expired grants. Officials from Commerce and Justice also told us their respective agencies have developed training guides for grantees or awarding agency grant managers. NOAA officials told us they conducted grants workshop training for grantees and provided training materials on grant closeout. For example, NOAA training materials provide information on grant closeout topics including Commerce closeout policies and regulations, responsibilities of the grantee and awarding agency, closeout checklists, definitions, and a list of closeout issues to raise grant managers’ awareness of potential challenges. Justice also placed an emphasis on developing closeout training for grantees and awarding agency grant managers. Officials from OJP, the Office on Violence Against Women, and the COPS Office, which represent all of Justice’s grant-making agencies, said their respective offices provide closeout training and financial management training for grantees. For example, the OJP training guide covers closeout topics such as the OJP closeout process and time frame, key grant closeout terms and definitions, responsibilities, and a step-by-step guide to assist the grantee in using the closeout module in the GMS to close the grant. By developing training and information resources for both grantees and awarding agencies, these agencies are using training as a method of informing all grant stakeholders of the importance of a timely grant closeout. OMB’s 2012 Controller Alert advised grant-making agencies to consider establishing policy and procedures for unilaterally closing out expired grants.
Selected agency officials also told us they have implemented various internal policies that address administrative obstacles which can cause grantees to fail to meet the terms of the agreement and delay grant closeout processing. For example, Commerce officials said they implemented a policy in 2013 that allowed the grants officer to carry out an administrative closeout of expired and unexpired awards when the recipient was no longer in existence or the recipient was unresponsive to attempts to make contact. OJP, COPS, and the Office on Violence Against Women use a similar policy to carry out an administrative closeout of grant awards in which the grantee was unable or unwilling to complete the requirements of the award. Under this policy, Justice officials said the closeout process was initiated by either the grant manager or automatically by GMS for an expired grant that reached 91 days past the grant period of performance end date. NASA has an administrative closeout policy that enables the grant officer to initiate a closeout within 270 days of the end of the period of performance if the recipient is not cooperating with the terms of the award. HHS officials explained that unilateral closeout exists as an option for closing awards, but it is used infrequently and judiciously. HHS officials noted that nuanced analysis and detailed communication should occur prior to exercising a unilateral closeout to avoid negatively impacting recipients that are cooperating. In situations where attempts to contact a recipient are unsuccessful, or when an agreement cannot be reached with a recipient, unilateral closeout is a tool of last resort when HHS operating divisions have not been successful in obtaining acceptable final reports from grant recipients. 
From fiscal year 2010 through 2016, the Appropriations Act for Commerce, Justice, Science and Related Agencies has required that OMB instruct any department, agency, or instrumentality of the United States receiving funds appropriated under the act to track undisbursed balances in expired grant accounts and include these balances in its annual performance plan and performance and accountability reports. In response to these laws, in 2010 and 2011 OMB issued instructions for tracking and reporting on undisbursed grant balances to these affected federal agencies. However, according to OMB staff and officials at the affected agencies we interviewed, OMB had not issued any instructions on reporting these balances since 2011, and OMB staff did not think that issuing additional instructions was necessary given the language in the selected agencies’ appropriations acts. OMB staff explained that they did not conduct a detailed review of agency reporting of undisbursed balances in expired grant accounts. Our review included four of the affected agencies under this appropriations act—Commerce, Justice, NASA, and NSF. While all four agencies demonstrated that they have internal policies in place to review expired grants and undisbursed balances, only Justice and NSF have continually reported these undisbursed balances in expired grant accounts. NASA stopped reporting these balances in its annual performance reporting after fiscal year 2011, and Commerce stopped reporting them after fiscal year 2012. Both cited a lack of guidance from OMB related to reporting undisbursed balances in expired grants as the reason they stopped reporting. Our previous work has found that reporting on the status of grant closeouts in annual performance reports can raise the visibility of untimely grant closeout within federal agencies. It can also lead to improvements in grant closeouts and reduce undisbursed balances in expired grant accounts.
These reports help the President, Congress, and the American people assess agencies’ accomplishments for each fiscal year by comparing agencies’ actual performance against their annual performance goals, summarizing the findings of program evaluations completed during the year, and describing the actions needed to address any unmet goals. For example, in accordance with the appropriations act, both Justice and NSF have been reporting undisbursed balances in expired grant accounts in their agency performance reporting since 2010. These reports clearly point to the number of expired grants and the amount of related undisbursed grant balances. Specifically, in fiscal year 2012, NSF reported $184.5 million in undisbursed balances in almost 8,000 expired grants. In fiscal year 2015, NSF reported an undisbursed balance of $72.3 million in approximately 4,400 expired grant accounts. However, as we noted above, NASA and Commerce have not been reporting these balances as required since fiscal years 2011 and 2012, respectively. As a result of OMB not recognizing the language in the appropriations acts as a requirement to instruct agencies to report these balances and NASA and Commerce not recognizing the language in the appropriations acts to report, the undisbursed balances in expired grant accounts for NASA and Commerce have not been reported since fiscal years 2011 and 2012, respectively. In July 2012, OMB issued a Controller Alert that instructed agencies to take appropriate action to close out grants in a timely manner. This alert asked all Chief Financial Officer Act agencies to consider the following: determine what closeout means for their programs; focus on closing out grants several years past their end date or having no remaining funds; establish policies and procedures to unilaterally close out grants; leverage internal control procedures outlined in OMB Circular No. 
A-123, Management’s Responsibility for Internal Control, to minimize risk associated with not closing out grants in a timely manner; and monitor closeout activity to track progress in reducing the closeout backlog. In December 2013, OMB consolidated its grants management circulars into a single document, OMB’s Uniform Guidance. This consolidated guidance included some changes and clarifications for federal awarding agencies and non-federal entities receiving grant funds on grant closeout time frames and final adjustments to grantee reimbursable expenses. The major change to grant closeout in the Uniform Guidance established a period of 1 year for federal awarding agencies to complete all closeout actions. It also clarifies that this closeout period begins after receipt and acceptance of all required final reports. While the 2012 Controller Alert and the Uniform Guidance take steps in the right direction to increase government-wide awareness related to timely grant closeout, these documents continue to lack instruction on tracking and reporting undisbursed balances in grant accounts eligible for closeout, as we recommended in 2008. At that time, in response to our recommendation, OMB stated that it did not believe that having agencies report on these balances in their performance reporting would reduce undisbursed balances in expired grant accounts. According to federal standards for internal control, information should be recorded and communicated to management and others within the entity who need it. It should also come in a form and within a time frame that enables the entity to carry out its internal control and other responsibilities. Encouraging effective external communication on undisbursed balances in expired grant accounts would serve to further transparency efforts and promote informed decision making. 
Grant closeout is an important final point of accountability for grantees that helps to ensure they have met all financial requirements and have provided final reports as required. Closing out grants also allows agencies to identify and redirect unused funds to other projects and priorities as authorized or to return unspent balances to the Treasury. We have previously reported that agencies can improve their grant closeout process when they direct their attention to the issue and make timely grant closeout a high priority. We have also stated that tracking undisbursed balances in expired grant accounts, including the status of grant closeouts on annual performance reports, could raise the visibility of the problem within individual agencies and across the federal government. However, our analysis of PMS data indicates that expired grant accounts, and, in some cases, the undisbursed balances associated with these expired accounts, persisted as an issue for agencies in 2015. The number of expired grant accounts decreased since we last reported these data in 2012. However, the amount of undisbursed balances in expired grant accounts has increased. HHS has not effectively used PMS data to help target agency efforts toward closing accounts that have the largest undisbursed balances and is missing an opportunity to help agencies significantly reduce undisbursed balances and deobligate funds from expired grant accounts, consistent with its grant closeout policies to close out grants within 270 days of their expiration. In January 2016, the GONE Act was signed into law. This act requires government-wide reporting of undisbursed balances in certain expired grants and provides a basis for government-wide reporting of these balances. In addition, OMB’s implementation of Section 537 of the Commerce, Justice, Science, and Related Agencies Appropriations Act of 2010 and subsequent annual legislation creates a framework for such reporting. 
However, affected agencies have varied in the reporting of these balances and have said they need clearer instructions from OMB on where and how to report these balances. Effective implementation of grant closeout requirements in the GONE Act and appropriations acts depends on clear instructions for where and how to report this information and on appropriate agency policies for timely closeout. Improved grant closeout processes can in turn allow federal agencies to make better use of their appropriated funds. 1. The Director of OMB resume instructing all affected and independent agencies receiving funds under the 2016 appropriations act to track and report undisbursed balances in expired grant accounts. 2. The Administrator of NASA resume reporting on the undisbursed balances in expired grant accounts in the agency’s annual performance reporting. 3. The Secretary of Commerce resume reporting on the undisbursed balances in expired grant accounts in the agency’s annual performance reporting. 4. The Secretary of HHS, to improve the enforcement of existing HHS Grant Policy Manual grant closeout guidance and reduce undisbursed balances in expired grant accounts, require all HHS grant-making operating divisions to take the following two actions: identify grants expired for more than 1 year past their period of performance end date and either close the expired grant in the Payment Management System or determine why these grants are not closed; and identify expired grant accounts designated for immediate closure by the Payment Management System and either close these expired grant accounts in PMS or explain why these grants are not closed. 
We provided a draft of this report to the Director of the Office of Management and Budget; the Administrator of the National Aeronautics and Space Administration; the Assistant Attorney General for Administration of the Department of Justice; the Office of the Director of the National Science Foundation; and the Secretaries of the Departments of Commerce, Health and Human Services, and the Treasury. Commerce, HHS, NASA and OMB responded with written comments, which we have reprinted in appendixes III, IV, V, and VI, respectively. OMB concurred with the recommendation and said it will issue guidance to the heads of the Selected Agencies to report on balances in expired grant accounts in the agencies’ fiscal year 2016 performance and accountability reports. In addition, OMB stated it is currently analyzing the requirements of the GONE Act and will be issuing government-wide guidance on the reporting of the required information by July 2016. NASA partially concurred with GAO’s recommendation, saying it would resume reporting on undisbursed balances in expired grant accounts once guidance was provided by OMB. Commerce concurred with GAO’s recommendation. HHS concurred with GAO’s recommendations and said it will implement strategies, in accordance with the recommendations, to address and reduce grant closeout delays and further enhance compliance with HHS’ Grants Policy Administrative Manual. Staff from OMB, HHS, Justice, NSF, and Commerce also provided technical comments, which were incorporated as appropriate. NASA and Treasury had no technical comments. We are sending copies of this report to the heads of the Departments of Health and Human Services, Treasury, OMB, Commerce, Justice, NASA, and NSF, as well as interested congressional committees and other interested parties. This report will also be available at no charge on the GAO website at http://www.gao.gov. 
If you or your staff has any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix VII. This report examines (1) the extent to which there are undisbursed balances remaining in expired grant accounts in the federal government’s largest civilian payment management system—the Payment Management System (PMS) administered by the Department of Health and Human Services (HHS); (2) the reasons selected agencies give for their grant accounts remaining open past their end dates; and (3) the extent to which the Office of Management and Budget (OMB) monitors agency progress with regard to tracking and reporting on undisbursed balances in expired grant accounts. To address the first objective, we analyzed data from PMS administered by HHS’s Program Support Center (PSC). Federal payment systems facilitate the transfer of cash payments from federal awarding agencies to grantees. Some agencies make grant payments directly to grantees using their own proprietary payment systems. Others enter into arrangements with payment systems that serve multiple agencies to make payments on their behalf. PMS was selected based on the following criteria: PMS provides payment services to other federal departments and entities. In 2015, offices from 12 federal departments and agencies used PMS for making grant disbursements. PMS accounts for a significant percentage of civilian federal grant disbursements. Based on fiscal year 2014 data, the most recent available at the time of our selection, PMS made about $429 billion in grant disbursements, or 74 percent of all civilian federal grants disbursements in fiscal year 2014. PMS is a centralized grant payment and cash management system, operated by PSC in the Division of Payment Management (DPM) at HHS. 
According to DPM, the main purpose of PMS is to serve as the fiscal intermediary between awarding agencies and the recipients of grants and contracts. Its main objectives are to expedite the flow of cash between the federal government and recipients, transmit recipient disbursement data back to the awarding agencies, and manage cash flow advances to grant recipients. PSC personnel operate PMS, making payments to grant recipients, serving as user/recipient liaisons, and reporting disbursement data to awarding agencies. Awarding agencies’ responsibilities include entry of authorization data into PMS, program and grant monitoring, grant closeout, and reconciliation of their accounting records to PMS information. Awarding agencies pay PSC a service fee for maintaining accounts and executing payments through PMS. PMS continues to charge agency customers a servicing fee until a grant account is closed. To update our previous analysis of undisbursed balances in expired grant accounts and provide a degree of comparability, we replicated the methodology used in our 2008 and 2012 reports. To determine the amount of undisbursed balances in expired grant accounts, we analyzed PMS data from closeout reports PSC makes available to PMS customers each quarter. These closeout reports list all expired grant accounts that, according to the data system, have not completed all of their closeout procedures. A grant account is considered expired in PMS if (1) the grant end date is more than 3 months old, and (2) the latest date of disbursement was at least 9 months old. PMS does not close a grant account until instructed to do so by the awarding agency. For each grant account, the report includes such information as the identification number, the amount of funding authorized for the grant, the amount disbursed, and the beginning and end dates for the grant. The grant end date is a mandatory field completed by the awarding agency. 
PSC provided us with the PMS quarterly closeout report for the end of fiscal year 2015 (September 30, 2015). PSC appended to the closeout data an additional field showing the applicable number from the Catalog of Federal Domestic Assistance (CFDA) for each grant account. We used the CFDA number provided by PSC to help determine which accounts to exclude from our analysis. The purpose of these exclusions was to avoid including accounts that would distort the calculation of undisbursed funds in expired PMS grant accounts and to provide comparability with our previous findings. Our criteria for excluding accounts were consistent with the methodology we used in our 2008 and 2012 reports. We excluded 127 programs from our review. We included programs that were grants or cooperative agreements; had a time limit for spending; had a zero or positive undisbursed balance; had a readily identifiable CFDA number and program description; and did not have special financial reporting procedures. For reporting purposes, we separated data into two sets of expired grant accounts: (1) one set consisted of expired accounts for which all of the funds made available had been disbursed and (2) a second set consisted of expired accounts with a positive undisbursed balance. To obtain an estimate of the total amount of fees paid for maintaining accounts with no undisbursed balances remaining, we requested data from PSC for all accounts that appear on the year-end fiscal year 2015 closeout report (i.e., as of September 30, 2015) with a unique accounting status symbol indicating that no undisbursed balances remained and that the awarding agency only needed to submit the final closeout code to PSC to finalize grant closeout. According to data provided by PSC, PMS users were charged a total of roughly $29,000 in September 2015 to maintain more than 5,900 expired grant accounts with no undisbursed balances remaining listed on the year-end closeout report. 
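The separation of expired accounts into the two reporting sets described above can be sketched as a simple partition over the closeout records. This is a minimal illustration of the methodology, not PSC's or GAO's actual code; the record layout (dictionaries with hypothetical `authorized` and `disbursed` fields) is assumed for the example.

```python
def partition_expired_accounts(accounts):
    """Split expired grant accounts into the two sets used for reporting:
    (1) fully disbursed accounts (no undisbursed balance, awaiting only
        the awarding agency's final closeout code), and
    (2) accounts with a positive undisbursed balance.
    Each account is a dict with hypothetical 'authorized' and 'disbursed'
    dollar amounts."""
    fully_disbursed, positive_balance = [], []
    for acct in accounts:
        undisbursed = acct["authorized"] - acct["disbursed"]
        if undisbursed > 0:
            positive_balance.append(acct)
        else:
            fully_disbursed.append(acct)
    return fully_disbursed, positive_balance
```

Accounts in the first set correspond to those flagged in PMS for immediate closure, for which agencies continue to incur servicing fees until the final closeout code is submitted.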
To test the reliability of PMS closeout data, we (1) reviewed existing documentation related to PMS, including the most recent audit of the design and operating effectiveness of the system’s controls, (2) interviewed officials responsible for administration of the database on data entry and editing procedures and the production of closeout reports, and (3) conducted electronic testing for obvious errors in completeness and accuracy. We discussed with HHS officials any known limitations associated with the data. According to HHS officials, no-cost extensions that extend the grant period without changing the authorized amount of funding may not be reflected in PMS data. As a result, PMS closeout reports may include grants that have received an extension and are therefore not eligible for closeout. No obvious errors in completeness and accuracy were identified during electronic testing. After conducting these assessment steps, we found that the PMS closeout data were sufficiently reliable for the purposes of this report. In our 2012 report, we included an analysis of data contained in “dormant account reports” that were provided to users of a second payment system, the Automated Standard Application for Payments (ASAP), administered by the Department of the Treasury (Treasury) and the Federal Reserve Bank. We were unable to include the results of a similar analysis of the data contained in the most recent dormant account report (dated October 2015) because the number of grant accounts identified was below the threshold at which we could report without potentially identifying the recipients. To address our second objective and identify why selected agencies have expired grants open past their end dates, we collected and reviewed audit reports from GAO and inspectors general (IG) at the 24 Chief Financial Officers Act (CFO Act) agencies that focused on undisbursed funds in expired grants accounts and grant closeout. 
We reviewed IG reports from the 24 CFO Act agencies issued from September 2011 to March 2015 in order to provide coverage of the major grant-making agencies and because this approach updated the review we performed as part of the work on our 2008 and 2012 reports, which included IG reports issued between 2000 and 2011. We also reviewed annual performance reports for all 24 agencies to determine how the major grant-making agencies were reporting on undisbursed balances in expired grant accounts. We also reviewed available data on grant closeout from selected agencies. We interviewed agency officials at selected grant-making agencies to determine reasons why grants remain open past their end dates and to determine to what extent there are promising practices related to grant closeout. Agencies selected for interviews reflect a mix of PMS customers, including one from within HHS that, according to a preliminary analysis of March 31, 2015, PMS data, had a large undisbursed balance. We also interviewed officials from the United States Department of Commerce, the United States Department of Justice, the National Aeronautics and Space Administration, and the National Science Foundation as agencies that have been instructed through their appropriations acts since 2010 to report on undisbursed balances in expired grant accounts. To address our third objective and understand the extent to which OMB monitors agency progress with regard to tracking and reporting progress on reducing undisbursed balances in expired grant accounts, we reviewed relevant OMB guidance, including: Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards (Uniform Guidance) and OMB Circular No. A-136, Financial Reporting Requirements. 
We reviewed two memorandums related to tracking undisbursed balances in expired grant accounts issued by OMB in 2010 and 2011 to selected agencies receiving funding under the Commerce, Justice, Science, and Related Agencies appropriations act, and the related appropriations acts from 2010 through 2016. We also reviewed an OMB 2012 Controller Alert focused on timely grant closeout. We interviewed OMB staff on providing government-wide guidance for grants management and grant closeout and officials at the four agencies that reported undisbursed balances in expired grant accounts in 2010 and 2011 annual performance reports—the United States Department of Commerce, the United States Department of Justice, the National Aeronautics and Space Administration, and the National Science Foundation—to discuss their implementation of OMB’s instructions. In addition to the contact named above, Thomas M. James (Assistant Director), Keith O’Brien (Analyst-in-Charge), Sara L. Daleski, Bertha Dong, and Michael Sweet made major contributions to this report. Other key contributors include Joy Booth, Michael Grogan, Kimberly McGatlin, Robert Robinson, and Cynthia Saunders. The Government Accountability Office, the audit, evaluation, and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. The fastest and easiest way to obtain copies of GAO documents at no cost is through GAO’s website (http://www.gao.gov). Each weekday afternoon, GAO posts on its website newly released reports, testimony, and correspondence. 
To have GAO e-mail you a list of newly posted products, go to http://www.gao.gov and select “E-mail Updates.” The price of each GAO publication reflects GAO’s actual cost of production and distribution and depends on the number of pages in the publication and whether the publication is printed in color or black and white. Pricing and ordering information is posted on GAO’s website, http://www.gao.gov/ordering.htm. Place orders by calling (202) 512-6000, toll free (866) 801-7077, or TDD (202) 512-2537. Orders may be paid for using American Express, Discover Card, MasterCard, Visa, check, or money order. Call for additional information. Connect with GAO on Facebook, Flickr, Twitter, and YouTube. Subscribe to our RSS Feeds or E-mail Updates. Listen to our Podcasts and read The Watchblog. Visit GAO on the web at www.gao.gov. Please Print on Recycled Paper.
In 2008 and 2012, GAO reported on hundreds of millions of dollars in undisbursed balances in expired grant accounts in the largest civilian payment system for grants, PMS. GAO was asked to update this work. GAO examined: (1) the extent to which there are undisbursed balances remaining in expired grant accounts in PMS; (2) the reasons selected agencies give for their grant accounts remaining open past their end dates; and (3) the extent to which OMB monitors agency progress with regard to tracking and reporting undisbursed balances in expired grant accounts. To do this, GAO analyzed PMS data as of September 30, 2015; reviewed audit reports issued by GAO and federal inspectors general, agency performance reports, and OMB guidance; and interviewed agency officials from OMB, HHS, Commerce, Justice, NASA and NSF. GAO found approximately $994 million in funding remained in expired grant accounts in the Payment Management System (PMS), operated by the Department of Health and Human Services (HHS), at the end of fiscal year 2015. PMS identifies expired grant accounts for users as those accounts more than 3 months past their grant end date that have not had payment activity for 9 months. PMS makes payments for 12 federal entities and about 77 percent of all federal civilian grant payments. GAO’s analysis of the September 30, 2015, PMS data indicated the following: The total undisbursed balance increased by approximately $200 million from what GAO reported for the end of fiscal year 2011. However, the number of expired grant accounts with undisbursed balances decreased to 8,832 in 2015 compared to 10,548 in 2011. More than half the accounts exceeded their expiration date by 1 to 3 years. A relatively small number of expired grant accounts represented more than half of the total undisbursed balance—151 grants represented $514.7 million in undisbursed balances. 
HHS grant accounts in PMS comprised approximately $651 million (66 percent) of the identified undisbursed balances in expired grant accounts. The number of expired grant accounts with no undisbursed balance that PMS flagged for closeout dropped to 5,906 from the more than 28,000 accounts GAO reported for the end of fiscal year 2011. HHS accounts made up 66 percent of the 5,906 expired accounts flagged for closeout, which is inconsistent with HHS policy. Agency officials told GAO that closeout delays can occur for a number of reasons, including grantee failure to submit final financial and performance reports and agency failure to review, process, and reconcile grantees’ final reporting in a timely manner. These agency officials included staff from HHS, the largest PMS user, and the Departments of Commerce (Commerce) and Justice (Justice), the National Aeronautics and Space Administration (NASA), and the National Science Foundation (NSF), which previously reported undisbursed balances for expired grants. These agencies have implemented various internal policies to elevate the issue of timely grant closeout, including developing internal working groups that set grant closeout goals and analyzing the number of expired grants not yet closed out. Since 2010, Congress has required the Office of Management and Budget (OMB) to specifically instruct Commerce, Justice, NASA, and NSF to track and provide information on balances in expired grant accounts. OMB has not issued this guidance since 2011, and NASA and Commerce have not reported on these balances since 2011 and 2012, respectively. OMB staff did not think that it was necessary to restate the requirements given the language in these agencies’ appropriations acts. OMB updated its guidance on grant closeout in December 2013. However, OMB did not specify procedures for tracking and reporting undisbursed balances in expired grant accounts. 
GAO recommends that OMB resume instructing agencies to report undisbursed balances for expired grant accounts as required under their respective appropriations acts and that NASA and Commerce resume reporting on these balances. GAO also recommends that HHS require its grant-making operating divisions to identify grants expired more than 1 year past their period of performance end date and close those grants or determine why they are not closed, consistent with agency policy. OMB and the agencies agreed with the recommendations.
ILOVEYOU is both a “virus” and a “worm.” Worms propagate themselves through networks; viruses destroy files and replicate themselves by manipulating files. The damage resulting from this particular hybrid, which includes overwhelmed e-mail systems and lost files, is limited to users of the Microsoft Windows operating system. ILOVEYOU typically comes in the form of an e-mail message from someone the recipient knows with an attachment called LOVE-LETTER-FOR-YOU.TXT.VBS. The attachment is a Visual Basic Script (VBS) file. As long as recipients do not run the attached file, their systems will not be affected and they need only delete the e-mail and its attachment. When opened and allowed to run, however, ILOVEYOU attempts to send copies of itself using Microsoft Outlook (an electronic mail software program) to all entries in all of the recipient’s address books. It attempts to infect the Internet Relay Chat (IRC) program so that the next time a user starts “chatting” on the Internet, the worm can spread to everyone who connects to the chat server. It searches for picture, video, and music files and attempts to overwrite or replace them with a copy of itself. In addition, the worm/virus further attempts to install a password-stealing program that would become active when the recipient opened Internet Explorer and rebooted the computer. However, Internet accounts set up to collect stolen passwords were reportedly disabled early in the attack. The worm/virus also appeared in different guises, labeled as “Mother’s Day,” “Joke,” and “Very Funny,” among others. These variants retriggered disruptions because they allowed the worm/virus to bypass filters set up earlier to block ILOVEYOU. At least 14 different versions of the virus have been identified, according to the Department of Defense’s (DOD) Joint Task Force-Computer Network Defense. 
One, with the subject header “VIRUS ALERT!!!”, was reportedly even more dangerous than the original because it was also able to overwrite system files critical to computing functions. The difference between ILOVEYOU and other recent viruses, such as the Melissa virus, which surfaced about this time last year, is the speed at which it spread. Soon after initial reports of the worm/virus surfaced in Asia on May 4, ILOVEYOU proliferated rapidly throughout the rest of the world. By 6 p.m. the same day, Carnegie Mellon’s CERT Coordination Center (CERT-CC) had received over 400 direct reports involving more than 420,000 Internet hosts. One reason ILOVEYOU multiplied much faster than Melissa was that it came during the work week, not the weekend. Moreover, ILOVEYOU sent itself to everyone on the recipient’s e-mail lists, rather than just the first 50 addressees as Melissa did. The following two figures provide a more detailed overview of the timelines associated with the introduction of the virus and the subsequent discovery and notification actions taken by various entities. In addition to hitting most federal agencies (discussed later in my statement), the worm/virus affected large corporations, such as AT&T, TWA, and Ford Motor Company; media outlets, such as the Washington Post, Dow Jones, and ABC News; state governments; school systems; and credit unions, among many others, forcing them to take their networks off-line for hours. Internationally, the virus affected businesses, organizations, and governments, including the International Monetary Fund, the British Parliament, Belgium’s banking system, and companies in the Baltics, Denmark, Italy, Germany, Norway, the Netherlands, Sweden, and Switzerland. The bottom line in terms of damage is still uncertain. Initial estimates of damage from the outbreak ranged from $100 million to over $10 billion globally. We do not have a basis for commenting on overall loss. 
While press reports are full of anecdotal accounts from disparate sectors of the economy, it is difficult to reliably and precisely estimate factors such as loss of productivity, lost opportunity costs, reductions in customer confidence, slowdown of technical staff, and loss of information. Furthermore, as with most security incidents, companies affected are not likely to fully disclose the true extent of their losses. Recognizing the increasing computer-based risks to our nation’s critical infrastructures, the federal government has taken steps over the past several years to create capabilities for effectively detecting, analyzing, and responding to cyber-based attacks. However, the events and responses spawned by ILOVEYOU demonstrate both the challenge of providing timely warnings against information-based threats and the increasing need for the development of national warning capabilities. The National Infrastructure Protection Center (NIPC), located in the Federal Bureau of Investigation, is responsible for serving as the focal point in the federal government for gathering information on threats as well as facilitating and coordinating the federal government’s response to incidents affecting key infrastructures. Presidential Decision Directive 63 (PDD 63), which was signed in May 1998, also specifically charged the NIPC with issuing attack warnings as well as alerts to increases in threat condition. This includes warnings to private sector entities. Developing the capability to provide early warning of imminent cyber-based threats is complex and challenging but absolutely essential to the assigned NIPC mission. Data on possible threats, ranging from viruses, to hoaxes, to random threats, to news events, and computer intrusions, must be continually collected and analyzed from a wide spectrum of globally distributed sources. 
Moreover, once an imminent threat is identified, appropriate warnings and response actions must be effectively coordinated among federal agencies, the private sector, state and local governments, and even other nations. It is important that this function be carried out as effectively, efficiently, and quickly as possible in order to ensure continuity of operations as well as minimize disruptions. To date, the NIPC has had some success in providing early warning about impending threats. For example, in December 1999, it posted warnings about a rash of denial-of-service attacks prominently on its website and it offered a tool that could be downloaded to scan for the presence of the denial-of-service code. Two months later, the attack arrived in full force, compromising the services of Yahoo, E-Bay, and other Internet companies. However, the NIPC had less success with the ILOVEYOU virus. As noted earlier (in figure 1), the NIPC first learned of the virus at 5:45 a.m. EDT from an industry source. Over the next 2 hours, the NIPC checked other sources in attempts to verify the initial information with limited success. According to NIPC officials, no information had been produced by intelligence, Defense, and law enforcement sources, and only one reference was located in open sources, such as Internet websites. The NIPC considers assessment of virus reports to be an important step before issuing an alert because most viruses turn out to be relatively harmless or are detected and defeated by existing antivirus software. According to the NIPC, the commercial antivirus community identifies about 20 to 30 new viruses every day, and more than 53,000 named viruses have been identified to date. At 7:40 a.m., two DOD sources notified the NIPC that the virus was spreading through the department’s computer systems, and the NIPC immediately notified the Federal Computer Incident Response Center (FedCIRC), at GSA, and CERT-CC. 
FedCIRC then undertook a rigorous effort to notify agency officials via fax and phone. For many agencies, this was too late. In fact, only 2 of the 20 agencies we spoke with reported that they first learned of the virus from FedCIRC. Twelve first found out from their own users, three from vendors, two from news reports, and one from colleagues in Europe. NIPC did not issue an alert about ILOVEYOU on its own web page until 11 a.m., May 4—hours after many federal agencies were reportedly hit. This notice was a brief advisory; the NIPC website did not offer advice on dealing with the virus until 10 p.m. that evening. For the most part, agencies themselves responded promptly and appropriately once they learned about the virus. In some cases, however, getting the word out was difficult. At DOD, for example, the lack of teleconferencing capability slowed the JTF-CND response because Defense components had to be called individually. At the Department of Commerce, cleanup and containment efforts were delayed because many of the technical support staff had not yet arrived at work when users began reporting the virus. The National Aeronautics and Space Administration (NASA) also had difficulty communicating warnings when e-mail services disappeared. And while backup communication mechanisms are in place, NASA officials told us that they are rarely tested. Justice officials similarly learned that the department needed better alternative methods for communicating when e-mail systems are down. Additionally, many agencies initially tried to filter out reception of the malicious “ILOVEYOU” messages. However, in doing so, some also filtered out e-mail alerts and communications regarding incident handling efforts that referred to the virus by name. Lastly, we found that the few federal components that either discovered or were alerted to the virus early did not effectively warn others. For example, Treasury told us that the U.S. 
Customs Service received an Air Force Computer Emergency Response Team (AFCERT) advisory early in the morning of May 4, but that Customs did not share this information with other Treasury bureaus. The lack of more effective early warning clearly affected most federal agencies. Only 7 of the 20 agencies we contacted were spared widespread infection, and this was largely because they relied on e-mail software other than Microsoft Outlook. Of the remaining agencies, the primary impact was e-mail disruption, which, in turn, slowed some agency operations and required agencies to divert technical staff toward stemming the virus’ spread and cleaning “infected” computers. Of course, if an agency’s business depends on e-mail for decision-making and service delivery, then the virus/worm probably had a significant impact on day-to-day operations in terms of lost productivity. While most agencies experienced disruptions of e-mail service for a day or less, eight agencies or agency components reported experiencing disruptions of longer than 1 day. I would like to offer some highlights of our discussions with officials at individual agencies since they further complete the picture of the response efforts and damage resulting from ILOVEYOU. The Department of Health and Human Services (HHS) was inundated with about 3 million malicious messages. Departmental components experienced disruptions in e-mail service ranging from a few hours to as many as 6 days, and departmentwide e-mail communication capability was not fully restored until May 9. An HHS official observed that “if a biological outbreak had occurred simultaneously with this ‘Love Bug’ infestation, the health and stability of the Nation would have been compromised with the lack of computer network communication.” At DOD, enormous efforts were expended containing and recovering from this virus. Military personnel from across the department were pulled from their primary responsibilities to assist. 
One DOD official noted that if such an attack were sustained over a substantial period of time, reservists would have to be called for additional support. Some DOD machines required complete software reloads to recover from the damage. At least 1,000 files at NASA were damaged. While some files were recovered from backup media, others were not. At the Department of Labor, recovery required over 1,600 employee hours and over 1,200 contractor hours. The Social Security Administration required 5 days to become fully functional and completely remove the virus from its systems. The Department of Energy experienced a slowdown in external e-mail traffic, but suffered no disruption of mission-critical systems. Ten to 20 percent of DOE’s machines nationwide required active cleanup. A vendor’s 7:46 a.m. EDT warning to the Federal Emergency Management Agency enabled officials there to mitigate damage by restricting the packet size allowed through its firewalls until the necessary virus prevention software could be upgraded. As of May 10, the Veterans Health Administration (VHA) had received 7,000,000 “ILOVEYOU” messages, compared to a total of 750,000 received during the Melissa virus episode. VHA spent about 240 man-hours recovering from the virus. The Department of Justice estimated spending 80 regular labor hours and 18 overtime hours for cleanup. Some of Treasury’s components required manual distribution of updated virus signature files because automated means for rollout of software updates were not in place. The Department of Agriculture could not obtain the updated antivirus product it needed until after 1 p.m., in part because it had to compete with all of the vendor’s other customers worldwide to obtain the updates.
Effective user awareness programs were cited at the Department of Commerce, Treasury’s Bureau of Public Debt, and the Department of Justice, where many infected messages were received but few were executed because users tended to be suspicious of unexpected and unusual e-mail messages and were not likely to open them. Mr. Chairman, in many respects the federal government has been lucky. Even though ILOVEYOU and Melissa were disruptive, key government services remained largely operational through the events. However, the potential for more catastrophic damage is significant. Official estimates show that over 100 countries already have or are developing computer attack capabilities. Hostile nations or terrorists could use cyber-based tools and techniques to disrupt military operations, communications networks, and other information systems or networks. The National Security Agency has acknowledged that potential adversaries are developing a body of knowledge about U.S. systems and about methods to attack these systems. According to Defense officials, these methods, which include sophisticated computer viruses and automated attack routines, allow adversaries to launch untraceable attacks from anywhere in the world. According to a leading security software designer, viruses in particular are becoming more dangerous to computer users. In 1993 only about 10 percent of known viruses were considered destructive, harming files and hard drives. But now about 35 percent are regarded as harmful. Such concerns highlight the need to improve the government’s capacity and capability for responding to virus attacks. Clearly, more needs to be done to enhance the government’s ability to collect, analyze, and distribute timely information that can be used by agencies to protect their critical information systems from possible attack. In the ILOVEYOU incident, NIPC and FedCIRC, despite their efforts, had only a limited impact on agencies being able to mitigate the attack. 
At the same time, agencies can also take actions that would improve their ability to combat future virus attacks. For example, they can act to increase user awareness and understanding regarding unusual and suspicious e-mail and other computer-related activities. In particular, agencies can teach computer users that e-mail attachments are not always what they seem and that they should be careful when opening them. Users should never open attachments whose filenames end in “.exe” unless they are sure they know what they are doing. Users should also know that they should never start a personal computer with an unscanned floppy disk or CD-ROM in the computer drive. Strengthening intrusion detection capabilities may also help. Clearly, it is difficult to sniff out a single virus attached to an incoming e-mail, but if 100 e-mails with the same configuration suddenly arrive, an alert should be sounded. Furthermore, agencies can clarify policies and procedures for reporting and responding to unusual events and conduct “dry runs” of these procedures. They can ensure that up-to-date virus detection software has been installed on their systems. They can establish effective alternative communication mechanisms to be used when e-mail systems are not operating properly. And they can participate in interagency efforts to prepare for and share information on cyber threats, such as those sponsored by FedCIRC. While such actions can go a long way toward helping agencies ward off future viruses, they will not result in fully effective and lasting improvements unless they are supported by strong security programs on the part of individual agencies and effective governmentwide mechanisms and requirements. As noted in previous testimonies and reports, almost every federal agency has poor computer security.
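The detection heuristic just described (a single malicious message is hard to spot, but a sudden burst of identical messages is a strong signal) can be sketched as a simple threshold alert. The 100-message threshold comes from the example above; using the subject line as the message “configuration” fingerprint, and the function names, are illustrative assumptions rather than any agency’s actual method.

```python
from collections import Counter

ALERT_THRESHOLD = 100  # illustrative; drawn from the 100-identical-e-mails example above


def suspicious_attachment(filename):
    """Flag executable attachments, mirroring the '.exe' guidance above."""
    return filename.lower().endswith(".exe")


def scan_inbound(messages, threshold=ALERT_THRESHOLD):
    """Return the subjects whose message count meets the alert threshold.

    `messages` is an iterable of (subject, attachment_filename) pairs;
    fingerprinting by subject line is an assumption for illustration.
    """
    counts = Counter(subject for subject, _ in messages)
    return {subject for subject, n in counts.items() if n >= threshold}
```

A real filter would of course cover more than the “.exe” extension; ILOVEYOU itself arrived as a “.vbs” attachment disguised with a double extension, which is one reason the awareness training discussed above matters as much as the filtering.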
Federal agencies are not only at risk from computer virus attacks, but are also at serious risk of having their key systems and information assets compromised or damaged from both computer hackers as well as unauthorized insiders. We have recommended that agencies address these concerns by managing security risks on an entitywide basis through a cycle of risk management activities that include assessing risks and determining protection needs, selecting and implementing cost-effective policies to meet those needs, promoting awareness of policies and controls, and implementing a program of routine tests and examinations for evaluating the effectiveness of these tools. At the governmentwide level, this involves conducting routine periodic independent audits of agency security programs; developing more prescriptive guidance regarding the level of protection that is appropriate for their systems; and strengthening central leadership and coordination of information security related activities across government. Mr. Chairman, this concludes my statement. The ILOVEYOU virus attack will not be our last incident. We hope it will provide an opportunity to examine our processes for developing threat assessments and providing warnings as well as an opportunity to examine our overall security posture. We performed our review from May 8 through May 17, 2000, in accordance with generally accepted government auditing standards. For information about this testimony, please contact Jack L. Brock, Jr., at (202) 512-6240. Jean Boltz, Cristina Chaplain, Nancy DeFrancesco, Mike Gilmore, Danielle Hollomon, Paul Nicholas, and Alicia Sommers made key contributions to this testimony. (511999)
Pursuant to a congressional request, GAO discussed the ILOVEYOU computer virus, focusing on measures that can be taken to mitigate the effects of future attacks. GAO noted that: (1) ILOVEYOU is both a virus and a worm; (2) worms propagate themselves through networks, and viruses destroy files and replicate themselves by manipulating files; (3) the damage resulting from this hybrid is limited to users of the Microsoft Windows operating system; (4) ILOVEYOU typically comes in the form of an electronic mail (e-mail) message from someone the recipient knows; (5) when opened and allowed to run, the virus attempts to send copies of itself to all entries in all of the recipient's address books; (6) soon after initial reports of the virus surfaced in Asia, the virus proliferated rapidly throughout the rest of the world; (7) recognizing the increasing computer-based risks to the nation's critical infrastructures, the federal government has taken steps over the past several years to create capabilities for effectively detecting, analyzing, and responding to cyber-based attacks; (8) however, the events and responses spawned by ILOVEYOU demonstrate both the challenge of providing timely warnings against information based threats and the increasing need for the development of national warning capabilities; (9) the National Infrastructure Protection Center (NIPC) is responsible for serving as the focal point in the federal government for gathering information on threats as well as facilitating and coordinating the federal government's response to incidents impacting key infrastructures; (10) once an imminent threat is identified, appropriate warnings and response actions must be effectively coordinated among federal agencies, the private sector, state and local governments, and other nations; (11) NIPC has had some success in providing early warnings on threats, but had less success with the ILOVEYOU virus; (12) for over 2 hours after NIPC first learned of the virus, it checked 
other sources in attempts to verify the initial information, with limited success; (13) NIPC did not issue an alert about ILOVEYOU on its own web page until hours after federal agencies were reportedly hit; (14) agencies themselves responded promptly and appropriately once they learned about the virus; (15) GAO found that the few federal components that either discovered or were alerted to the virus early did not effectively warn others; (16) to prevent future virus attacks, agencies can teach computer users that e-mail attachments are not always what they seem and that they should be careful when opening them; and (17) agencies can ensure that up-to-date virus detection software has been installed on their systems.
VA provides nursing home care for some veterans, as required, and makes these services available to other veterans on a discretionary basis, as resources permit. Specifically, VA is required by law to provide nursing home care to any veteran who needs it for a service-connected disability and to any veteran who needs it and has a service-connected disability rated at 70 percent or greater. However, VA provides most of its nursing home care to veterans on a discretionary basis, as resources permit. VA’s policy on nursing home eligibility requires that VA networks provide nursing home care to veterans with 60 percent service-connected disability ratings who are either unemployable or who have been determined by VA to be permanently and totally disabled. For all other veterans, VA’s policy is to provide nursing home care on a discretionary basis, with certain veterans having higher priority, including veterans who require care following a hospitalization. CLCs provide both short-stay (90 days or less) and long-stay (more than 90 days) services. According to VA data, almost 94 percent of the residents admitted to CLCs in fiscal year 2010 were short-stay. Short-stay care in CLCs includes skilled nursing care, rehabilitation, restorative care, maintenance care for those awaiting alternative placement, hospice, and respite care. The remaining admissions, about 6 percent, were long-stay. Long-stay care includes dementia care, maintenance care, and care for those with spinal cord injury and disorders. Responsibility for VA’s medical facilities, including CLCs, rests with both VA’s networks and VA headquarters. Almost all of VA’s 132 CLCs, located throughout VA’s 21 networks, are colocated with or in close proximity to a VA medical center (VAMC). While networks are charged with the day-to-day management of the VAMCs within their network, VA headquarters maintains responsibility for establishing national policy and overseeing both networks and VAMC operations. 
Within VA headquarters, Geriatrics and Extended Care is responsible for developing VA’s policies and other national actions related to the quality of care and quality of life in VA’s CLCs. The Office of the Deputy Under Secretary for Health for Operations and Management, through each network, ensures that VAMCs, including CLCs, comply with VA’s policies and implement other national actions. The LTCI contract, which began in September 2010, is for 1 year, and provides for LTCI to conduct reviews between September 2010 and August 2011. VA may exercise an option to renew for each of 4 additional years through August 2015. Officials from both Geriatrics and Extended Care and the Office of the Deputy Under Secretary for Health for Operations and Management share responsibility for administering VA’s contract with LTCI. LTCI uses the Centers for Medicare & Medicaid Services’ scope and severity scale for classifying nursing home deficiencies. There are four severity classifications, with the least serious deficiencies rated as having the potential for minimal harm and the most serious deficiencies rated as immediate jeopardy situations—in which residents are potentially or actually at risk of dying or being seriously injured. The remaining two severity classifications are actual harm and potential for more than minimal harm. The scope of deficiencies—or the number of residents potentially or actually affected by the deficient care—may be rated as isolated, pattern, or widespread. VA policy requires that all VAMCs be accredited by The Joint Commission. As part of the accreditation process for a VAMC, which occurs on average every 3 years, The Joint Commission surveys and accredits any CLC associated with the VAMC. VA requires CLCs to meet The Joint Commission long-term care standards. CLCs are also subject to periodic reviews by VA’s OIG. VA headquarters established a process for responding to deficiencies identified at CLCs during the 2007 and 2008 reviews. 
This process, which requires CLCs to submit corrective action plans addressing LTCI-identified deficiencies—such as how CLCs will address a lack of competent nursing staff and a failure to provide a sanitary and safe living environment—is also being used during the 2010 and 2011 LTCI reviews. However, because of weaknesses in the process, VA headquarters cannot provide reasonable assurance that deficiencies that could potentially affect the quality of care and quality of life of residents are resolved. VA headquarters established a process for responding to LTCI-identified deficiencies that requires each CLC to develop a corrective action plan addressing all deficiencies identified and submit it to VA headquarters within 30 days of receiving an LTCI report. The plans may include actions such as training CLC staff on clinical policies and procedures or implementing nursing and interdisciplinary rounds to monitor the clinical issues related to the deficiencies. VA headquarters officials review each corrective action plan to determine whether the actions can be expected to correct all identified deficiencies and whether the time frames for completing the actions are reasonable. The officials then provide each CLC feedback by telephone, discussing any revisions to the corrective action plans that may be necessary. The officials document these discussions using hand-written notes on hard copies of CLCs’ corrective action plans, which are not shared with VA networks and CLCs. VA headquarters officials told us they may schedule additional telephone calls with CLCs when significant revision of a corrective action plan is necessary or if the officials want an update on the implementation of the plan. For deficiencies identified in the 2007 and 2008 LTCI reviews, the documentation showed that officials had at least two telephone calls with 29 of the 116 CLCs reviewed. Three of these 29 CLCs received more than two follow-up calls.
When additional calls were made, VA headquarters required the CLCs to submit an updated corrective action plan. While VA’s process requires that all deficiencies identified be addressed, it gives priority to deficiencies at the immediate jeopardy or actual harm levels. When LTCI review teams identify such deficiencies during a survey, they are required to notify VA headquarters and the relevant VAMC. LTCI identified immediate jeopardy or actual harm deficiencies at 25 of the 116 CLCs (about 22 percent) reviewed in 2007 and 2008, and at 10 of the 67 CLCs (about 15 percent of the CLCs) reviewed in 2010 and 2011 as of March 31, 2011. After the 2007 and 2008 LTCI reviews, VA headquarters officials analyzed the deficiencies from the 116 reviews and from the analysis developed eight clinical high-risk categories. According to these officials, the eight categories, which included medication management, infection control, and peripherally inserted central catheter (PICC) lines, posed the greatest risk to residents’ health and safety. (See table 1.) The officials then implemented a national training and education initiative to address the eight categories. VA headquarters convened a workgroup that developed national training guidelines and checklists for evaluating CLC staff competencies in each of the eight categories. The workgroup included representatives from Geriatrics and Extended Care, the Office of Nursing Services, Nutrition and Food Services, and the Infectious Diseases Program Office. A VA headquarters official told us that the workgroup included the last three offices because the majority of LTCI-identified deficiencies were related to nursing, nutrition, and infection control issues. VA headquarters provided the VA networks and CLCs with the national guidelines and checklists and required CLCs to incorporate them into their training and education policies. 
VA headquarters required CLCs to report whether they had met the following four requirements for each of the eight clinical high-risk categories: (1) establish CLC policies, (2) adopt procedures for implementing the policies, (3) design an assessment to observe staff proficiency in providing care matching the established procedure, and (4) establish a plan for ongoing training and assessment of staff, including new staff. In addition, CLCs were required to directly observe staff providing care to CLC residents and report the percentage of staff that had been observed as being proficient in the procedures necessary to comply with CLCs’ policies for each of the eight clinical high-risk categories. If CLCs did not meet all four requirements for each category or had observed less than 90 percent of their staff as proficient in providing care in any one of the clinical high-risk categories, they were to develop and submit corrective action plans to VA headquarters. According to the documentation we reviewed, in most categories, the majority of CLCs indicated that they had met the requirements of the national training and education initiative. However, in every category there were CLCs that did not meet these requirements and had to submit a corrective action plan. For example, for the medication management clinical high-risk category, 14 of the 132 CLCs submitted a corrective action plan because they either were not in compliance with the four requirements or had not observed at least 90 percent of their staff as being proficient in providing care. After LTCI’s 2010 and 2011 reviews of VA’s CLCs are complete, VA headquarters plans to analyze the deficiencies identified by LTCI. To facilitate the analysis, VA headquarters is working with LTCI to track and note trends with regard to deficiencies on a quarterly basis. LTCI provides quarterly reports to VA headquarters, which include data on which deficiencies are the most frequently identified nationally. 
For each CLC, these reports include data on the total number of deficiencies identified and the categories in which the identified deficiencies fall. VA headquarters officials expect that these quarterly reports will facilitate the identification of national areas for improvement as well as help them review CLCs’ performance on the LTCI reviews over time. When responding to LTCI-identified deficiencies, VA headquarters does not always maintain clear and complete documentation of the feedback it provides to CLCs regarding their corrective action plans. In addition, VA headquarters does not require VA networks to report on the status of CLCs’ implementation of their corrective action plans or to verify CLCs’ self-reported compliance with the requirements of the national training and education initiative. Without the ability to determine whether CLCs appropriately responded to feedback, fully implemented their corrective action plans from the 2007 and 2008 LTCI reviews, or fully complied with requirements of the national training and education initiative, and without the ability to determine the status of corrective action plans that CLCs are implementing during LTCI’s 2010 and 2011 reviews, VA headquarters does not have reasonable assurance that LTCI-identified deficiencies are resolved. Lack of clear and complete documentation of feedback. VA headquarters does not always maintain clear and complete documentation of the feedback it provides CLCs about their corrective action plans, which is not consistent with good management practices as outlined in federal internal control standards. According to these standards, internal control activities, such as VA headquarters’ feedback, should be clearly and completely documented in a manner that is accurate, timely, and helps provide reasonable assurance that program objectives are being achieved. 
VA headquarters uses an unsystematic approach for documenting the feedback it provides to CLCs regarding their corrective action plans. The approach relies solely on hard copies of CLCs’ action plans that have hand-written notes on them, which are not shared with the VA networks and CLCs, to document the feedback provided during VA headquarters’ telephone calls with CLCs. We found that this approach did not always result in clear—that is, understandable to anyone not involved in the telephone feedback calls—and complete documentation. In particular, the documentation we reviewed did not always clearly and completely indicate the specific feedback provided to CLCs, including actions VA headquarters advised CLCs to take to address weaknesses with their corrective action plans. For example, for one CLC we obtained two corrective action plans from VA headquarters. One was an older action plan and the other was a revised action plan. The older action plan contained no notes or any indication of the content of VA headquarters’ feedback that resulted in the revised action plan, so we were unable to independently determine whether the revised action plan addressed VA headquarters’ feedback. In addition, we found that 19 of the 50 corrective action plans from the 2007 and 2008 reviews that we examined—about 38 percent—lacked any notes documenting the feedback that VA headquarters gave CLCs on the telephone calls. Lack of reporting requirement for VA networks. VA headquarters does not require its networks to report on the status of CLCs’ implementation of their corrective action plans, and VA headquarters does not routinely schedule additional telephone calls with CLCs following the submission of initial corrective action plans and VA’s initial telephone calls.
For example, VA headquarters held additional telephone calls with only 25 percent of CLCs following the 2007 and 2008 LTCI reviews, and 15 percent of the CLCs following the 2010 and 2011 LTCI reviews, as of March 31, 2011. Therefore, VA headquarters does not know whether CLCs fully implemented their plans and corrected all LTCI-identified deficiencies. Federal standards for internal control state that the findings of reviews should be promptly resolved and that information on the status of the findings should be communicated to management so that management can provide reasonable assurance that a program is achieving its objectives—in this case, that CLCs are providing quality care and maintaining veterans’ quality of life. VA headquarters officials told us that beyond the initial telephone calls with CLCs, VA headquarters does not receive any additional information from CLCs regarding the implementation status of their corrective action plans. Rather, VA headquarters officials expect the findings of the 2010 and 2011 LTCI reviews will help them determine whether CLCs resolved all deficiencies identified by LTCI in 2007 and 2008—2 or 3 years after the deficiencies were first identified. Lack of verification requirement for national initiative. We found that VA headquarters relied on self-reported information from CLCs regarding (1) compliance with all four requirements for each of the eight clinical high-risk categories and (2) the percentage of staff that were observed to be proficient in treatments and procedures associated with the categories. VA headquarters did not specify to its networks that they should verify the accuracy of CLCs’ self-reported information. Reliance on self-reported information is inconsistent with federal standards for internal control specifying that management should be able to provide reasonable assurance about the accuracy of data—in this case, that VA networks verify the accuracy of CLCs’ self-reported information. 
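The self-reporting rule at issue here (a CLC must submit a corrective action plan unless it meets all four requirements and has observed at least 90 percent of its staff as proficient in every clinical high-risk category) can be expressed as a simple check. This is a minimal sketch; the data layout and function name are illustrative assumptions, not a description of VA’s actual reporting system.

```python
PROFICIENCY_THRESHOLD = 90.0  # percent, per the initiative described above


def needs_corrective_action_plan(category_report):
    """Return True if a CLC must submit a corrective action plan.

    `category_report` maps each clinical high-risk category (e.g., medication
    management) to a dict with:
      - 'requirements_met': whether all four requirements (policies, procedures,
        proficiency assessment design, ongoing training plan) are satisfied
      - 'pct_staff_proficient': percentage of staff observed as proficient
    """
    for status in category_report.values():
        if not status["requirements_met"]:
            return True
        if status["pct_staff_proficient"] < PROFICIENCY_THRESHOLD:
            return True
    return False
```

Because the inputs are self-reported, a check like this is only as reliable as the underlying data, which is exactly the verification gap described above.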
Although we cannot generalize to all networks, neither of the two VA networks we visited requested documentation to verify CLCs’ self-reported information for the national training and education initiative. Further, the 2010 and 2011 LTCI reviews indicate that some CLCs are not in compliance with the requirements for the eight clinical high-risk categories stemming from the 2007 and 2008 reviews. For example, a CLC reported to VA headquarters that by June 2009 it would have a policy in place for training and educating its staff on PICC lines—one of the eight clinical high-risk categories. However, when LTCI reviewed this CLC in 2010, it found that this CLC had failed to provide proper care and treatment when administering medication to a resident through a PICC line. When LTCI asked to see the CLC’s policy related to PICC lines, the CLC’s staff stated that the CLC did not have one. In addition to LTCI reviews, VA headquarters obtains information about CLCs from a variety of other sources that could be used to more comprehensively identify risks associated with the care and quality of life of CLC residents. VA headquarters does not analyze all of these sources, and for those sources it does analyze, VA evaluates each source in isolation without comparing the information it receives across all available sources to identify major or commonly cited risks and trends. As a result, VA headquarters’ current approach to identifying risks in CLCs may result in missed opportunities to detect patterns and trends in information about the quality of care and quality of life within a CLC or across many CLCs. Without considering information from all available sources and comparing it across different sources, VA headquarters cannot adequately identify and manage risks in CLCs. We found that VA headquarters receives information about the quality of care and quality of life in CLCs from at least nine different sources. 
The type of information VA headquarters receives from each of these sources, and how often the agency receives it, varies. The nine sources of information about CLCs are the following:

• LTCI. Conducts annual unannounced reviews that assess the extent to which CLCs follow 176 federal long-term care standards. LTCI review teams observe the delivery of care for a sample of residents in order to examine such areas as medication management, infection control practices, and respect for residents’ rights and dignity. LTCI provides VA headquarters a report of all deficiencies identified. VA headquarters then shares the report with the network and the reviewed CLC. The CLC is expected to correct identified deficiencies.

• The Joint Commission. Performs accreditation surveys every 3 years, on average, assessing CLCs’ compliance with 227 long-term care standards, such as infection control practices and resident assessments. When The Joint Commission surveyors find noncompliance, they determine whether a systemic problem exists by assessing the CLC’s established policies and processes. This determination is the basis for whether CLCs are found deficient in a long-term care standard. VA networks and CLCs receive survey reports from The Joint Commission, which identify specific deficiencies. CLCs are required to resolve the deficiencies within certain time frames in order to maintain accreditation.

• OIG. Performs its Combined Assessment Program reviews at VAMCs, including CLCs, about every 3 years. Under this program, OIG reviews selected VAMC activities, including CLC activities, to assess the effectiveness of patient care administration (the process of planning and delivering patient care) and quality management (the process of monitoring quality of care to identify and correct harmful and potentially harmful practices and conditions). CLCs typically are part of each Combined Assessment Program review. Upon completion of each review, OIG issues a report to VA headquarters, the network, and the VAMC, which identifies the VAMC’s deficiencies, including any deficiencies identified in the CLC. VA requires VAMCs, including CLCs, to fully resolve deficiencies within a year of the completion of a Combined Assessment Program review.

• VA Office of the Medical Inspector (OMI). Conducts investigations to determine the validity of allegations made by complainants regarding the care provided to veterans, including residents of CLCs. If an allegation is validated, the VAMC, including the CLC, is required to address any recommendations made by OMI.

• System-wide Ongoing Assessment and Review Strategy (SOARS). Performs reviews of VAMCs, including CLCs, every 3 years to evaluate readiness for some external and internal reviews, such as those by The Joint Commission and OIG. It is a consultative program within VA designed to identify programmatic weaknesses in VAMCs, including CLCs. SOARS teams issue reports to VA networks and VAMCs, including CLCs, with recommendations based on identified deficiencies, and VAMCs and CLCs are expected to implement the recommendations.

• Quality Measures and Quality Indicators. Report the percentage of residents in a CLC who have certain conditions, such as a pressure ulcer, or residents who are at risk for developing certain conditions, such as CLC residents who have limited mobility and are at risk of developing a pressure ulcer. CLCs periodically assess residents and enter information about their conditions into a database, which automatically calculates percentage scores for 24 categories of quality measures and quality indicators. Data are available on an ongoing basis.

• Artifacts of Culture Change Tool. Reports the extent to which CLCs provided resident-centered care. Using a standard self-assessment tool, CLCs score their own performance in certain areas, such as allowing residents to choose when they eat meals, bathe, and sleep. CLCs report their scores to VA headquarters every 6 months.

• Issue Briefs. Provide specific information to VA headquarters officials regarding unusual incidents, such as deaths, disasters, or anything else that happens at a VAMC, including a CLC, that might generate media interest or affect care.

• Complaints. Provide information from veterans or their representatives about the quality of care or the quality of life in VAMCs, including CLCs.

VA headquarters’ approach for identifying risks associated with the quality of care and quality of life of CLC residents is deficient in two respects—it does not comprehensively analyze information from all available sources, and it does not compare findings across these sources. Without analyzing information from all available sources and comparing the results, VA headquarters’ assessments of risks in CLCs are incomplete. According to federal internal control standards, management should assess the risks the agency may face from both external and internal sources. The standards state that a risk management process includes (1) comprehensively identifying risks associated with achieving an agency’s goals (for example, providing quality of care and quality of life in CLCs); (2) estimating the significance of the risks; and (3) determining actions to mitigate the risks, such as developing or clarifying policies or targeting reviews of noncompliant CLCs. VA headquarters’ current approach relies significantly on the analysis of findings from LTCI reviews of CLCs. VA headquarters also relies on analysis of the findings from The Joint Commission accreditation surveys and the Artifacts of Culture Change tool. (See app. I for a detailed description of these analyses.)
While these three separate analyses enable VA headquarters to identify trends in each source of information, such as the most frequently cited deficiencies across all CLCs or the average number of deficiencies per CLC, they do not provide a complete assessment of the risks that would be identified by evaluating all nine sources. Information VA headquarters receives about the quality of care and the quality of life in CLCs from the remaining six sources—OIG, OMI, SOARS, quality measures and quality indicators, issue briefs, and complaints—could also be valuable in identifying patterns in CLC-related findings. VA headquarters officials we interviewed said they do not typically analyze information they receive about CLCs from these six sources because they do not always believe that doing so would be valuable for identifying trends and patterns regarding the quality of care and quality of life in CLCs. For example, VA headquarters officials said that they do not extract CLC-related findings from OIG Combined Assessment Program reviews because the reviews typically do not include enough CLC-related findings to warrant analysis. However, when we analyzed findings from the 77 OIG Combined Assessment Program reviews that were completed at VAMCs that have CLCs between October 1, 2009, and June 20, 2011, we found that 49 of the reviews—or about 64 percent—included at least one finding related to the quality of care or quality of life in a CLC. Without analyzing information from all available sources about the quality of care and quality of life in CLCs, VA headquarters’ assessments of risks in CLCs are incomplete. VA headquarters does not compare information across all sources to identify patterns of findings for an individual CLC, CLCs within a network, or all CLCs nationwide. Rather, VA headquarters analyzes the findings from three sources separately to identify trends in the findings. 
However, it does not compare the findings from one source to the findings from the other sources. One source’s findings, in isolation, may not convey the significance of certain risks, especially those that may suggest immediate risks for residents within a given CLC or across all CLCs. If, however, related information that VA headquarters receives were compared across different sources concurrently, VA headquarters officials would be better positioned to recognize the risks to CLC residents. Pain management is one example we identified of the benefit of considering multiple information sources together. In this regard, we found that in fiscal years 2009 and 2010, VA headquarters’ quality indicator and quality measure data showed that about 25 percent of all long-stay CLC residents and 40 percent of all short-stay CLC residents experienced moderate to severe pain. In June 2007, OMI investigated allegations about the quality of care for a resident at one CLC and found, among other things, that the CLC had failed to adequately manage the resident’s pain. Three months later, in September 2007, LTCI conducted a review of the same CLC and found that staff were not performing assessments after administering pain medications to determine whether the medication had been effective. In November 2009, the OIG visited the same CLC as part of a Combined Assessment Program review and found that staff had not documented pain medication effectiveness within the required time frames nearly two-thirds of the time that pain medications were administered.
If VA had comprehensively analyzed OMI information—which it does not analyze—along with LTCI information that was available in 2007 and compared this information with the information from the 2009 OIG review and quality indicator and quality measure data, VA headquarters would have been better informed about the significance of the risks and what actions might have helped to mitigate the risks of pain medication management problems at this CLC. The 46,000 elderly and disabled veterans annually who are residents in VA’s CLCs depend on VA to provide them with quality care and maintain their quality of life. The weaknesses in VA headquarters’ process for resolving LTCI-identified deficiencies put veterans at risk of persistent deficiencies that could become more serious over time. VA headquarters officials told us that they intend to use the findings of the 2010 and 2011 LTCI reviews to determine whether deficiencies that were first identified by LTCI 2 to 3 years earlier have been resolved. However, VA headquarters cannot provide reasonable assurance of resolution of deficiencies because it does not (1) clearly document the feedback that it provides to CLCs about corrective action plans for LTCI-identified deficiencies, (2) require VA networks to report on the status of CLCs’ implementation of action plans, and (3) verify CLCs’ self-reported information about their implementation of the requirements of the national training and education initiative. Unaddressed, these weaknesses in VA headquarters’ process for responding to LTCI-identified deficiencies may compromise the quality of care and quality of life of veterans in CLCs. Even though VA headquarters receives information about the quality of care and quality of life in CLCs from LTCI and a variety of other sources, the agency does not comprehensively analyze all available information to identify and manage risks in CLCs. 
Because VA headquarters does not analyze information from all available sources, it may be missing opportunities to detect trends and patterns in findings from different information sources for a CLC, CLCs within a network, or all CLCs. Without comprehensively analyzing information from all available sources, VA headquarters cannot fully identify risks in CLCs, estimate the significance of the risks, or take actions to mitigate them. To provide reasonable assurance that LTCI-identified deficiencies are resolved and that veterans receive quality care and maintain their quality of life in VA CLCs, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following two actions:

• For reviews conducted by LTCI under the current contract and any similar future contracts, (1) clearly and completely document the feedback provided to CLCs about their corrective action plans, (2) require VA networks to provide periodic reports on the status of CLCs’ implementation of their corrective action plans, and (3) develop and implement a process for verifying any information reported directly to VA headquarters by CLCs.

• Develop and implement a process to comprehensively identify, estimate, and mitigate risks in CLCs by analyzing and comparing all available information regarding the quality of care and quality of life in CLCs.

In its comments on a draft of this report, VA concurred with our recommendations and described the department’s planned actions to implement them. VA did not provide technical comments on the draft report. VA’s comments are included in appendix II.
To address our recommendation that, for reviews conducted by LTCI, VA headquarters should document the feedback provided to CLCs about their corrective action plans, require VA networks to report periodically on the status of CLCs’ implementation of corrective action plans, and implement a process for verifying information CLCs report directly to VA headquarters, VA stated that it plans to develop and implement a national feedback process by the end of the second quarter of fiscal year 2012 as part of its response to results from the LTCI reviews. VA stated that the process will include having VA networks work with VAMC leadership to develop a comprehensive action plan to address areas of concern highlighted in the LTCI reviews, using a standardized template for CLCs’ corrective action plans, and requiring VAMCs to post corrective action plans on a secure database and provide updated corrective action plans at least monthly. VA indicated that the process will provide access to the status of action plans at any time and that officials from VA headquarters will provide oversight to ensure completion of action plans, including requiring VA networks to validate completion of all action items. VA, however, did not specify in its comments whether its process would include a step to document the feedback provided to CLCs about their corrective action plans. We believe it is important for VA to document feedback provided to CLCs as part of its process. To address our recommendation that VA headquarters develop and implement a process to comprehensively identify, estimate, and mitigate risks in CLCs by analyzing and comparing all available information regarding quality of care and quality of life, VA stated that it plans to design a process that will use all available information about the quality of care and quality of life in CLCs.
VA indicated that this process would allow officials to analyze and compare information for individual CLCs, for CLCs within a VA network, and across all CLCs nationwide. VA intends to design this process during the first quarter of fiscal year 2012 and plans to use the process to analyze and compare CLC information and begin reporting it during the second quarter of fiscal year 2012. We commend this effort and encourage VA to proceed with these plans. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Veterans Affairs, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Description of VA headquarters analysis:

• Identify the most frequently cited deficiencies nationally.

• Identify the total number of deficiencies per community living center (CLC).

• Classify deficiencies identified in each CLC into 1 of 17 different groups (e.g., activities, environment, infection control, medication, etc.). Use these groups to track trends in deficiencies by VA network and by CLC.

• Determine whether each CLC was substantially compliant with federal long-term care standards.

• Identify the most frequently cited findings for two areas:
1. Direct impact: includes findings that are likely to present an immediate risk to residents’ safety or quality of care; for example, resident assessment and pain management.
2. Indirect impact: includes findings that pose less immediate risk to residents’ safety or quality of life, but could become more serious over time; for example, care planning and ensuring that corridors, hallways, and doors remain free from obstructions that would prevent exit in the event of a fire.

• Calculate the average number of findings per CLC.

• Calculate average performance on 30 measures and indicators, by VA network and nationally; for example, the percentage of long-stay residents who have experienced moderate to severe pain.

• Calculate average scores, by VA network and nationally, for areas such as care practices (e.g., allowing residents to choose when they eat, bathe, and sleep) and leadership (e.g., holding regular community meetings that encourage the participation of staff, residents, and families).

In addition to the contact named above, Mary Ann Curran, Assistant Director; Stella Chiang; Julie Flowers; Alison Goetsch; Aaron Holling; Alexis MacDonald; Elizabeth Morrison; and Lisa Motley were major contributors to this report.

VA Long-Term Care: Trends and Planning Challenges in Providing Nursing Home Care to Veterans. GAO-06-333T. Washington, D.C.: January 9, 2006.

VA Long-Term Care: Oversight of Nursing Home Program Impeded by Data Gaps. GAO-05-65. Washington, D.C.: November 10, 2004.
The Department of Veterans Affairs (VA) annually provides care to more than 46,000 elderly and disabled veterans in 132 VA-operated nursing homes, called community living centers (CLC). After media reports of problems with the care delivered to veterans in CLCs, VA contracted with the Long Term Care Institute, Inc. (LTCI), a nonprofit organization that surveys nursing homes, to conduct in-depth reviews of CLCs in 2007-2008 and again in 2010-2011. GAO was asked to evaluate VA's approach to managing veterans' quality of care and quality of life in CLCs. This report examines (1) VA's response to and resolution of LTCI-identified deficiencies and (2) information VA collects about the quality of care and quality of life in CLCs and how VA uses it to identify and manage risks. To do this work, GAO interviewed officials from VA headquarters, examined all 116 2007-2008 and 67 2010-2011 LTCI reviews, and analyzed 50 CLCs' corrective action plans for 2007-2008 and 23 such plans for 2010-2011. VA headquarters established a process for responding to deficiencies identified at CLCs during the 2007 and 2008 LTCI reviews. VA is using the process, which requires CLCs to submit corrective action plans addressing LTCI-identified deficiencies--such as how CLCs will address a lack of competent nursing staff and a failure to provide a sanitary and safe living environment--during the 2010 and 2011 LTCI reviews. On the basis of its analysis of the deficiencies identified in 2007 and 2008, VA headquarters also developed a national training and education initiative. VA headquarters officials told GAO that they plan to analyze the deficiencies identified during the 2010 and 2011 reviews and identify national areas for improvement. However, GAO found weaknesses in VA's process for responding to and resolving LTCI-identified deficiencies. First, VA headquarters does not maintain clear and complete documentation of the feedback it provides to CLCs regarding their corrective action plans. 
Second, VA headquarters does not require VA's networks, which oversee the operations of VA medical facilities, including CLCs, to report on the status of CLCs' implementation of corrective action plans or to verify CLCs' self-reported compliance with the requirements of the national training and education initiative. Because of these weaknesses, VA headquarters cannot provide reasonable assurance that LTCI-identified deficiencies are resolved. For example, without requiring networks to report on the status of CLCs' implementation of their corrective action plans, VA headquarters cannot determine whether CLCs' corrective action plans are fully implemented. Unaddressed, weaknesses in VA headquarters' process for responding to LTCI-identified deficiencies may compromise the quality of care and quality of life of veterans in CLCs. VA headquarters' current approach to identifying risks associated with the quality of care and quality of life of CLC residents does not comprehensively analyze information from all available sources, and for the sources VA does analyze, it does not compare findings across sources. VA's approach relies significantly on the analysis of findings from LTCI reviews of CLCs. However, in addition to LTCI reviews, VA headquarters obtains information about CLCs from a variety of other sources, such as VA's Office of Inspector General (OIG), but does not analyze the information from all these other sources. Further, for the sources it does analyze, VA headquarters evaluates each source in isolation and does not compare the findings from one source with findings from the other sources. Therefore, VA headquarters' current approach to identifying risks in CLCs may result in missed opportunities to detect patterns and trends in information about the quality of care and quality of life within a CLC or across many CLCs. 
For example, in comparing findings from VA's Office of the Medical Inspector, OIG, LTCI, and VA's quality indicator and quality measure data for one CLC, GAO found a pattern of deficiencies related to pain management. Without considering information from all available sources and comparing it across sources, VA headquarters cannot fully identify risks in CLCs, estimate the significance of the risks, or take actions to mitigate them. GAO recommends that VA document feedback to CLCs and require periodic status reports about corrective action plan implementation, and implement a process to comprehensively identify and manage risks to residents in CLCs by analyzing and comparing information about residents' quality of care and quality of life. In its comments on a draft of this report, VA concurred with these recommendations.
Medicare provides health insurance for about 37 million elderly and disabled individuals. This insurance is available in two parts: Part A covers inpatient hospital care and is financed exclusively from a payroll tax. Part B coverage includes physician services, outpatient hospital services, and durable medical equipment. Part B services are financed from beneficiary premiums and from general revenues. The Social Security Act requires that Medicare pay only for services that are reasonable and necessary for the diagnosis and treatment of a medical condition. HCFA contracts with private insurers such as Blue Cross and Blue Shield plans, Aetna, and CIGNA insurance companies to process Medicare claims and determine whether the services are reasonable and necessary. The program was designed this way in part to protect against undue government interference in medical practice. Thus, despite Medicare’s image as a national program, each of the 29 Medicare contractors that process part B claims for physicians’ services generally establishes its own medical necessity criteria for deciding when a service is reasonable and necessary. Contractors do not review each of the millions of Medicare claims they process each year to determine if the services are medically necessary. Instead, contractors review a small percentage of claims, trying to focus on medical procedures they consider at high risk for excessive use. Contractor budgets limit the number of claims contractors can review, and over the last several years, both contractor budgets and HCFA requirements for prepayment review have been decreasing. In 1991, HCFA required contractors to review 15 percent of all claims before payment, while in 1995, contractors are only required to review 4.6 percent. Since 1993, HCFA has required contractors to use a process called focused medical review (FMR) to help them decide which claims to review.
Under the FMR process, each contractor analyzes its claims to identify procedures where local use is aberrant from the national average use. Beginning in fiscal year 1995, HCFA has required each contractor to select at least 10 aberrant procedures identified through FMR and develop medical policies for those procedures. The contractors are required to work with their local physician community to define appropriate medical necessity criteria. This arrangement allows contractors to take local medical practices into consideration when establishing criteria for reviewing claims. Once physicians have had an opportunity to comment on a medical policy, the contractor publishes the final criteria. Each contractor generally decides which medical procedures to target for review and what types of corrective actions to implement to prevent payments for unnecessary services. Contractors currently concentrate on educating physicians about local medical policies, hoping to decrease the number of claims submitted that do not meet the published medical necessity criteria. Contractors also use computerized prepayment reviews, called screens, to check claims against the medical necessity criteria in medical policies. When screens identify claims that do not meet the criteria, two alternative actions are possible: first, autoadjudicated screens may deny the claim automatically; second, all other screens may suspend the claim for review by claims examiners, who may request additional documentation from the physician before deciding to pay or deny the claim. Autoadjudicated screens usually compare the diagnosis on the claim with the acceptable diagnostic conditions specified in the corresponding medical policy. For example, an autoadjudicated screen for a chest X ray would pay the claim if the patient diagnosis was pneumonia but deny the claim if the only patient diagnosis was a sprained ankle. 
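The two-tier screening logic described above can be sketched in a few lines. This is an illustrative sketch only; the policy sets and diagnosis labels below are hypothetical stand-ins, not actual Medicare medical necessity criteria or diagnosis codes.

```python
# Hypothetical autoadjudicated screen: pay, deny, or suspend a claim based on
# the diagnoses it carries. All diagnosis labels are illustrative only.

# Diagnoses the (hypothetical) chest X ray policy accepts outright.
ACCEPTED = {"pneumonia", "chronic_bronchitis", "congestive_heart_failure"}
# Diagnoses that meet basic criteria but warrant manual review of documentation.
REVIEW = {"chest_pain_unspecified"}

def screen_claim(claim_diagnoses):
    """Return the disposition of a claim under the two-tier screen."""
    diagnoses = set(claim_diagnoses)
    if diagnoses & ACCEPTED:
        return "pay"
    if diagnoses & REVIEW:
        return "suspend"  # claims examiner requests additional documentation
    return "deny"         # automatic denial; provider may resubmit or appeal

print(screen_claim(["pneumonia"]))       # pay
print(screen_claim(["sprained_ankle"]))  # deny
```

Because the first and third branches require no medical judgment at claim time, they can be fully automated and applied to every claim for the procedure, which is what makes autoadjudicated screens inexpensive relative to manual review.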
Because this type of screen is entirely automated, it can be applied to all the claims for a specific procedure at a lesser cost than reviewing claims manually. This type of screen is most effective for denying claims that do not meet some basic set of medical necessity criteria. Claims denied by these screens can be resubmitted by providers or appealed. As shown in figure 1, claims that pass these basic criteria may be further screened against more complex medical criteria to identify claims that warrant manual review. Most of the contractors we surveyed routinely pay claims for procedures suspected to be widely overused without first screening those claims against medical necessity criteria. We looked at six groups of procedures that providers frequently perform on patients who lack medical symptoms appropriate for the procedures. These procedures also rank among the 200 most costly services in terms of total Medicare payments and accounted for almost $3 billion in Medicare payments in 1994. (See table 1 below.) Four of the procedures—echocardiography, eye examinations, chest X rays, and duplex scans of extracranial arteries—are noninvasive diagnostic tests. Colonoscopy can be either diagnostic or therapeutic, and YAG laser surgery is sometimes used to correct cloudy vision following cataract surgery. In the first quarter of fiscal year 1995 (Oct. 1-Dec. 31, 1994), we surveyed 17 contractors to determine whether they were using any type of medical necessity prepayment screens to review claims for these six groups of procedures. As shown in table 2, the use of prepayment screens among the contractors was not uniform, and for each of the six procedures fewer than half the 17 contractors were using such screens. 
For each group of procedures in our study, we found the following:

• Only 7 of the 17 contractors we surveyed had prepayment screens to review echocardiography for medical necessity, even though echocardiography is often performed on patients with no specific cardiovascular disorders. Ten contractors lacked such screens, even though echocardiography is the most costly diagnostic test in terms of total Medicare payments and despite an increase of over 50 percent in the use of the echocardiography procedures listed in table 1 between 1992 and 1994.

• Only 6 of the 17 contractors used prepayment screens to prevent payment for medically unnecessary eye examinations. These contractors have medical necessity criteria to deny claims for routine eye examinations and to allow payments only for certain conditions, such as cataracts, diabetes, and hypertension.

• Only 6 of the 17 contractors had prepayment screens to review chest X ray claims for medical necessity, although HCFA had alerted Medicare contractors that providers frequently bill for chest X rays that are not warranted by medical symptoms and are thus medically unnecessary.

• Only 6 of the 17 contractors had medical necessity prepayment screens to review colonoscopy claims. In 1991, HHS’ OIG reported that nationwide almost 8 percent of colonoscopies paid by Medicare were not indicated by diagnosis or medical documentation.

• Only 3 of the 17 contractors had prepayment screens for YAG laser surgery even though federal guidelines exist that indicate the diagnostic conditions for performing this surgery. Also, at a national meeting of Medicare contractors in 1994, HCFA officials discussed the need to avoid paying for unnecessary YAG laser surgery following cataract removal.

• Only 8 of the 17 contractors had implemented prepayment screens for duplex scans even though HCFA had alerted Medicare contractors that providers commonly bill for noninvasive vascular tests such as duplex scans without adequately documenting the patient’s medical symptoms.

A primary reason all contractors do not screen claims for nationally overused procedures is that, following HCFA’s instructions for FMR, contractors have been targeting procedures that are overused locally, based on comparisons with national average use. The shortcomings of this approach are discussed later in this report. Our survey of the 17 contractors represents a snapshot of the use of prepayment screens for these procedures in the first quarter of fiscal year 1995. Typically, contractors turn screens on and off depending on their local circumstances. For example, one contractor began using a screen for echocardiography in March 1995, and another contractor implemented screens for chest X rays and eye examinations in January 1995 because these procedures were overused locally. By contrast, one contractor discontinued using an autoadjudicated screen for eye examinations in February 1995 because the diagnostic criteria for payment in the screen were considered too narrow. Nonetheless, these fluctuations in contractors’ use of screens do not reflect a coordinated approach to screening nationally overused procedures. Seven large Medicare contractors paid millions of dollars in claims for services that may have been unnecessary. These contractors did not use diagnostic medical criteria to screen claims for some of the six groups of procedures in our study. The claims paid for these services included a range of patient diagnoses that did not meet the criteria established by other contractors.
For example, a chest X ray was paid for a patient with a diagnosis of injuries to the hand and wrist, an echocardiogram was paid for a patient with a diagnosis of chronic conjunctivitis, and a therapeutic colonoscopy examination was paid for a patient with a mental health diagnosis of hysteria. If the seven contractors had used autoadjudicated diagnostic screens for the six groups of procedures, they would have denied between $38 million and $200 million in claims for services in 1993, as shown in table 3. The range of estimated payments for claims that would have been denied reflects differences among contractors’ criteria for identifying medically unnecessary services. Although different contractors had screens for the same procedure, they used different diagnoses to determine medical necessity. For example, a colonoscopy screen we used from one contractor paid claims with a diagnosis of gastritis, while another contractor’s screen denied such claims. Because of these differences among the contractors’ screens, we applied screens from two or three different contractors for each group of procedures, except for YAG laser surgery. Thus, our test results show a range of estimated payments for claims that would have been denied, depending on the medical necessity criteria used. The tables in appendix II list the estimated payments for claims that would have been denied by each of the tested screens. The seven contractors we reviewed were among the largest in terms of the number of claims processed, accounting for about 37 percent of all Medicare part B claims, and almost 38 percent of all the claims for the six groups of procedures in our study. To estimate the paid claims that would have been denied, we applied autoadjudicated screens developed by several contractors in our survey to a sample of the 1993 claims paid by the seven contractors. 
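Why applying different contractors' screens to the same claims sample yields a range of estimates can be illustrated with a small sketch. The claim amounts, diagnosis labels, and contractor policies below are hypothetical, modeled loosely on the gastritis example above.

```python
# Hypothetical claims sample: (procedure, patient diagnosis, amount paid).
claims = [
    ("colonoscopy", "colorectal_cancer", 400.0),
    ("colonoscopy", "gastritis", 400.0),
    ("colonoscopy", "hysteria", 400.0),
]

# Each contractor's screen accepts a different set of diagnoses.
screens = {
    "contractor_1": {"colorectal_cancer", "gastritis"},
    "contractor_2": {"colorectal_cancer"},
}

def denied_dollars(allowed, claims):
    """Total payments for claims whose diagnosis fails the screen's criteria."""
    return sum(amt for proc, dx, amt in claims if dx not in allowed)

estimates = {name: denied_dollars(allowed, claims)
             for name, allowed in screens.items()}
# contractor_1's screen denies only the hysteria claim ($400); contractor_2's
# stricter screen also denies the gastritis claim ($800 total).
```

The stricter the diagnostic criteria applied, the larger the estimate of payments that would have been denied, which is why the tested screens produce a range rather than a single figure.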
We only applied these screens if the tested contractor did not have a medical necessity diagnostic screen of its own in place in 1993 for the specific procedure tested. We used autoadjudicated screens because decisions to pay and deny claims based on medical necessity criteria are automated and, therefore, do not require additional medical judgment. Appendix I provides additional details on our methodology. When claims are denied by prepayment screens, the billing physician can (1) resubmit the claim with additional or corrected information or (2) appeal the denial. In either case, the contractors may ultimately pay claims that they have initially denied. Contractors’ claims processing systems generally do not track the claims denied by autoadjudicated prepayment screens to determine if they are resubmitted or appealed and then paid. However, based on a limited analysis of claims denied by contractors with autoadjudicated screens, we estimate that about 25 percent of the denied claims were ultimately paid. Assuming that the 25-percent rate is typical for autoadjudicated screens, about 75 percent of the payments in table 3, or between $29 million and $150 million, were for services that would be considered unnecessary using the criteria established by various contractors. Our estimates of payments for unnecessary services involve only six groups of procedures and cannot be statistically generalized beyond the 7 contractors included in our analysis. However, all 29 contractors—not just the 7 whose claims we reviewed—operate under FMR requirements designed to correct local rather than national overutilization problems. Therefore, the other 22 contractors also may lack screens for some of these procedures and, hence, may have paid millions of dollars in claims for services that should have been denied. 
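The net-payment estimate above follows from simple arithmetic, assuming the 25-percent rate of ultimately paid denials observed in the limited analysis applies uniformly across the tested screens:

```python
# Net estimate of payments for unnecessary services, per the report's figures.
gross_low, gross_high = 38e6, 200e6  # claims the screens would have denied (1993)
share_ultimately_paid = 0.25         # denials later resubmitted or appealed, then paid

net_low = gross_low * (1 - share_ultimately_paid)    # $28.5 million, about $29 million
net_high = gross_high * (1 - share_ultimately_paid)  # $150 million
```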
For widely overused procedures such as the six we tested, autoadjudicated screens can be a low-cost, efficient way to screen millions of claims against basic medical necessity criteria. Contractor officials said that these screens are much less expensive to implement than screens that suspend claims for manual review. Consequently, as funding for program safeguards declines, autoadjudicated screens can be used to maintain or even increase the number of claims reviewed. Moreover, for procedures where the medical review decisions can be automated, autoadjudicated screens can quickly identify and deny claims where the patient diagnosis is inconsistent with the procedure performed. In contrast, when claims examiners manually review claims, the risk exists that the medical necessity criteria may be misinterpreted and applied inconsistently. However, for certain procedures or medical policies, autoadjudicated screens may not be appropriate. For example, some medical policies are not easily defined with diagnostic codes and require manual review of documentation, such as medical records, to determine if a service is medically necessary. Denying claims using autoadjudicated or other prepayment screens can increase administrative costs if providers frequently resubmit denied claims or appeal the denials. Contractor officials said that these costs can be minimized if providers are educated to bill appropriately in the first place. By combining direct provider education with screens that enforce agreed-upon medical criteria, contractors can, over time, reduce the number of claims submitted for unnecessary services. HCFA does not have a national strategy for using prepayment screens to deny payments for unnecessary services among Medicare’s most highly overused procedures. HCFA does periodically alert contractors about some of these procedures at semiannual national contractor meetings and through occasional bulletins.
However, the agency does not identify widely overused procedures in a systematic manner. Moreover, the agency does not ensure that contractors implement prepayment screens or other corrective actions for these procedures. Medicare legislation does not preclude HCFA from requiring its contractors to screen claims for nationally overused procedures. However, HCFA has chosen to avoid the appearance of interfering in local medical practice. HCFA usually does not establish medical policies or tell the contractors which procedures warrant medical policies or prepayment screens. Instead, HCFA relies primarily on the contractors' local FMR efforts to identify and prevent Medicare payments for unnecessary services. This process, according to HCFA officials, allows contractors to take local medical practice into consideration when making medical necessity determinations. Although FMR can work well for overutilization problems that are truly local, the process is not designed to address nationwide overutilization of a medical procedure. The national average use of a procedure generally serves as a benchmark for identifying local overutilization problems, but the benchmark itself may already be inflated by millions of dollars in payments for unnecessary services. For example, in several states the use of echocardiograms greatly exceeded the 1992 national average of 101 services per 1,000 beneficiaries. Some of the contractors servicing those states have designed and implemented prepayment screens for this procedure. Meanwhile, other contractors targeted different procedures and allowed unconstrained use of echocardiograms. This focus on local overuse may be one of the factors that led to a national 12-percent increase in echocardiography use by 1994 and a new benchmark of 113 echocardiograms per 1,000 beneficiaries.
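The benchmark inflation cited above is simple to verify from the report's own figures: growth from 101 echocardiograms per 1,000 beneficiaries in 1992 to 113 per 1,000 in 1994 is roughly 12 percent:

```python
# Growth in the national echocardiography benchmark, using the
# rates per 1,000 beneficiaries cited in this report.
rate_1992 = 101
rate_1994 = 113

increase_pct = (rate_1994 - rate_1992) / rate_1992 * 100
print(f"{increase_pct:.1f}% increase")  # 11.9% increase, i.e., roughly 12 percent
```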
HCFA can take a more active role in controlling spending for widely overused procedures without intruding on the contractors’ responsibilities to establish their own prepayment screens. HCFA has an oversight responsibility to monitor and evaluate contractors’ screens and other efforts to prevent payments for unnecessary services. Yet HCFA does not know (1) which contractors have diagnostic screens for which medical procedures, (2) the medical necessity criteria used in these screens, or (3) the effectiveness of the screens in denying claims for unnecessary services. Furthermore, without this information HCFA cannot identify best practices and promote approaches such as autoadjudicated medical necessity screens where they can be a cost-effective alternative or complement to screens that flag claims for manual review. HCFA funded a central database on local medical policies, but this resource is not being effectively used. HCFA has encouraged the contractors to use the database to research other contractors’ medical policies before drafting their own. However, according to some contractors, the usefulness of the database is limited because it is not regularly updated. Moreover, HCFA has not taken the initiative to use the database to evaluate the contractors’ medical policies and identify those worthy of consideration by all contractors for controlling widely overused procedures. HCFA can also encourage greater use of medical necessity criteria for widely overused procedures by providing contractors with more model medical policies. About 2 years ago, HCFA established clinical workgroups composed of contractor medical directors to develop model medical policies that the contractors can adapt for local use. Specifically, contractors can work with their local medical community to review model policies, adapt them to reflect local medical practice, and implement them in prepayment screens. 
This has been an important step in promoting greater efficiency in developing local medical policies. However, since the workgroups' inception, only one model policy has been published. According to HCFA and contractor officials, progress has been limited in part because HCFA often takes longer to review draft model policies than its goal of 45 days. HCFA officials said that they are considering provisions for greater use of autoadjudicated screens in a new, national claims processing system. However, full implementation of that system is not scheduled until late 1999. In addition, it remains unclear what types of screens will be included in the system, how the contractors will choose which screens to modify, implement, and use, and how HCFA will monitor and evaluate the effectiveness of the screens. Meanwhile, HCFA continues to allow contractors to pay millions of dollars for services that may be unnecessary. While the rapid increase in Medicare costs threatens the long-term viability of the Medicare program, many Medicare part B contractors continue to routinely pay claims for widely overused services without first determining if the services are reasonable and necessary. Even when evidence indicates that problems with payments for specific medical procedures are widespread, HCFA has not ensured that contractors help correct national problems as well as local aberrancies. More specifically, HCFA policies do not encourage contractors to reduce a national norm already inflated by millions of dollars in payments for unnecessary services. Our tests of paid claims against criteria used by some of the contractors show that millions of dollars are being paid for services that do not meet basic medical necessity criteria. Although our tests were limited to seven contractors, our survey of 17 contractors indicates that, nationally, additional millions of Medicare dollars may have been paid for claims that should have been denied.
Prepayment screens are an important tool in preventing payments for unnecessary services. Funding for program safeguards, such as medical policies and prepayment screens, has been declining, however, while the volume of Medicare claims is increasing. In this environment, autoadjudicated diagnostic screens offer a low-cost way to ensure that all claims for selected procedures pass a basic medical necessity test before payment. Greater use of autoadjudicated screens could complement, rather than replace, the contractors’ efforts to use FMR and other types of prepayment screens to address local overutilization problems. To forestall widespread overuse of specific medical procedures, HCFA can help the contractors much more than it has. HCFA has begun to capitalize on the knowledge and skills of the contractor medical directors by using contractor workgroups to develop model medical policies. More model policies can help contractors control spending for nationally overused procedures by providing them with generally accepted criteria for identifying and denying claims for unnecessary services. However, HCFA needs to support the efforts of the workgroups and review model policies on a more timely basis so that these efforts can succeed. Also, to exercise stronger leadership by promoting best practices, HCFA needs to collect and evaluate information on the medical criteria and prepayment screens now being used by the contractors. 
To help prevent Medicare payments for unnecessary services, we recommend that the Secretary of HHS direct the Administrator of HCFA to systematically analyze national Medicare claims data and use analyses conducted by HHS’ OIG and Medicare contractors to identify medical procedures that are subject to overuse nationwide; gather information on all contractors’ local medical policies and prepayment screens for widely overused procedures, evaluate their cost and effectiveness, and disseminate information on model policies and effective prepayment screens to all the contractors; and hold the contractors accountable for implementing local policies, prepayment screens (including autoadjudicated screens), or other corrective actions to control payments for procedures that are highly overused nationwide. We provided HHS an opportunity to comment on our draft report, but it did not provide comments in time to be included in the final report. However, we did discuss the contents of this report with HCFA officials from the Bureau of Program Operations, including the Director of Medical Review and the Medical Officer. In general, they agreed with our findings. We obtained written comments on our draft report from several part B contractor medical directors who serve on the Contractor Medical Director Steering Committee. We selected this committee as a focal point for obtaining contractor comments because of its role as a liaison between the contractors and HCFA and the communication network for the contractor medical directors. Their comments support our conclusions (see app. III). In summary, they suggested the development of contractor workgroups to rapidly produce model medical policies for the six groups of procedures in our study. As agreed with your office, unless you release its contents earlier, we plan no further distribution of this report for 30 days. 
At that time, we will send copies to other congressional committees and members with an interest in this matter, the Secretary of Health and Human Services, and the Administrator of the Health Care Financing Administration. We will also make copies available to others upon request. This report was prepared by William Reis, Assistant Director; Teruni Rosengren; Stephen Licari; Michelle St. Pierre; and Vanessa Taylor under the direction of Jonathan Ratner, Associate Director. Please call me on (202) 512-7119 or Mr. Reis on (617) 565-7488 if you or your staff have any questions about this report. We reviewed HCFA’s statutory authority and responsibilities for administering the Medicare program and HCFA’s regulations and guidance to contractors on the development of local medical policies and the implementation of prepayment screens. We also discussed HCFA’s oversight of these functions with officials at its Bureau of Program Operations. Before selecting the six groups of medical procedures included in our study, we reviewed previous GAO and HHS OIG reports, HCFA guidance, and other studies on overused medical services. We also reviewed HCFA’s list of 200 medical procedure codes, ranked by total Medicare-allowed charges, and obtained Medicare contractors’ views on procedures that are likely to be overused. Based on the information gathered from these sources, we selected six groups of procedures generally considered widely overused. Because little centralized information exists on Medicare contractors’ use of prepayment screens or the medical necessity criteria included in those screens, we contacted 17 of the 29 contractors that process Medicare part B claims for physician services. We also visited three of the Medicare contractors and attended two of the semiannual contractor medical director conferences. 
In the course of these contacts, we decided to limit our collection of detailed information on medical necessity criteria and prepayment screens to 17 contractors who could provide us the information we needed. To estimate the Medicare payments for unnecessary services that could be prevented by broader use of prepayment screens, we tested autoadjudicated prepayment screens on claims paid by seven contractors in six states. The seven contractors in our analysis were among the largest contractors in terms of the number of claims processed in 1993, and they did not use a medical necessity prepayment screen for some of the six groups of procedures in our study. We based our tests on data from the Medicare Physician Supplier Component of the 1993 HCFA 5 Percent Sample Beneficiary Standard Analytic File. The Physician Supplier Component contains all Medicare part B claims for a random sample of beneficiaries. Our analysis is based on all paid claims in the database for the seven contractors and the six groups of procedures in our review. For each screen and tested contractor, we estimated the services and payments that would have been denied by simulating the screen: we used a computer algorithm to determine the number of services in the sample that would have been denied by the screen, weighted this number to reflect the universe of services, and multiplied this weighted number by the average Medicare allowance for the procedure at the contractor. The average Medicare-allowed amount for each procedure code at each contractor in 1993 was calculated based on data from HCFA's part B Extract Summary System. For five of the procedures, we applied two or three different autoadjudicated diagnostic screens currently used by other contractors in order to illustrate the impact of using different screens. By applying multiple screens, we were able to examine the range of services that would have been denied depending on the medical necessity criteria used.
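The three estimation steps described in this appendix (simulate the screen on the sample, weight the denial count to the universe of services, and multiply by the average allowance) can be sketched as follows. The claim records, diagnosis codes, and dollar amounts are hypothetical; the weighting factor of 20 follows from the 5-percent beneficiary sample:

```python
# Sketch of the denial-estimation method described above, using
# hypothetical sample claims and criteria. A 5-percent beneficiary
# sample is weighted by a factor of 20 to reflect the universe.

SAMPLE_WEIGHT = 20  # 1 / 0.05

# Hypothetical paid claims from the 5-percent sample for one
# procedure at one contractor: (claim_id, diagnosis_code).
sample_claims = [
    ("c1", "DX-100"), ("c2", "DX-999"), ("c3", "DX-101"),
    ("c4", "DX-999"), ("c5", "DX-100"),
]

# Hypothetical screen criteria and average allowed amount.
covered_diagnoses = {"DX-100", "DX-101"}
avg_allowed_amount = 250.00  # average Medicare allowance, in dollars

# Step 1: simulate the screen to count sample services it would deny.
denied_in_sample = sum(
    1 for _, dx in sample_claims if dx not in covered_diagnoses
)

# Step 2: weight the sample count to the universe of services.
estimated_denied_services = denied_in_sample * SAMPLE_WEIGHT

# Step 3: multiply by the average allowance to estimate payments.
estimated_denied_payments = estimated_denied_services * avg_allowed_amount

print(estimated_denied_services)   # 40
print(estimated_denied_payments)   # 10000.0
```

Running the same sample claims through two or three alternative screens, as the report did for five of the six procedures, simply means repeating these steps with a different `covered_diagnoses` set for each screen.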
For example, one of the colonoscopy screens paid claims with a diagnosis of gastritis, while another did not. For YAG laser surgery, however, we only applied the one screen that we had identified at the time we began our analysis. We only applied a particular screen to a contractor’s claims if that contractor did not have a medical necessity diagnostic screen in place in 1993 for the specific procedure being tested. We obtained our tested screens from several of the 17 contractors in our initial survey. Some of the screens we used were obtained from one of the seven contractors that we subsequently tested. Because our estimates were based on a sample of claims, our estimates are subject to sampling error. We calculated 95-percent confidence intervals for each of our estimated payments for services that would have been denied by the tested screens. This means the chances are about 19 out of 20 that the actual payments for services that would have been denied at each of the tested contractors would fall within the range covered by our estimate, plus or minus the sampling error. Sampling errors for our estimates are included in appendix II. Some of the payments that would have been denied by the tested screens would eventually be paid if they were resubmitted with corrected or additional information or successfully appealed. Because contractors’ claims processing systems generally do not track claims denied by autoadjudicated screens to determine how many are ultimately paid, we developed our own estimates. Using the 1993 HCFA 5 Percent Sample Beneficiary Standard Analytic File, we analyzed echocardiography claims processed by one contractor and duplex scan claims processed by another contractor. In each case, the contractors used autoadjudicated screens for these services. For each contractor, we used computer programs to identify claims for the services that were denied for medical necessity in a 3-month period in 1993. 
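A rough sketch of this denied-claim follow-up, using entirely hypothetical records and field names: identify the claims a screen denied for medical necessity, then look for a subsequently paid claim that matches on beneficiary, provider, service, and date of service:

```python
# Sketch of the denied-then-paid follow-up analysis. All records
# and field values below are hypothetical. A denied claim counts
# as "ultimately paid" if a paid claim matches it on beneficiary,
# provider, service (procedure code), and date of service.

def match_key(claim: dict) -> tuple:
    return (claim["beneficiary"], claim["provider"],
            claim["procedure"], claim["service_date"])

denied_claims = [
    {"beneficiary": "B1", "provider": "P1", "procedure": "ECHO",
     "service_date": "1993-04-01"},
    {"beneficiary": "B2", "provider": "P2", "procedure": "ECHO",
     "service_date": "1993-04-15"},
    {"beneficiary": "B3", "provider": "P1", "procedure": "ECHO",
     "service_date": "1993-05-02"},
    {"beneficiary": "B4", "provider": "P3", "procedure": "ECHO",
     "service_date": "1993-05-20"},
]

paid_claims = [
    # B1's claim was resubmitted with corrected information and paid.
    {"beneficiary": "B1", "provider": "P1", "procedure": "ECHO",
     "service_date": "1993-04-01"},
]

paid_keys = {match_key(c) for c in paid_claims}
later_paid = sum(1 for c in denied_claims if match_key(c) in paid_keys)

pct_ultimately_paid = later_paid / len(denied_claims) * 100
print(f"{pct_ultimately_paid:.0f}% of denied claims ultimately paid")  # 25%
```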
We then determined whether another claim was submitted and paid for the same service, provided on the same day, for the same beneficiary, and by the same provider. Our analysis showed that 23 to 25 percent of the echocardiography and duplex scan claims denied for medical necessity were subsequently paid. Based on these results we used 25 percent as our estimate of claims denied that would ultimately be paid. The actual percentage will likely vary by type of medical procedure and the diagnostic criteria used in the screen. However, because of the costs and inefficiencies associated with denying a large percentage of services and then later reprocessing and paying those services, we believe that contractors would not be likely to continue using a prepayment screen that inappropriately denies more than 25 percent of the services. The estimated number of and payments for denied services were derived from a 5-percent beneficiary sample of 1993 claims for each contractor. The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. 
Orders for 100 or more copies to be mailed to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20884-6015

Room 1100
700 4th St. NW (corner of 4th and G Sts. NW)
U.S. General Accounting Office
Washington, DC

Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
GAO provided information on Medicare payments for unnecessary medical services, focusing on the: (1) extent to which Medicare contractors employ medical necessity prepayment screens for procedures that are likely to be overused; (2) potential impact of autoadjudicated prepayment screens on Medicare spending; and (3) federal government's role in reducing overused medical procedures billed to Medicare. GAO found that: (1) Medicare spending for unnecessary medical services is widespread; (2) more than half of the 17 contractors surveyed do not use prepayment screens to check whether claimed services are necessary; (3) seven of the contractors paid between $29 million and $150 million for unnecessary medical services; (4) many Medicare claims are paid because contractors' criteria for identifying unnecessary medical services vary; and (5) the Health Care Financing Administration (HCFA) needs to take a more active role in promoting local medical policies and prepayment screens for overused medical procedures.